arXiv: http://arxiv.org/abs/2307.01278v2
Published: 2023-07-03 18:06:42
Title: Metallicity Dependence of Molecular Cloud Hierarchical Structure at Early Evolutionary Stages
Authors: Masato I. N. Kobayashi, Kazunari Iwasaki, Kengo Tomida, Tsuyoshi Inoue, Kazuyuki Omukai, Kazuki Tokuda
Primary category: astro-ph.GA
Categories: astro-ph.GA
CLOUD FORMATION IN THE LOW-METALLICITY ENVIRONMENT

Masato I. N. Kobayashi (Division of Science, National Astronomical Observatory of Japan, Osawa 2-21-1, Mitaka, Tokyo 181-8588, Japan; I. Physikalisches Institut, Universität zu Köln, Zülpicher Str. 77, D-50937 Köln, Germany), Kazunari Iwasaki (Center for Computational Astrophysics, National Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588, Japan), Kengo Tomida (Astronomical Institute, Graduate School of Science, Tohoku University, Aoba, Sendai, Miyagi 980-8578, Japan), Tsuyoshi Inoue (Department of Physics, Konan University, Okamoto 8-9-1, Kobe, Japan), Kazuyuki Omukai (Astronomical Institute, Graduate School of Science, Tohoku University, Aoba, Sendai 980-8578, Japan), and Kazuki Tokuda (Department of Earth and Planetary Sciences, Faculty of Sciences, Kyushu University, Nishi-ku, Fukuoka 819-0395, Japan; ALMA Project, National Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588, Japan; Department of Physics, Graduate School of Science, Osaka Metropolitan University, 1-1 Gakuen-cho, Naka-ku, Sakai, Osaka 599-8531, Japan)

The formation of molecular clouds out of Hi gas is the first step toward star formation, and its metallicity dependence plays a key role in determining star formation throughout cosmic history. Previous theoretical studies with detailed chemical networks calculate thermal equilibrium states and/or thermal evolution under a one-zone collapsing background. Molecular cloud formation in reality, however, involves supersonic flows, so resolving the cloud internal turbulence/density structure in three dimensions remains essential. We here perform magnetohydrodynamics simulations of 20 km s^-1 converging flows of Warm Neutral Medium (WNM) with a 1 μG mean magnetic field in the metallicity range from the solar value (1.0 Z_⊙) to the 0.2 Z_⊙ environment. The Cold Neutral Medium (CNM) clumps form faster at higher metallicity due to more efficient cooling. Meanwhile, their mass functions commonly follow dn/dm ∝ m^-1.7 at three cooling times regardless of the metallicity. Their total turbulence power also commonly shows the Kolmogorov spectrum with 80 percent of the power in the solenoidal mode, while the CNM volume alone indicates the transition towards Larson's law. These similarities, measured at the same time in units of the cooling time, suggest that molecular cloud formation directly from the WNM alone requires a longer physical time in a lower metallicity environment within the 1.0–0.2 Z_⊙ range. To explain the rapid formation of molecular clouds and subsequent massive star formation possibly within ≲ 10 Myr, as observed in the Large/Small Magellanic Clouds (LMC/SMC), the Hi gas must already contain CNM volume rather than being pure WNM.

§ INTRODUCTION Molecular clouds host star formation, and thus their formation and evolution are an essential step for star formation in galaxies and galaxy evolution <cit.>. The mass and volume of galactic disks are dominated by Hi gas <cit.>, but the spatial distribution of the star formation rate correlates more with molecular clouds (traced by CO lines) than with Hi gas <cit.>. Therefore, the formation efficiency of molecular clouds out of Hi gas and the resultant molecular cloud properties set the initial condition of galactic star formation. 
In particular, the metallicity controls the heating/cooling rate and the thermal state of the interstellar medium (ISM) <cit.>, and it also changes the formation/destruction rate of molecules and the resultant chemical state of the ISM (e.g., see <cit.> for the CO-to-H_2 conversion factor and <cit.> for the metallicity and interstellar radiation field conditions under which H_2 cooling and heating become important). Observations show that the metallicity increases with cosmic time through metal production by massive stars <cit.>. The metallicity also shows a galactocentric gradient within individual galaxies <cit.>. Therefore, it is crucial to investigate the metallicity dependence of ISM evolution below the solar value (Z_⊙) for the understanding of galactic star formation and the cosmic star formation history. In 1.0 Z_⊙ environments, theoretical studies show that the thermal instability in the Hi phase initiates the phase transition from the Warm Neutral Medium (WNM) to the Cold Neutral Medium (CNM) <cit.>. Observational studies of the emission and absorption of Hi, CII, and CO lines show the existence of such a multiphase ISM and now aim to constrain the geometrical structure of the WNM and the CNM: for example, studies with the Arecibo Telescope <cit.>, with the Hubble Space Telescope <cit.>, with the Giant Metrewave Radio Telescope <cit.>, and with Herschel, the Very Large Array, and the IRAM 30m telescope <cit.>. Such measurements are also performed toward lower metallicity environments such as the Large Magellanic Cloud (LMC) with the Australia Telescope Compact Array <cit.>. Previous one-zone theoretical studies have developed detailed chemical networks to comprehensively study the metallicity dependence of ISM evolution from present-day to primordial gas, such as the thermal evolution of collapsing protostellar clouds <cit.> and the thermal/chemical steady states <cit.>. These studies show that the thermal instability still plays an important role even in low-metallicity environments with ≳ 10^-2 Z_⊙, where the fine structure lines of [OI] (63.2 μm) and [CII] (157.7 μm) are the dominant coolants[See also <cit.> for the effect of H_2 cooling and H_2 dissociation by UV radiation, as well as <cit.> for the heating by the Cosmic Microwave Background radiation at high redshifts and <cit.> for the atomic line cooling in first star formation.]. The WNM and the CNM are in pressure equilibrium under a typical Galactic pressure and metallicity <cit.>. Supersonic flows are believed to be an important first step in initiating the thermal instability: they compress and destabilize the previously stable WNM into the thermally unstable neutral medium (UNM), which subsequently evolves to the CNM due to cooling. Such supersonic flows originate from the passage of galactic spiral arms and the expansion of supernova remnants and Hii regions <cit.>. Many authors investigated this condition by performing numerical simulations of WNM converging flows, initially in one dimension <cit.>, and later in multiple dimensions <cit.>. They show that the dynamically condensing motion of the UNM due to the thermal instability results in the formation of turbulent clumpy CNM structures, which are important progenitors of the filamentary structures observed in molecular clouds. 
This multiphase nature of the ISM seems ubiquitous also in a wide range of low-metallcity environments as suggested by large-scale simulations on a galactic/cosmological scale (, supersonic flows by supernovae, galaxy mergers , gas accretion from the host dark matter halo ). Therefore, multi-dimensional numerical studies on low-metallicity molecular clouds are still essential to understand metallicity dependence of their formation and subsequent star formation, where the thermal instability and turbulence operate simultaneously. <cit.> performed three-dimensional simulations of converging flows as well as a linear stability analysis. They confirmed the development of the thermal instability as long as the dominance of metal lines in the cooling process under modest FUV background with log(G_0)>-3 <cit.>. These previous approaches, however, employ a coarser spatial resolution in a lower metallicity aiming at resolving the thermal instability with the same number of numerical cells between different metallicities, because the maximum growth scale of the thermal instability is larger at lower metallicities. The CNM clumps on sub-pc scales are not fully resolved yet and their properties and statistics in low-metallicity environments remain unclear. Recent observations with the Atacama Large Millimeter/submillimeter Array (ALMA) reveal the existence of filamentary structures whose width is ∼ 0.1 pc in the outer disk of the Milky Way and the Magellanic Clouds (, Izumi et al., in prep.), which is similar to the ones in the Solar neighborhood <cit.>. These structures are likely inherited from clumpy/filamentary structures in Hi phase <cit.>. Such high-resolution observations from radio to optical/near-infrared bands toward extragalactic sources will advance significantly in the upcoming years by ALMA, James Webb Space Telescope (JWST), the next generation VLA (ngVLA), the Five-hundred-meter Aperture Spherical radio Telescope (FAST), the Square Kilometre Array (SKA) . Therefore, understanding of sub-pc scale structures during the molecular cloud formation from Hi gas is critical to understand the possible universality of star formation process in different metallicity environments, , the common existence of filamentary molecular clouds as observations suggest. In this article, we perform magnetohydrodynamics simulations of WNM converging flows to investigate the metallicity dependence of the molecular cloud formation, especially focusing on the thermal instability development and the resultant CNM properties. We investigate three cases with the metallicities of 1.0, 0.5, 0.2 , which correspond to the typical values of the Milky Way, LMC, and Small Magellanic Cloud (SMC), respectively. By aiming at coherently resolving the turbulence/density structures comparable to the scale of molecular filaments/cores, we employ the 0.02 pc spatial resolution at all metallicities, which is enough to resolve the cooling length of the UNM evolving to the CNM <cit.>. The typical cooling length of the WNM and UNM is 1 pc (1) – 3 pc (0.2) and that of the CNM is 0.1 pc (1) – 0.3 pc (0.2) in our simulation. This spatial resolution motivated by recent observations is higher than previous studies <cit.>, which enables us to coherently compare the statistics of CNM structures and discuss their possible universality/diversity between different metallicities. The rest of this article is organized as follows. In Section <ref>, we explain our simulation setups, and show the main results in Section <ref>. 
In Section <ref>, we discuss the implications of our results for low-metallicity cloud formation. We summarize our results in Section <ref>, followed by future prospects. § METHOD §.§ Basic Equations and Setups To investigate the development of the thermal instability and the formation of the multiphase ISM from the WNM, we calculate supersonic WNM converging flows by solving the following basic equations: ∂ρ/∂ t + ∇_i (ρ v_i) = 0 , ∂ (ρ v_i)/∂ t + ∇_j (T_ij + ρ v_j v_i) = -ρ∇_iΦ , T_ij = ( P + B^2/8π)δ_ij - B_i B_j/4π , ∂ e/∂ t + ∇_i[ (eδ_ij + T_ij ) v_j] = ∇_i[κ(T)∇_i T] -ρ v_i∇_iΦ -ρℒ(T,Z) , ∂ B_i/∂ t + ∇_j(v_j B_i - v_i B_j) = 0 , ∇^2 Φ = 4π Gρ , where ρ is the mass density, v represents the velocity, P represents the thermal pressure, T without any subscript is the temperature, Φ is the gravitational potential, G is the gravitational constant, and ∇_i = ∂/∂ x_i, where x_i spans x, y, and z. δ_ij is the identity matrix. We calculate the total energy density, e, as e = P/(γ-1) + ρ v^2/2 + B^2/8π, where the ratio of specific heats is γ=5/3. We introduce the thermal conductivity, κ, as κ(T) = 2.5 × 10^3 T^0.5 erg cm^-1 s^-1 K^-1, which accounts for collisions between hydrogen atoms <cit.>. ℒ(T,Z) is the net cooling rate. We employ the functional form of ℒ(T, 1.0 Z_⊙) from <cit.> for the 1.0 Z_⊙ case. This functional form combines the results from <cit.> in T ≤ 14,577 K with those of <cit.> and <cit.> in T > 14,577 K, considering the cooling rates due to Lyα, CII, He, C, O, N, Ne, Si, Fe, and Mg lines along with the photoelectric heating. The shock heating and compression destabilize the WNM so that it joins the UNM (defined by (∂ (ℒ/T)/∂ T)_P <0; see, e.g., <cit.>), leading to the formation of CNM clumps. To investigate the lower metallicity cases in this article, we apply three modifications to ℒ(T, 1.0 Z_⊙) to prepare ℒ(T,Z). First, the cooling rate due to the metal lines is set to scale linearly with the metallicity, which is a good approximation in Z > 10^-4 Z_⊙ as long as the metal lines dominate the cooling <cit.>. Second, we set the photoelectric heating rate to scale linearly with the metallicity by assuming that the dust abundance is also scaled with the metallicity. We therefore use Γ_pe = 2.0 × 10^-26 ( Z / Z_⊙ ) erg s^-1. Third, we implement the X-ray and cosmic ray heating rates as Γ_X = 2.0 × 10^-27 erg s^-1 and Γ_CR = 8.0 × 10^-28 erg s^-1, respectively <cit.>. The X-ray and cosmic ray heating processes are subdominant in the metallicity range of this study; for example, X-rays (cosmic rays) contribute 10% (30%) of the total heating rate in the 0.2 Z_⊙ environment. Their relative importance decreases even further as the metallicity increases, because the photoelectric heating rate increases with metallicity. We show the detailed functional form of this revised ℒ(T,Z) in Appendix <ref>. Figure <ref> shows the thermal equilibrium state, i.e., ℒ(T,Z)=0. This shows that, in lower metallicity environments, the combination of the inefficient cooling and the metallicity-independent X-ray and cosmic ray heating allows the WNM phase to persist up to higher densities, in the range of 1–10 cm^-3. In each calculation, we assume a spatially uniform metallicity distribution as a representation of low-metallicity environments. We use the publicly available magnetohydrodynamics (MHD) simulation code Athena++ <cit.> to solve the basic equations, where we employ the HLLD MHD Riemann solver <cit.> and the constrained transport method to integrate the magnetic fields <cit.>. 
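To make the heating and cooling prescription above concrete, the following is a minimal Python sketch (our own illustration, not the simulation code; the function names and grid choices are ours) that evaluates the thermal equilibrium condition ℒ(T,Z)=0, i.e., Γ(Z) = n Λ(T,Z), using the metallicity scalings described above and the low-temperature cooling fit given in Appendix <ref>:

```python
import numpy as np

GAMMA_PE0 = 2.0e-26   # photoelectric heating at 1 Zsun [erg s^-1]
GAMMA_X   = 2.0e-27   # X-ray heating, metallicity independent [erg s^-1]
GAMMA_CR  = 8.0e-28   # cosmic-ray heating, metallicity independent [erg s^-1]

def heating(z):
    """Total heating rate per hydrogen, Gamma(Z) [erg s^-1]; z in solar units."""
    return GAMMA_PE0 * z + GAMMA_X + GAMMA_CR

def cooling(temp, z):
    """Cooling function Lambda(T, Z) [erg cm^3 s^-1] for T <= 14,577 K,
    i.e. the Lyman-alpha + metal-line fit quoted in the Appendix."""
    lyman = 1.0e7 * np.exp(-118400.0 / (temp + 1000.0))
    metal = 1.4e-2 * z * np.sqrt(temp) * np.exp(-92.0 / temp)
    return GAMMA_PE0 * (lyman + metal)

def equilibrium_curve(z, temps=np.logspace(1.5, 4, 200)):
    """Return (n, P/k_B) along the thermal equilibrium state Gamma = n Lambda."""
    n_eq = heating(z) / cooling(temps, z)      # [cm^-3]
    return n_eq, n_eq * temps                  # P/k_B in [K cm^-3]

# Example: thermally stable warm branch at 1.0 Zsun near T ~ 6400 K
print(heating(1.0) / cooling(6400.0, 1.0))     # ~0.5 cm^-3, close to n_0 = 0.57 cm^-3
```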
The self-gravity is calculated with the full multigrid method (Tomida et al., in prep.). Our simulation domain has the size of L_x,y,z=20,10,10 pc on each side in the Cartesian coordinate. We continuously inject supersonic WNM flows from the two x boundaries so that the two flows collide head-on. We employ the periodic boundary condition on the y and z boundaries. The collision forms a shock-compressed layer sandwiched by two shock fronts, in which the thermal instability converts the WNM to CNM. We employ V_ inflow = 20 as the flow velocity; this choice corresponds to a representation of the late phase of supernova remnant expansion or normal component of galactic spiral shocks. The initial velocity field is set as v_x (x) = V_ inflowtanh (-x/0.78 pc) and v_y,z=0. The initial WNM flow, and also the injected WNM flow, are thermally stable with the mean number density n_0=0.57. The corresponding pressure, temperature, and sound speed at each metallicity are listed on Table <ref>, where we use ρ_0 = n_0 μ_ M m_ p with the mean molecular weight μ_ M of 1.27 <cit.>. The ram pressure of the converging flow is P_ ram/ = 3.5 × 10^4 K. The WNM flows have density fluctuation following the Kolmogorov spectrum P_ρ(k)∝ k^-11/3 <cit.>. We impose a random phase in each k up to k/2π= 3.2 pc^-1, which corresponds to the 0.32 pc wavelength. The mean dispersion of the density fluctuation is chosen as √(⟨δ n_0^2 ⟩)/n_0 = 0.5. The injected flows have periodic distributions smoothly connected to the initial condition, so that we impose ρ (t,x=-10 pc,y,z) = ρ(t=0,x= 10 pc - V_ inflow t, y, z) and ρ (t,x=10 pc,y,z) = ρ(t=0,x=-10 pc + V_ inflow t, y, z) on the x boundaries where t represents the time. The interaction between this density inhomogeneity and shock fronts generate turbulence <cit.>. The magnetic field is initially threaded in the x direction with 1 μG strength. Previous studies show that successful molecular cloud formation occurs in such a configuration where the flow and the mean magnetic field are close to be parallel <cit.>. The typical mean field strength in Hi gas varies 1–10 μG in the Milky Way <cit.> and 0.5–5 μG in the Magellanic Clouds <cit.>. We, therefore, opt to choose 1 μG as representative strength in investigating the metallicity dependence. The detailed dependence on the field strength in low metallicity environments is left for future studies at this moment. We employ the uniform spatial resolution of 0.02 pc to resolve the typical cooling length of the UNM. This resolution is required to have the convergence in the CNM mass fraction after 1.0 t_ cool (where t_ cool is the cooling time defined as P/(γ-1)/ρℒ) <cit.>. We identify the shock front position by P>1.3P_0 to define the volume of the shock-compressed layer. Figure <ref> shows the three-dimensional view of the initial density field with the uniform 1 μG magnetic field. § RESULTS §.§ Expectations Figure <ref> shows that the CNM thermal state at n>10^2 is similar within the 1.0–0.2 range. In this metallicity range, the cooling rate is proportional to the metallicity <cit.>. These suggest that the t_ cool is longer in low-metallicity environments as ∝ Z^-1 but CNM properties may become similar between 1.0–0.2 at the same time measured in the unit of the cooling time. The typical t_ cool of the injected WNM is ∼ 1 Myr at 1, ∼ 2 Myr at 0.5 and ∼ 5 Myr at 0.2 (See Section <ref>). <cit.> investigate the converging flow at 1 and suggest that the shock-compressed layer achieves a quasi-steady state at ∼ 3 t_ cool. 
Therefore, to make a comparison between different metallicities both at the same physical time and at the same time measured in the unit of the cooling time, we integrate until 3 t_ cool in each metallicity. Table <ref> lists these parameters. §.§ The thermal states and magnetic field evolution Figure <ref> shows examples of the three-dimensional view of our simulation results. Panels (a), (b), and (c) compare the results at 3 Myr from the 1.0, 0.5, and 0.2 runs. Panels (d) and (e) compare the results at 3t_ cool from the 0.5 and 0.2 runs. These panels show that CNM clumpy/filamentary structures form slower in the lower metallicity environments. The development of such CNM structures similar to that in 1 environment requires similar time measured in the units of the cooling time, for example, at 3t_ cool. Compared at the same physical time of 3 Myr, the geometry of the shock-compressed layer in a lower metallicity environment is less disturbed (, Panel (c)), close to a plane-parallel configuration because the inefficient cooling keeps the layer more adiabatic. Figure <ref> compares the thermal states of the different metallicities at 3 Myr and at 3 t_ cool. This shows that, at 3 Myr, the shock-compressed layer is still dominated more by the shock-heated WNM/UNM in the lower metallicity environment. Their thermal pressure is still close to the flow ram pressure. Therefore, the temperature just starts to decrease in the shock-compressed layer of 0.2 at 3 Myr (0.6 t_ cool), so that the plane-parallel geometry of the shock-compressed layer seen in Figure <ref> is close to a simple one-dimensional shock compression (see also Section <ref> and Appendix <ref> for its impact on the turbulent velocity). The net magnetic flux in our simulation does not increase in time because the mean magnetic field is completely parallel to the WNM flow. Nevertheless as we see in Figure <ref>, the pre-shock density fluctuation induces a number of oblique shocks at the shock front, which locally fold the field lines. The magnetic fields in the shock-compressed layer are further twisted and stretched due to the turbulence, which introduces a larger scatter in the local field strength. We investigate the relation between the field strength and the number density at 3 t_ cool. Figure <ref> shows the phase histogram on the plane of the magnetic field strength and the number density in the shock-compressed layer. This figure shows that the field strength is initially amplified toward the shock-heated WNM volume through the shock compression almost following B∝ n^1 (as indicated with the translucent gray line). The strength further varies due to the turbulence at n< 10. Note that the initial mean magnetic field is completely parallel to the flow velocity and not all the volume experiences a perfect one-dimensional shock compression. Therefore, the most frequent field strength is slightly weaker than that the relation B∝ n^1 (see the upper panels of Figure <ref>). Accompanying the density enhancement toward n∼ 10^3 by the thermal instability, the field strength gradually increases but with a limited level. The black solid curve in Figure <ref> shows the volume-weighted average of the magnetic field strength as a function of the number density ⟨ B ⟩ (n), where B=| 𝐁|[Throughout this paper, ⟨·⟩ represents the volume-weighted average whereas ⟨·⟩_ρ represents the density-weighted average.]. This shows that the amplification roughly scales with ⟨ B ⟩∝ n^1/5 in the n>1 range. 
This slow evolution occurs because condensing motion by the thermal instability is confined along the magnetic field orientations. Similar results are previously obtained by previous numerical studies in the solar metallicity environment as well <cit.>. Our results show that this gradual amplification of the magnetic field occurs also in lower metallicity environments. Figure <ref> indicates that there is a maximum magnetic field strength attained. We can estimate this maximum strength by assuming the balance between the post-shock magnetic pressure with the ram pressure of the injected WNM as B_ eq^2/8π = ρ_0 V_ inflow^2. This gives the typical value as B_ eq = 11 μ G ( n_0/0.57 )^1/2( V_ inflow/20 ) . Although this is an extreme case where the magnetic energy dominates the shock-compressed layer, Equation <ref> should give a good approximation even when we consider local enhancement by the turbulence and the condensing motion by the thermal instability, because the inflow ram pressure determines the typical maximum pressure of the shock-compressed layer. B_ eq shown in Figure <ref> (the horizontal black dashed line) indeed outlines the typical maximum strength of the magnetic fields. Note that the field strength can increase beyond B_ eq at the densest volume where the self-gravity plays a role. This occurs in n≳ 2.6× 10^3 in our simulation setup. Such a critical density of n∼ 2.6× 10^3 can be estimated as the density at which the CNM plasma beta with B = B_ eq becomes the unity <cit.>. It is difficult to clearly confirm this amplification in our current simulations due to our limited volume and due to our diffuse WNM-only initial condition. Nevertheless, we expect that the field strength increases also in the low metallicity environment because the CNM thermal states are similar between 1.0–0.2 so are the critical densities at which the self-gravity starts to dominate. Previous simulations of a converging flow at 1.0 show ⟨ B ⟩∝ n^1/2 at n>10^3 either when they integrated much longer time to accumulate mass or when they started with two-phase atomic flows with their mean number density already n≳ 5 <cit.>. In conclusion of this Section <ref>, our results show that the development of CNM clumpy/filamentary structure and the magnetic field strength is similar between metallicities at the same time measured in the unit of the cooling time (, the evolution slows down linearly with the metallicity in terms of the physical time). The field strength is relatively constant with a few μG to 10 μG up to n<10^3 as ⟨ B ⟩∝ n^1/5, and possibly starts to amplify further by the self-gravity. Note that this field strength in n<10^3 is consistent with Zeeman measurements of low (column) density regions of molecular clouds in the Milky Way <cit.> and Faraday rotation measurements toward the ISM in the LMC and SMC <cit.>. §.§ The overall turbulent structure In <cit.>, we show that the coevolution of the turbulence and the thermal evolution by the thermal instability plays a significant role to determine the multiphase density structure and its density probability distribution function at the molecular cloud formation stage. <cit.>, however, neglect the magnetic fields and studied simple hydrodynamic converging flows. In this section, we will confirm those findings even in the current magnetohydrodynamics simulations and investigate how the turbulent structure differs/resembles between the cases with different metallicities. 
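As a quick numerical cross-check of the equipartition estimate B_eq discussed above, the following short sketch (our own, in cgs units) evaluates B_eq = √(8π ρ_0 V_inflow^2) for the fiducial inflow parameters n_0 = 0.57 cm^-3, μ_M = 1.27, and V_inflow = 20 km s^-1:

```python
import numpy as np

M_P = 1.6726e-24     # proton mass [g]
MU_M = 1.27          # mean molecular weight used in the text

def b_equipartition(n0_cm3, v_inflow_kms):
    """Field strength at which magnetic pressure balances the inflow ram
    pressure: B_eq^2 / (8 pi) = rho_0 V_inflow^2. Returns B in gauss."""
    rho0 = n0_cm3 * MU_M * M_P            # [g cm^-3]
    v = v_inflow_kms * 1.0e5              # [cm s^-1]
    return np.sqrt(8.0 * np.pi * rho0 * v**2)

print(b_equipartition(0.57, 20.0) * 1e6, "microgauss")   # ~11 microgauss
```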
Panels (a), (b), and (c) of Figure <ref> show the evolution of the velocity dispersion, the mean density, and the turbulent pressure, respectively, as a function of the time measured in units of the cooling time (i.e., t/t_cool). In the early stage at t<0.5 t_cool, the shock-compressed layer has a plane-parallel geometry in the low-metallicity environment (see Section <ref>). Due to the resultant efficient deceleration of the inflow and the inefficient cooling, the mass is dominated by the shock-heated WNM state with slow turbulence, with negligible volume/mass in the CNM. Therefore, the volume-weighted velocity dispersion √(⟨δ v^2⟩) is close to the density-weighted velocity dispersion √(⟨δ v^2⟩_ρ) of ∼ 2 km s^-1 in 0.2 Z_⊙ (Panel (a) of Figure <ref>; see also Appendix <ref>). At t≳ 0.5 t_cool in the lower metallicity environment, the phase transition from the shock-heated WNM to the CNM tends to occur at a pressure closer to the inflow ram pressure (see Panel (c) of Figure <ref>). The mean number density is accordingly higher in lower metallicity environments (Panel (b) of Figure <ref>). During this evolution, the turbulent pressure gradually grows until it balances the inflow ram pressure. As a result, the turbulent pressure is almost the same between the three metallicities at 3 t_cool (Panel (c) of Figure <ref>). The remaining difference is due to the more efficient compression in the lower metallicity environments, because the inflow hits the shock front at an almost 90 degree angle, which results in denser postshock WNM with slower velocity. However, this difference is limited to a factor of 2 (4) in the velocity dispersion (in the mean density, respectively). The value of √(⟨δ v^2⟩)∼ 6–9 km s^-1, close to the sound speed of the WNM, suggests that the turbulence on the molecular cloud scale is powered by the WNM super-Alfvénic turbulence not only in the Milky Way galaxy but also in the LMC and SMC[Note that the turbulence at n∼ 10^3 cm^-3 is typically super-Alfvénic. √(⟨δ v^2⟩) (√(⟨δ v^2⟩_ρ)) roughly traces the turbulence of the WNM (CNM) because the volume is dominated by the WNM whereas 50 percent of the mass is locked in the CNM <cit.>. Therefore, combined with the slow evolution of ⟨ B ⟩ with n, the turbulent Alfvénic Mach number typically ranges over ℳ_A≃ 1.6–9.9.]. To understand the turbulence structure, we perform a Fourier analysis of the turbulence using the Fastest Fourier Transform in the West library (FFTW 3.3; <cit.>), and decompose the turbulence into the solenoidal and compressive modes, which are defined respectively as ṽ_sol(𝐤) = ( k̂×ṽ) ×k̂ and ṽ_comp(𝐤) = ( k̂·ṽ) k̂. Here, k̂ represents a unit wave vector and ṽ represents the Fourier component of the velocity field. We denote k=|𝐤|. Figure <ref> shows the power spectrum at 3 t_cool. The solenoidal mode dominates the turbulence power on all scales. Table <ref> summarizes the total fraction of each turbulent mode at 3 t_cool. The solenoidal mode fraction amounts to 80–90 percent. In calculating this, we employ the powers between k/2π=0.1 and 2.56 pc^-1 to avoid numerical diffusion effects on small scales (we refer the reader to the caption of Figure <ref>)[We hereafter denote the spatial frequency k/2π as the inverse of the wavelength, so that the wave with k/2π=0.1 pc^-1 has a wavelength of 10 pc.]. Figure <ref> shows the time evolution of this mode fraction as a function of t/t_cool. 
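For reference, a minimal NumPy sketch of the solenoidal/compressive decomposition defined above (assuming a periodic velocity field on a uniform grid; the array names and normalization are ours, and the k-range cut used in the paper is not applied here):

```python
import numpy as np

def mode_fractions(vx, vy, vz):
    """Split a periodic velocity field into solenoidal and compressive parts in
    Fourier space, v_comp(k) = (k_hat . v~) k_hat and v_sol = v~ - v_comp,
    and return the fraction of total power in each mode."""
    shape = vx.shape
    # FFT of each component; subtract the mean so the k=0 bulk flow is excluded
    vk = np.stack([np.fft.fftn(v - v.mean()) for v in (vx, vy, vz)])
    # Wave vectors (uniform cell spacing assumed; only the direction k_hat matters)
    axes = [np.fft.fftfreq(n) for n in shape]
    k = np.stack(np.meshgrid(*axes, indexing="ij"))
    k2 = np.sum(k**2, axis=0)
    k2[0, 0, 0] = 1.0                          # avoid 0/0 at the zero-frequency bin
    v_comp = k * np.sum(k * vk, axis=0) / k2   # longitudinal (compressive) part
    v_sol = vk - v_comp                        # transverse (solenoidal) part
    p_sol = np.sum(np.abs(v_sol)**2)
    p_comp = np.sum(np.abs(v_comp)**2)
    return p_sol / (p_sol + p_comp), p_comp / (p_sol + p_comp)
```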
The solenoidal (compressive) mode fraction evolves quasi-steadily with ∼ 80 percent (20 percent, respectively) after 1 t_ cool. Such solenoidal-mode-dominated turbulence is a natural consequence of the accreting system followed with the thermal instability, as we discuss in our previous converging flow simulations at 1.0 <cit.>. At early stages, the shock fronts deform by the interaction with the inflow density inhomogeneity. This curved shock fronts introduce inhomogeneity in the postshock entropy distribution. Such entropy inhomogeneity generates the turbulent vorticities accounting for a small fraction of the solenoidal mode <cit.>. At later stages after 1.0 t_ cool, the interaction between the formed CNM clumps and the shock fronts significantly deforms the shock fronts. This deformation creates a number of oblique shocks, whose maximum size is comparable to the size of the shock-compressed layer. This induces strong shear motion into the shock-compressed layer, so that the solenoidal mode dominates the turbulence. Our results show that the solenoidal-mode dominated turbulence emerges also in the low metallicity environment. In conclusion of this Section <ref>, our results suggest that in the range of 1.0–0.2, molecular clouds forming in the shock-compressed layer have the turbulent pressure close to the inflow ram pressure. ≳ 80 percent of the turbulence power is in the solenoidal mode at all the metallicities. This indicates that even in a galactic-scale converging region, forming molecular clouds are always solenoidal-mode dominated. Therefore, a galactic-scale compressive motion is important to form molecular clouds but it does not immediately mean enhancement of star formation efficiency by enhancing compressive motion in molecular clouds. §.§ The properties/statistics of the CNM clumps We identify the CNM structures to further investigate their properties. In this section, we define the CNM as the volume with T<200 K and n>20. First, we perform the Friends-of-Friends algorithm to identify CNM clumps as groups of connected CNM cells. Each clump has more than 64 member cells to avoid numerical noise on small scales. Figure <ref> compares the size and mass functions of the identified CNM clumps at 3 t_ cool. We define the size l of each CNM clump as l=√(I_ max/M) where I_ max and M are the maximum eigenvalue of its inertia matrix and the mass of each clump. The size distribution peaks at ∼ 0.1 pc at all the metallicities[The thermal instability grows also on scales smaller than 0.1 pc, but the sharp cutoff on < 0.1 pc is originated from our criteria of ≥ 64 member cells to avoid any numerical noise on that scale.]. The mass functions follow the power-law distribution of dn/ dm ∝ m^-1.7 up to ∼ 10^2. Some CNM mass tends to stagnate in the central region of the shock-compressed layer due to the converging flow configuration (see Figure <ref> and Appendix <ref>). Large CNM clumps obtain more mass or coagulate with other CNM gas so that the most massive clumps deviate from the power-law distribution. The index -1.7 has often been reported in the 1.0 environment by previous converging WNM flow simulations (in two-dimension <cit.> and in three-dimension <cit.>). This power-law function is explained from the statistical growth of the thermal instability with a given density fluctuation spectrum. The functional form of dn/ dm ∝ m^(α-3)/3-2 is expected when the seed density fluctuation has a three-dimensional averaged spectral index of α <cit.>. 
When α=11/3, , the Kolmogorov fluctuation, this gives dn/ dm ∝ m^-1.7, which is indeed consistent with the numerical results of the previous authors in 1.0 environments. Our results suggest that, at the same t/t_ cool, the same CNM mass spectrum can appear also in the lower metallicity environments due to the thermal instability with the Kolmogorov turbulence background. Second, we investigate the turbulence structure of the CNM volume alone by measuring the two point correlation of the CNM velocity field. We measure the second-order velocity structure function S(r) = ⟨| v(𝐫+𝐱) - v(𝐱) |^2 ⟩ , where r=|𝐫| and x is the three-dimensional position of the CNM cells. We select 27 sub-volumes in the shock-compressed layer, where each sub-volume is a (3.3 pc)^3 cube and their volume-centered position is (x,y,z)=(0±3.3 pc, 0±3.3 pc, 0±3.3 pc). Figure <ref> shows √(S(r)) from each metallicity at 3 t_ cool. The CNM volume overall shows the transition towards the Larson-type scale-dependence as √(S(r))∝ r^1/2. This relation extends from small scales within individual CNM clumps to large scales beyond those clumps. This indicates that the commonly observed Larson's law in molecular clouds is inherited from the CNM phase and is ubiquitous also in low metallicity environment down to 0.2. Note that the dynamic range is still limited if we consider only the scales without the numerical diffusion (, the scales larger than the vertical dotted line). S(r) in this range is between Kolmogorov and Larson's relations. Further high-resolution simulations are required to finally conclude that CNM velocity structure function follows the Larson-type relation on all scales. Nevertheless, the strongest turbulence power within the CNM clumps is in eddies whose scale is comparable to the typical CNM clump size ∼ 0.1 pc. Our results, therefore, suggest that the internal velocity dispersion within the CNM clumps remains ≲ 1 km s^-1 while the clump-to-clump relative velocity is 3 – 5 km s^-1. Note that the amplitude of √(S(r)) at 0.2 is smaller by a factor of two than that at 1.0, as we see in √(⟨δ v^2 ⟩_ρ) (see Figure <ref>). This is consistent with the comparison of the CO line width between clouds in the LMC/SMC and the Milky Way <cit.>, except for those in extreme star-forming systems, such as 30 Dor R136 cluster <cit.>. Also note that, a variation at a given spatial displacement r is larger in the lower metallicity environment because the shock-compressed layer is thicker and thus the total number of cells in this analysis increases toward lower metallicity. In conclusion of this Section <ref>, our results show that CNM mass spectrum is dn/ dm ∝ m^-1.7 commonly in the 1.0–0.2 range with the Kolmogorov turbulence background. The second-order structure function of the CNM volume alone indicates the transition towards the Larson-type turbulence scale-dependence ubiquitously at all metallicities with its amplitude smaller by a factor of two at 0.2 compared with 1.0. Combining all the results in Section <ref>, Figure <ref> schematically summarizes the hierarchical thermal/turbulent structure of the multi-phase medium in this metallicity range. § IMPLICATIONS AND DISCUSSIONS Our results show that the physical properties of a shock-compressed layer scales with the metallicity in the 1.0–0.2 range. This suggests that the properties of subsequently-formed molecular clouds are likely similar between the Milky Way, LMC, and SMC if we somehow select and compare clouds at the same t/t_ cool. 
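As a reference for the two-point statistic used above, the following is a minimal sketch of a second-order velocity structure function estimator, S(r) = ⟨|v(x+r) − v(x)|^2⟩, based on random pair sampling (the sampling and binning choices are ours, not the exact procedure of this work):

```python
import numpy as np

def structure_function(pos, vel, r_bins, n_pairs=200_000, seed=0):
    """Monte-Carlo estimate of S(r) = <|v(x+r) - v(x)|^2> from random cell pairs.
    pos: (N, 3) positions [pc]; vel: (N, 3) velocities [km/s]; r_bins: bin edges [pc]."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(pos), n_pairs)
    j = rng.integers(0, len(pos), n_pairs)
    dr = np.linalg.norm(pos[i] - pos[j], axis=1)        # pair separations
    dv2 = np.sum((vel[i] - vel[j])**2, axis=1)          # squared velocity differences
    which = np.digitize(dr, r_bins) - 1
    s_r = np.full(len(r_bins) - 1, np.nan)
    for b in range(len(r_bins) - 1):
        in_bin = (which == b) & (dr > 0.0)
        if in_bin.any():
            s_r[b] = dv2[in_bin].mean()
    return s_r   # compare sqrt(S(r)) with the Larson-type r^(1/2) scaling
```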
§.§ Pre-existence of CNM structures in the WNM: CNM determines the formation of molecular clouds? On one hand in the 1.0 context, previous authors often investigate the cloud formation by supersonic flows in supernova remnants, galactic spirals <cit.>. For example, a single supernova remnant expands to 50 pc size in 0.3 Myr (∼ 0.3 t_ cool) and accumulates 10^4 Hi mass <cit.>. The compilation of multiple supernovae events is able to create even more massive molecular clouds <cit.>. On the other hand, in a low metallicity environment, the molecular cloud formation out of the WNM alone requires longer physical time. This is the case even in the configuration where the mean magnetic field is completely parallel to the WNM inflow as we have studied, which forms molecular clouds most efficiently compared with the magnetic fields inclined against the flow <cit.>. To estimate how this timescale has an impact on molecular cloud formation in a low metallicity environment, let us assume that all the CNM volume eventually evolves to the molecular gas. Based on our simulation results[The CNM mass in the shock-compressed layer at 15 Myr at 0.2 is 1.1×10^3 in our simulation.], the expected mass of the cloud, M_ cloud, is approximately M_ cloud≃ 1.1× 10^3 (n_0/0.57 ) (V_ inflow/20 ) (L/10 pc)^2 (t/3 t_ cool) ∝(n_0/0.57 )^2 (V_ inflow/20 ) (L/10 pc)^2 (Z/0.2) (t/15 Myr) . Here L^2 is the inflow cross section in our calculation domain and thus the first three factors in Equation <ref> come from the inflow mass flux. The exact dependence of t_ cool on n_0 and V_ inflow is still left for future studies, which is involved in the conversion from Equation <ref> to Equation <ref>. We here employ t_ cool∝ n_0^-1 Z^-1 as the fiducial dependence, which is consistent with our definition of t_ cool as calculated in Table <ref> (see Section <ref> and Equation <ref>). If we suppose the case that a single flow event with n_0=0.57 and V_ inflow=20 creates a typical maximum cloud of a few 10^5 in the LMC/SMC, Equation <ref> indicates that such a WNM flow should continue coherently on scales of L≃ 100 pc and 15 Myr. 15 Myr is one order of magnitude longer compared with the typical expansion timescale of a single supernova remnant. A superbubble rather than a single supernova event is more likely to keep such a coherent one-directional flow over 15 Myr timescale. However, prevalent existence of molecular clouds that is not associated with superbubbles in the LMC/SMC indicates that faster flows and/or pre-existence of CNM structure in the WNM is important for the cloud formation and the evolution in a lower metallicity environment <cit.>[Note that the CNM formation efficiency is higher in the lower metallicity environments as we discussed in Section <ref>. This variation is, however, limited compared with the difference of the cooling time between metallicities (, Z^-1 in Equation <ref>). In our simulations, the total mass of the CNM at 3t_ cool are 69 at 1, 174 at 0.5, and 694 at 0.2 respectively.]. One possibility is a fast Hi gas flow, which is observed as the tidal interaction between the LMC and the SMC <cit.>. Its velocity is as high as 50–100, comparable to the escape velocity of the LMC/SMC. We can straightforwardly expect the coherent continuation of such a flow over a few 10 Myr timescale because it travels on a 10 kpc scale between the LMC and the SMC. However, such a fast flow induces strong turbulence in molecular clouds, which can also destroy dense structures (, ). 
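As a sanity check of the mass budget behind the M_cloud estimate above, the following sketch (our own arithmetic) evaluates the total WNM mass delivered by the two colliding flows, 2 ρ_0 V_inflow L^2 t, which for n_0 = 0.57 cm^-3, V_inflow = 20 km s^-1, L = 10 pc, and t = 15 Myr comes out close to the ∼1.1×10^3 M_⊙ normalization quoted from the 0.2 Z_⊙ run:

```python
M_P, MU_M = 1.6726e-24, 1.27                      # proton mass [g], mean molecular weight
PC, MYR, MSUN = 3.086e18, 3.156e13, 1.989e33      # cgs conversion factors

def inflow_mass(n0_cm3=0.57, v_kms=20.0, l_pc=10.0, t_myr=15.0):
    """Total WNM mass delivered by the two colliding flows,
    M = 2 rho_0 V_inflow L^2 t, in solar masses."""
    rho0 = n0_cm3 * MU_M * M_P
    flux = rho0 * (v_kms * 1e5) * (l_pc * PC)**2  # mass flux per flow [g/s]
    return 2.0 * flux * (t_myr * MYR) / MSUN

print(f"{inflow_mass():.2e} Msun")                # ~1.1e3 Msun
```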
In addition, even for such efficient mass accumulation, the thermal evolution from the WNM to molecular clouds still requires the cooling. For example, albeit on L=100 pc scale with a coarser spatial resolution of 0.2 pc, <cit.> perform converging WNM flow simulations with V_ inflow=100 km s^-1 but without any CNM structure pre-existed in the WNM flow. In the 0.2 environment with the initial WNM density of n_0 = 0.75, they show that the molecular cloud formation takes 23 Myr. The cloud mass with n>10^4 is a few 10^5 at 23 Myr, which is consistent with Equation <ref>[In Equation <ref>, we count all the CNM mass with n>10^2. This therefore gives a typical maximum mass of formed molecular clouds.]. Therefore, even in the fast flow environment, the pre-existence of CNM structure in the WNM is important <cit.>. This leads to another question “how does the ISM form the CNM in the first place in low metallicity environments?” As seen in our simulations, low-mass CNM clumps form if the mean magnetic field is parallel to compressive events and this inflow can be as slow as V_ inflow=20. Therefore, some previous generations of a few 10 flow along with the mean magnetic field (such as a single supernova) are responsible to introduce CNM clumps in the WNM. Such CNM pre-existence and subsequent compression due to shocks presumably determine the formation site, the mass, and the structure of molecular clouds. Indeed, based on the recent ALMA observations toward N159E/W regions in the LMC, <cit.> indicate that a fast HI gas flow with a multiple-scale substructure potentially explains the formation of filamentary molecular clouds on various size scales. Such a hierarchical structure may also determines the fractal nature of the young stellar structures in the LMC, suggested by the spatial clustering analysis of star clusters <cit.>. In addition, the similarity between the power-law spectrum of molecular cloud mass function (, dn/ dm ∝ m^-α where α = 1.7–1.9 in the LMC: <cit.>) and that of CNM clump mass function in our simulation also indicates that the molecular cloud formation is pre-determined by the CNM structure (Figure <ref>). This implication needs further studies to confirm in the future. §.§ H_2 cooling The H_2 is an important coolant in the context of the primordial gas cooling. Chemical paths to the H_2 formation exist even in the primordial gas, especially that via the gas-phase interaction between H and H^-, where the electron fraction impacts the abundance of H^-. <cit.> calculate the electron fraction in the postshock layer of supersonic ISM flows and <cit.> show that, with V_ inflow=30, H_2 cooling is dominant over the metal line cooling in the postshock region even in 0.1 without any UV background that dissociates the H_2. This is not the case in our calculation where we employ G_0 = 1 for all the metallicity by assuming ongoing active star formation as in the LMC/SMC. The metal line cooling is always dominant in this setup. The impact of the H_2 cooling on thermal instability is limited typically to G_0 ≤ 10^-3 environments (<cit.>; see also <cit.>). It is left for future studies to investigate the dynamical formation and evolution of the multi-phase ISM driven by supersonic flows in lower metallicity environment with UV radiation field of G_0 <1. §.§ Different assumptions: Dust-to-Gas ratio, electron fraction, and radiation field In this section, we introduce several previous studies to list how the adopted assumptions impact our results. 
Firstly, many previous studies also assume that the dust-to-gas ratio scales linearly with the gas-phase metallicity in their fiducial models. Meanwhile, based on extragalactic observational indications <cit.>, there are also studies testing a broken power-law dependence: for example, 𝒟∝ Z^2 in <cit.> and 𝒟∝ Z^3 in <cit.>, where 𝒟 is the dust-to-gas ratio. Such a superlinear decrease of the dust abundance at ≲ 0.2 Z_⊙ reduces the photoelectric heating rate relative to the CII cooling rate. The thermally stable states of the UNM and the CNM are cooler in this superlinear case[See also Figure 2 of <cit.> for the transition from the photoelectric-heating-dominated regime to the cosmic-ray-heating-dominated regime.]. However, in the range of ≥ 0.2 Z_⊙, they show that this difference has a limited impact on the CNM thermal equilibrium state (see also Figure 5 of <cit.>). In addition, this superlinear relation of the dust-to-gas ratio does not significantly impact the overall dynamics in our simulations because CII cooling of the shock-heated WNM right after the shock compression sets the typical evolutionary timescale (as discussed in Appendix <ref>). Secondly, the gas-phase electron fraction changes the photoelectric heating efficiency. The photoelectric heating rate by FUV radiation can be estimated as Γ_phot(1.0 Z_⊙) = 1.3 × 10^-24 ϵ G_0 erg s^-1, where ϵ is the photoelectric heating efficiency given by ϵ = 4.9×10^-2/[1+4.0×10^-3(G_0 T^1/2/n_e ϕ_PAH)^0.73] + 3.7×10^-2(T/10^4 K)^0.7/[1+2.0×10^-4(G_0 T^1/2/n_e ϕ_PAH)] (see Equation (43) in <cit.>). Here n_e is the electron number density, so that the electron fraction can be defined as x_e = n_e/n_H, and the polycyclic aromatic hydrocarbon (PAH) reaction rate is ϕ_PAH=0.5 <cit.>. The G_0 T^1/2/n_e dependence in the denominators describes the relative importance of the ionization against the recombination on the dust surface[See also Equations (19) and (20) in <cit.> and Equations (32)–(34) in <cit.> for the exact form.]. Our fiducial value, Γ_phot(1.0 Z_⊙)=2.0 × 10^-26 erg s^-1 in Equation <ref>, corresponds to the typical condition of the thermally stable WNM at 1.0 Z_⊙ with x_e ∼ 0.02 determined by cosmic ray ionization (e.g., Equation (12) of <cit.>). The photoelectric heating efficiency may deviate from this fiducial value right after the shock compression, where collisional ionization increases the electron fraction. <cit.> performed a high-resolution simulation (albeit one-dimensional) of shock propagation into the WNM. The postshock shock-heated WNM volume reaches n∼5 cm^-3, x_e∼ 0.1, and T ∼ 6400 K, so that Equation <ref> predicts Γ_phot(1.0 Z_⊙)=6 × 10^-26 erg s^-1. The factor of three difference from our fiducial value can expand the parameter space for the thermally stable WNM. However, note that a factor of a few uncertainty also exists in the assumed PAH size distribution and shape <cit.>. Also note that such a high-x_e volume is limited to a ∼10^-1 pc scale from the shock front by recombination (e.g., Figure 3 of <cit.>). Therefore, we opt to use the fiducial constant photoelectric heating efficiency (and its linear dependence on the metallicity) for our comparison between metallicities in this article. Lastly, the interstellar radiation field varies across space. It is ideal to numerically investigate the 10 pc-scale local radiation field, e.g., with a zoom-in approach consistently coupled with galactic-scale star formation <cit.>; this is left for future studies. 
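To make the x_e dependence above explicit, here is a small sketch of the quoted heating efficiency formula (as transcribed in the text; the example input values are illustrative placeholders, not simulation outputs):

```python
import numpy as np

def pe_efficiency(temp, n_e, g0=1.0, phi_pah=0.5):
    """Photoelectric heating efficiency epsilon(G0, T, n_e) as quoted in the text."""
    x = g0 * np.sqrt(temp) / (n_e * phi_pah)
    return (4.9e-2 / (1.0 + 4.0e-3 * x**0.73)
            + 3.7e-2 * (temp / 1.0e4)**0.7 / (1.0 + 2.0e-4 * x))

def gamma_pe(temp, n_e, g0=1.0, z=1.0, phi_pah=0.5):
    """Photoelectric heating rate per hydrogen [erg s^-1], scaled linearly
    with metallicity as assumed in this work."""
    return 1.3e-24 * pe_efficiency(temp, n_e, g0, phi_pah) * g0 * z

# Thermally stable WNM-like placeholder values (x_e ~ 0.02, n ~ 0.6 cm^-3)
print(gamma_pe(temp=6400.0, n_e=0.02 * 0.6))   # ~2e-26 erg/s, comparable to the fiducial rate
```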
§.§ Applicability of converging flow results to the ISM in reality Our results indicate that the solenoidal-mode-dominated turbulence is generally realized in molecular clouds undergoing supersonic compressions in the metallicity range of 1.0–0.2 . The density inhomogeneity (and/or velocity fluctuation) in the converging flows can be larger (can exist) in reality, especially if the flows are already multiphase than the one-phase WNM <cit.>. Such a strong density/velocity inhomogeneity leads to stronger shock deformation and induces stronger shear motion into the shock-compressed layer. The stronger inhomogeneity in the inflow density/velocity induces stronger turbulence <cit.>, but the turbulent speed is expected to be on the order of the WNM sound speed (, Appendix <ref>). Expansion of multiple supernova remnants also induce cloud compressions as often observed in galactic scale numerical simulations <cit.> and may explain the bubble features in nearby galaxies observed in JWST <cit.>. Each compression event occurs at diverse angles at various timing and the generation of solenoidal mode turbulence locally occurs due to the deformed shock fronts. It is likely that the compressive mode of turbulence does not become dominant unless multiple compressions from various angles occurs at a same timing with relatively uniform density or unless the cloud itself becomes massive enough to self-gravitationally collapse. Previous analytic studies also show that the solenoidal mode of turbulence can grow faster than the compressive mode even in gravitationally contracting background <cit.>. §.§ Implications to the cosmic star formation history Our simulations show that physical properties of molecular clouds resemble if compared at the same time in the unit of the cooling time. How about those in lower metallicity ranges, which are important to understand the overall cosmic star formation history beyond the Magellanic Clouds? The metallicity dependence that we have shown comes primarily from the line coolings of [OI] (63.2 μm) and [CII] (157.7 μm). This metallicity dependence in the cooling rate is expected to continue down to Z∼ 10^-3, until which [OI] and [CII] are the dominant coolant to induce thermal instability. In contrast, the main heating process changes from the photo-electric heating to the X-ray and cosmic ray heating at ∼0.1. The heating rate in <0.1 becomes independent to the metallicity if we employ the same constant X-ray and cosmic ray heating rate. Under such conditions in <0.1, <cit.> show that the density of the thermally stable CNM scales roughly as n_ CNM∝ Z^-1 (see their Equation 18). This results in a roughly constant cooling time in <0.1. In an even lower metallicity range of Z≲ 10^-3, the UV field strength impacts the growth of thermal instability (due to H_2 dissociation) and thus the comprehensive investigations remain by changing also the UV field strength for future studies. Overall, we presume that, in the range of ≥0.1, physical properties of molecular clouds is similar between metallicities at the same time measured in the unit of the cooling time, and we need further investigation for in the range of <0.1. Meanwhile, our results also indicate that the difference in the cloud formation condition (, different n_0, V_ inflow ) is important to form molecular clouds with different properties, which are required to comprehensively understand the cosmic star formation history. 
As an example, recent ALMA observations started to spatially resolve galactic disk of star-forming galaxies at the so-called Cosmic Noon at the redshifts of 2–4. They show the existence of molecular clouds in the gravitationally unstable galactic disks and the clouds tend to have higher column density / higher star formation rate density (, Σ_ gas > 100 pc^-2 and Σ_ SFR>1 yr^-1 kpc^-2) than those in the Milky Way <cit.>. Such properties resemble those observed in nearby luminous infrared galaxies <cit.>, who possibly experience galaxy mergers. These indicate a possibility that some drastic mechanisms, such as galaxy mergers with high velocity, form high column density molecular clouds that host active star formation at the Cosmic Noon. In addition, even starting with the same cloud formation condition to form a similar molecular cloud across the metallicity, the variation of the stellar initial mass function with the metallicity, if any, may arise from subsequent fragmentation in collapse of the clouds and the circum-stellar disks. We here remark that some previous authors investigated this phenomena with numerical simulations using the same initial cloud properties for all metallicities, such as . The above discussions are just speculations at this moment. Further simulations with various initial density, inflow velocity , are needed and left for future studies. § SUMMARY AND PROSPECTS To understand metallicity-dependence of molecular cloud formation at its initial stages, we perform and compare the MHD simulations of the WNM supersonic converging flows with 20 on 10 pc scales in 1.0, 0.5, and 0.2 environments. We impose the mean magnetic filed strength of 1 μG as representative strength in the metallicity range from the Milky Way to the Magellanic Clouds. The field orientation is parallel to the supersonic flows, which is a promising configuration for efficient molecular cloud formation. The flow forms a shock-compressed layer sandwiched by two shock fronts, within which thermal instability occurs to develop the multi-phase ISM. We employ the 0.02 pc spatial resolution to resolve the thermal instability and turbulent structures. We summarize our findings as follows: * The development of the CNM structure in the shock-heated WNM requires longer time in the lower metallicity environment where the typical t_ cool is almost inversely proportional to the metallicity. The CNM thermal states at different metallicities are similar if compared at the same time measured in the unit of the cooling time, instead of the same physical time. * The typical field strength of magnetic fields evolves gradually with ⟨ B ⟩∝ n^1/5 up to B∼ 11 μG at n∼ 10^3. This is consistent with Zeeman measurements of low (column) density regions of molecular clouds in the Milky Way, and Faraday rotation measurements toward the ISM in the LMC and SMC. * The postshock turbulent pressure balances against the inflow ram pressure (Panel (c) of Figure <ref>). The turbulent velocity is slower in a lower metallicity environment (albeit the difference is within a factor of two), which is consistent with the line width of molecular clouds observed in the LMC/SMC. * The velocity power spectrum follows the Kolmogorov's law if averaged over the entire volume of the shock-compressed layer, while two-point velocity correlation of the CNM volume alone exhibits the transition towards the Larson's law. 
* At all metallicities after the 1.0 t_ cool, the solenoidal (compressive) mode of the turbulence in the shock-compressed layer accounts for >80 percent (<20 percent, respectively) of the total turbulence power. This indicates that, even in a galactic-scale converging region, forming molecular clouds are always solenoidal-mode dominated. Therefore, a galactic-scale compressive motion is important for molecular cloud formation, but it does not immediately mean an enhancement of star formation efficiency by enhancing compressive motion in molecular clouds. * The CNM clump mass function has a power-law distribution as dn/ dm ∝ m^-1.7, which can be explained by the thermal instability growth under the Kolmogorov turbulence background. * These results suggest the common existence of hierarchical thermal and turbulent structure in molecular cloud precursors in the 1.0–0.2 range. The WNM/UNM components occupy most of the volume with strong turbulence of 4–10. Meanwhile, the CNM component has the inter-clump velocity of 3–5 as well as the internal velocity dispersion of ≲ 1 within individual clumps (Figure <ref>). * We expect that this similarity in molecular cloud properties across the metallicity at the same t/t_ cool continues to hold down to Z∼ 10^-3, because the dominant coolant are [OI] (63.2 μm) and [CII] (157.7 μm) until this metallicity. Meanwhile, this indicates that some different cloud formation condition (, different n_0, V_ inflow due to galaxy mergers ) is required to form molecular clouds with higher columnd density / higher star formation rate density, as observed in luminous infrared galaxies and star-forming galaxies at the Cosmic Noon. * Our results show that, in the lower metallicity environment, the longer physical time is required for the development of CNM structures out of the pure WNM. This indicates that, at the formation stage of molecular clouds out of the WNM in low-metallicity environments, the pre-existence of CNM structure in the WNM volume controls the formation site and the mass of molecular clouds. Our calculation is still limited to the early phase of molecular cloud formation. Investigating further compression in low metallicity environments is left for future studies. In a 1.0 environment, it is known that the efficiency of the molecular cloud formation depends on the inclination of the mean magnetic field against the inflow <cit.>. We are planning to investigate similar dependence of the magnetic field geometry in our forthcoming article. Collisions between flows with different metallicities is ubiquitous in the context of galaxy mergers, including the LMC-SMC tidal interaction. It is interesting to investigate the spatial and temporal variation of the metallicity in the shock-compressed layer created by WNM flows with different metallicities, but is also left for future studies. Studying cloud formation in extremely low metallicity environments down to 10^-4 is also important in revealing the initial condition of star formation in young galaxies. For example, formation of close binaries of massive stars in such environments is interesting as an origin of the massive binary black holes whose coalescence events are observed by gravitational waves <cit.>. Previous numerical studies in this context start with cosmological initial conditions <cit.>, or with an idealized model of a star forming core without its formation process <cit.>, or focus on the kpc-scale thermal instability without resolving core scales ∼ 0.1 pc <cit.>. 
We are planning to reveal the formation and the internal structure of such star-forming clouds as an extension of our studies in this article. The dependence on radiation field strength and cosmic-ray/X-ray intensities are other important parameters in such conditions <cit.>, but are also left for future studies (see also Section <ref>). § ACKNOWLEDGMENTS We appreciate the reviewer for the careful reading and comments, which improved our draft. Numerical computations were carried out on Cray XC50 at Center for Computational Astrophysics, National Astronomical Observatory of Japan. MINK (JP18J00508, JP20H04739, JP22K14080), KI (JP19K03929, JP19H01938, JP21H00056), K.Tomida (JP16H05998, JP16K13786, JP17KK0091, JP21H04487), TI (JP18H05436, JP20H01944), KO (JP17H01102, JP17H06360, JP22H00149), and K.Tokuda (JP21H00049, JP21K13962) are supported by Grants-in-Aid from the Ministry of Education, Culture, Sports, Science, and Technology of Japan. We appreciate Atsushi J. Nishizawa and Chiaki Hikage for helping our Fourier analysis, and appreciate Kohei Kurahara for discussions on Faraday rotation measurements toward the Magellanic Clouds. We are grateful to Tomoaki Matsumoto, Hajime Susa, Sho Higashi, Gen Chiaki, Hajime Fukushima, Shu-ichiro Inutsuka, Shinsuke Takasao, Tetsuo Hasegawa, Kengo Tachihara, and Jeong-Gyu Kim for fruitful comments. We are grateful to Hiroki Nakatsugawa who contributed the early phase of this study through his master thesis work. § HEATING AND COOLING RATE We calculate the metallicity-dependent net cooling rate as ρ(T,Z) = -(ρ/) Γ(Z) + (ρ/)^2 Λ(T,Z) , where = μ_ M m_ p. The heating part consists of the photoelectric heating, X-ray heating, and cosmic-ray heating as Γ(Z) = Γ_ phot(Z) + Γ_ X + Γ_ CR , Γ_ phot(Z) = 2.0 × 10^-26( Z/1.0 ) erg s^-1 , Γ_ X = 2.0 × 10^-27 erg s^-1 , Γ_ CR = 8.0 × 10^-28 erg s^-1 . Here, the photoelectric heating rate is proportional to the metallicity because it is dominated by dust grains <cit.>. On the other hand, X-ray and cosmic ray heating rates do not depend on the metallicity because they mostly heat hydrogen directly. The cooling part consists of the cooling due to Lyα, C_ II, He, C, O, N, Ne, Si, Fe, and Mg lines (see also ). The functional form of the total cooling rate normalized in the unit of Γ_ phot(1.0 ) is Λ(T,Z)/Γ_ phot(1.0 ) = 10^7 exp( -118400/T+1000) + 1.4× 10^-2(Z/1.0 Z_⊙) √(T)exp( -92/T) cm^3 (for T≤14,577 K) , 5 × 10^3 + 1.4× 10^-2(Z/1.0 Z_⊙) √(T)exp( -92/T) cm^3 (for 14,577 K< T ≤ 19,449 K) , (Z/1.0 Z_⊙) [ 3.75 × 10^4 ( 1 -tanh(T-2×10^5/2×10^5) ) exp(-5×10^4/T) +10^3 exp( -5×10^4/T) ] cm^3 (for T > 19,449 K) . Here, the cooling rate by Lyα and He does not scale with the metallicity, while the other coolings due to the metals' lines are proportional to the metallicity. § COOLING TIME §.§ The typical cooling time The cooling time, P/(γ-1)/ρℒ, varies locally with space and time, depending on the local density/temperature variation. Nevertheless, the typical cooling time of this converging flow system can be estimated based on the typical state of the shock-heated WNM as follows. To determine the representative thermal state, let us start with a simple estimation in a perfect one-dimensional shock compression. Once the inflow WNM passes the shock front, the density in the downstream increases to n≃ 4 n_0 = 2.3. Meanwhile, the temperature also increase to T≃ P_ ram/k_ B/(4n_0)=1.5×10^4 K, but it rapidly decreases to T≃ T_0 because of the efficient coolings by Lyα and heavy metals' lines. 
For example, the temperature decreases on the timescale of 2.0×10^-4 Myr, 4.1×10^-4 Myr, and 1.0×10^-3 Myr at 1.0, 0.5, and 0.2 Z_⊙ based on the T > 14,577 K regime of Equation <ref> with n = 4n_0 and T = P_ram/k_B/(4n_0). These timescales are shorter than the typical sound crossing time over a single numerical cell of 0.02 pc size with T = P_ram/k_B/(4n_0), which is 1.2×10^-3 Myr. Therefore, we need to perform simulations with a higher spatial resolution to confirm whether this shock-heated WNM evolves isochorically or isobarically during the return to T ≃ T_0. In the following, let us assume that this rapid cooling leads to rather isochoric evolution and employ n = 4n_0 and T = T_0 as the representative initial density and temperature to estimate the typical t_cool (see also the discussions in Section <ref>). The corresponding pressure, 4n_0 k_B T_0, is smaller than the ram pressure roughly by a factor of two (P/k_B ≃ P_ram/2k_B = 1.75 × 10^4 K cm^-3). This is consistent with most of the volume in our simulations as shown in Figure <ref>. Such a thermal pressure smaller than the ram pressure is achieved also because the turbulent pressure almost balances with the inflow ram pressure as shown in Figure <ref>. Starting with n = 4n_0 and T = T_0, the net cooling rate during the subsequent thermal evolution is mostly determined by the CII cooling given by the first equation of Equation <ref>: ρℒ ≃ n^2 Λ(T,Z) = n^2 × 2.8 × 10^-28 (Z/Z_⊙) √(T) exp(-92/T) erg s^-1 cm^-3 . Therefore, the cooling time can be estimated in general as t_cool = P/(γ-1)/n^2Λ(T,Z) ≃ 2.99 Myr ( P/k_B/10^4 K cm^-3 ) ( n/1 cm^-3 )^-2 ( Z/Z_⊙ )^-1 ( T/6400 K )^-1/2 exp( (92 K/T)( 1-T/6400 K) ) . With n = 4n_0, T = T_0, and P/k_B = P_ram/2k_B, the typical cooling time is t_cool = 2.99 Myr ( P_ram/2k_B/10^4 K cm^-3 ) ( 4n_0/1 cm^-3 )^-2 ( Z/Z_⊙ )^-1 ( T_0/6400 K )^-1/2 exp( (92 K/T_0)( 1-T_0/6400 K) ) ≃ 1.01 Myr (Z/Z_⊙)^-1 . Equation <ref> gives the typical value of t_cool as 1.01 Myr at 1.0 Z_⊙, 2.03 Myr at 0.5 Z_⊙, and 5.11 Myr at 0.2 Z_⊙ as summarized in Table <ref>. Equation <ref> gives a rough metallicity dependence. §.§ The dependence of the typical cooling time on the initial condition In the current article, we focus on the metallicity dependence starting with the fixed initial condition. Therefore, it is left for future studies to investigate the exact dependence of t_cool on the initial conditions other than the metallicity. This requires parameter surveys by changing the mean density, the inflow velocity, the magnetic field strength, the inclination of the mean magnetic fields, the UV background, and so on. The resultant impacts of all these changes are coupled with each other because they modify the ram pressure, the turbulent pressure, and the resultant shock-heated WNM state. Nevertheless, as a first step, let us list several possible dependences on the two parameters n_0 and V_inflow. For simplicity, we here assume that the metallicity dependence comes only from the cooling/heating rates and is always independent of the choice of n_0 and V_inflow. The other conditions are the same as our simulation setup in this article (i.e., the WNM inflow is parallel to the orientation of the mean magnetic field with B = 1 μG, the injected WNM is in the thermally stable state, etc.). These simplistic assumptions have to be investigated by further simulations, which is left for future studies.
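Before listing possible scalings, we note that the typical cooling time of Equation <ref> can be evaluated directly. The following minimal Python sketch is an illustration only, not part of our simulation code; it assumes γ = 5/3, n_0 ≃ 0.57 cm^-3 inferred from 4n_0 ≃ 2.3 cm^-3, and P_ram/k_B ≃ 3.5 × 10^4 K cm^-3 as quoted in the previous subsection.

```python
import numpy as np

K_B = 1.380649e-16   # Boltzmann constant [erg / K]
MYR = 3.156e13       # seconds per Myr

def lambda_cii(T, Z_rel):
    """CII-dominated cooling rate [erg cm^3 s^-1] for T <~ 14,577 K:
    2.8e-28 (Z/Zsun) sqrt(T) exp(-92/T)."""
    return 2.8e-28 * Z_rel * np.sqrt(T) * np.exp(-92.0 / T)

def t_cool(P_over_kB, n, T, Z_rel, gamma=5.0 / 3.0):
    """Cooling time P/(gamma-1) / (n^2 Lambda), converted to Myr."""
    e_th = P_over_kB * K_B / (gamma - 1.0)            # thermal energy density [erg cm^-3]
    return e_th / (n**2 * lambda_cii(T, Z_rel)) / MYR

# Representative shock-heated WNM state: n = 4 n_0, T = T_0, P = P_ram / 2
n0, T0, P_ram_over_kB = 0.57, 6400.0, 3.5e4           # [cm^-3], [K], [K cm^-3]
for Z_rel in (1.0, 0.5, 0.2):
    tc = t_cool(P_ram_over_kB / 2.0, 4.0 * n0, T0, Z_rel)
    print(f"Z = {Z_rel:.1f} Zsun : t_cool ~ {tc:.2f} Myr")
```

This sketch returns approximately 1.0, 2.0, and 5.0 Myr for 1.0, 0.5, and 0.2 Z_⊙, respectively, consistent with the typical values of t_cool quoted above.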
Suppose that the postshock pressure, density, and temperature depend on the initial condition as P ∝ n_0^α V_inflow^β, n ∝ n_0^γ V_inflow^δ, and T = P/(n k_B) ∝ n_0^(α-γ) V_inflow^(β-δ). The cooling time then scales as t_cool ∝ n_0^(α-3γ)/2 V_inflow^(β-3δ)/2 Z^-1 . If we employ n = 4n_0 and T = T_0 as the shock-heated WNM state right behind the shock as we did in Section <ref>, α = γ = 1 and β = δ = 0. The cooling time is t_cool ∝ n_0^-1 Z^-1 . We employ Equation <ref> as our fiducial dependence to derive Equation <ref>. Alternatively, most of the shock-heated WNM volume evolves close to T = T_0 even beyond n > 4n_0 (Figure <ref>), and thus we may consider that an isothermal shock approximation describes the overall postshock state. In this case, α = γ = 1 and β = δ = 2. The cooling time is t_cool ∝ n_0^-1 V_inflow^-2 Z^-1 , accordingly. We may also consider a simpler condition with P = P_ram and T = T_0, and n = P_ram/k_B/T. This gives α = β = δ = 2 and γ = 1, and the cooling time is t_cool ∝ n_0^-1/2 V_inflow^-2 Z^-1 . However, as shown in our simulations, such a perfect isothermal compression is limited to the central volume during the very early stage when the shock compression is still close to a plane-parallel geometry without significant deformation (e.g., Panel (c) of Figure <ref> and Panel (c) of Figure <ref>). Lastly, we would like to note that the above arguments assume that the maximum pressure of the thermally stable state is independent of the metallicity. The flow ram pressure in the < 0.1 Z_⊙ environments must be higher (e.g., with a higher n_0 or a faster V_inflow) to successfully overcome this maximum pressure and enter the thermally unstable regime, because the maximum pressure depends on the metallicity roughly as ∝ 1/Z <cit.>. § TURBULENT VELOCITY Panel (a) of Figure <ref> shows that the turbulent velocity at earlier stages is much smaller than the WNM sound speed, especially in lower metallicity environments (e.g., ∼ 2 km s^-1 in the 0.2 Z_⊙ run at t ≲ 0.5 t_cool). The shock front configuration controls this turbulence strength. As we discussed in the first two paragraphs of Section <ref>, the CNM forms more slowly in lower metallicity environments and most of the volume resides in the shock-heated WNM state for a longer physical time. Its thermal pressure, comparable to the inflow ram pressure, keeps the shock fronts in a plane-parallel configuration (see Panels (a)–(c) of Figures <ref> and <ref>). Such a configuration decelerates the inflow more efficiently than deformed shock fronts, and the postshock volume becomes denser and less turbulent. The physical state of the shock-heated WNM in this efficient deceleration regime can be estimated with the one-dimensional isothermal shock jump condition as v_1/V_inflow = n_0/n_1 = P_0/P_1 = ℳ^-2 . (See also Panel (c) of Figure <ref> and the discussion after Equation <ref> for the validity of the one-dimensional isothermal approximation.) Here v_1 is the postshock flow velocity, n_1 is the postshock density, P_1 is the postshock thermal pressure, and ℳ is the inflow Mach number. The setup of the injected WNM predicts v_1 = 2.0 km s^-1 and n_1 = 5.8 cm^-3, which is seen especially in Panel (c) of Figure <ref>. This is not prominent in higher metallicity environments because the physical time of the plane-parallel shock configuration is shorter due to the efficient transition from the WNM to the CNM (i.e., the resultant interaction between CNM clumps and shock fronts induces more deformation of the shock fronts at earlier times). In addition, in lower metallicity environments, the shock-heated WNM travels a larger distance from the shock front by the time it becomes the CNM.
Dense CNM clumps thus tend to form in the central region in lower metallicity environments (Panel (e) of Figure <ref>) and the development of turbulence is delayed. This contributes to the difference between the metallicities in Panels (a) and (b) of Figure <ref>, even after the normalization by t_cool, until the turbulence is fully developed at t ∼ 3 t_cool. The turbulent velocity increases in time once the shock fronts start to deform. <cit.> performed a systematic survey to show that the level of the shock deformation changes with the strength of the inflow density inhomogeneity. The turbulent velocity is, however, limited to the order of the WNM sound speed and does not exceed 10 km s^-1. This is a natural consequence of oblique shocks. As seen in our current simulation and <cit.>, the typical spatial scale of the shock deformation reaches the system size (see also Figure 8 of <cit.>). Therefore, if we consider α ∼ 45 degrees as the typical shock angle against the inflow, the oblique isothermal shock jump conditions provide the postshock physical state as v_1/V_inflow = n_0/n_1 = P_0/P_1 = (ℳ sinα)^-2 ∼ 0.2 . Here v_1 is the postshock velocity component normal to the shock front, and this gives v_1 ≃ 4.1 km s^-1. The angle is locally smaller (α < 45 deg) to generate a faster flow, but v_1 > 10 km s^-1 can be achieved only when the inflow is closely parallel to the shock front with α ≤ 27 deg. Such a closely parallel configuration does not occur over the entire area of the shock front, so that the volume-averaged postshock velocity dispersion remains on the order of the WNM sound speed.
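To make this last argument quantitative, the following minimal sketch (an illustration only, not part of our simulation code; it assumes V_inflow = 20 km s^-1 and an isothermal WNM sound speed of 6.4 km s^-1 at T_0 ≃ 6400 K) evaluates the oblique isothermal jump condition for several shock angles.

```python
import numpy as np

V_INFLOW = 20.0   # WNM converging flow speed [km/s]
C_S_ISO = 6.4     # assumed isothermal sound speed of the WNM at T_0 ~ 6400 K [km/s]
MACH = V_INFLOW / C_S_ISO

def v_post_normal(alpha_deg):
    """Postshock velocity normal to the front, v_1 = V_inflow (M sin(alpha))^-2,
    for an isothermal oblique shock with angle alpha between the inflow and the front."""
    return V_INFLOW / (MACH * np.sin(np.radians(alpha_deg))) ** 2

for alpha in (90.0, 45.0, 30.0, 27.0, 20.0):
    print(f"alpha = {alpha:4.1f} deg : v_1 ~ {v_post_normal(alpha):5.1f} km/s")
```

The sketch gives v_1 ≃ 2 km s^-1 for a plane-parallel shock (α = 90 deg), ≃ 4 km s^-1 at α = 45 deg, and values above 10 km s^-1 only for α ≲ 27 deg, illustrating why the volume-averaged velocity dispersion stays on the order of the WNM sound speed.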
http://arxiv.org/abs/2307.01005v1
20230703134037
Incomplete Information Linear-Quadratic Mean-Field Games and Related Riccati Equations
[ "Min Li", "Tianyang Nie", "Shunjun Wang", "Ke Yan" ]
math.OC
[ "math.OC" ]
Incomplete Information Linear-Quadratic Mean-Field Games and Related Riccati Equations This work was supported by the National Key R&D Program of China (No. 2022YFA1006104), the National Natural Science Foundation of China (Nos. 12022108, 11971267, 12001320, 11831010, 61961160732, 61977043), the China Postdoctoral Science Foundation (No. 2023M732085), the Natural Science Foundation of Shandong Province (Nos. ZR2022JQ01, ZR2019ZD42, ZR2020ZD24), the Taishan Scholars Young Program of Shandong (No. TSQN202211032), the Distinguished Young Scholars Program and the Young Scholars Program of Shandong University. (Corresponding author: Ke Yan.) Min Li, Tianyang Nie, Shunjun Wang and Ke Yan Min Li is with the School of Mathematics, Shandong University, Jinan 250100, China, and also with the Geotechnical and Structural Engineering Research Center, Shandong University, Jinan 250061, China (e-mail: lim@sdu.edu.cn). Tianyang Nie and Ke Yan are with the School of Mathematics, Shandong University, Jinan 250100, China (e-mail: nietianyang@sdu.edu.cn; 201812092@mail.sdu.edu.cn). Shujun Wang is with the School of Management, Shandong University, Jinan 250100, China (e-mail: wangshujun@sdu.edu.cn). We study a class of linear-quadratic mean-field games with incomplete information. For each agent, the state is given by a linear forward stochastic differential equation with common noise. Moreover, both the state and control variables can enter the diffusion coefficients of the state equation. We deduce the open-loop adapted decentralized strategies and the feedback decentralized strategies through a mean-field forward-backward stochastic differential equation and Riccati equations, respectively. The well-posedness of the corresponding consistency condition system is obtained, and the limiting state-average turns out to be the solution of a mean-field stochastic differential equation driven by the common noise. We also verify the ε-Nash equilibrium property of the decentralized control strategies. Finally, a network security problem is studied to illustrate our results as an application. Keywords: Common noise, forward-backward stochastic differential equation, mean-field game, incomplete information, Riccati equation, ε-Nash equilibrium § INTRODUCTION The theory of mean-field games (MFGs) was introduced by Lasry and Lions <cit.> and simultaneously by Huang, Malhamé and Caines <cit.>. The study of asymptotic Nash equilibria for stochastic differential games with a large number of agents subject to a mean-field interaction, as the number of agents goes to infinity, leads to the theory of MFGs. In recent years, MFGs have found wide applications, for example in finance, economics and engineering. The interested readers can refer to <cit.> for linear-quadratic (LQ) MFGs, and refer to <cit.> for further analysis of MFGs and related topics.
In the real world, agents can usually only access incomplete information. Wang, Wu, and Xiong <cit.> studied optimal control problems for forward-backward stochastic differential equations (FBSDEs) with incomplete information and introduced a backward separation idea to overcome the difficulty arising from an LQ optimal control problem. Recently, large population problems with incomplete information have been extensively studied. For example, Huang, Caines and Malhamé <cit.> investigated dynamic games in a large population of stochastic agents which have local noisy measurements of their own states. Huang, Wang and Wu <cit.> studied backward mean-field linear-quadratic-Gaussian (LQG) games of weakly coupled stochastic large population systems with full and partial information. Şen and Caines <cit.> considered nonlinear MFGs where an individual agent has noisy observations of its own state. Furthermore, some literature can be found in <cit.> for mean-field type control problems with incomplete information. In the above-mentioned literature on MFGs for stochastic large population systems, the random factors in each agent's state process are independent across agents. However, many models in applications do not satisfy this assumption. For example, financial market models often consider some common market noise affecting the agents. In <cit.>, Carmona, Fouque and Sun gave an explicit example of an LQ mean-field model with common noise applied to interbank lending and borrowing. The presence of common noise means that the agents are influenced by common information, e.g., public data. Consequently, MFGs with common noise constitute a general setting in reality and have attracted significant attention recently. In our paper, we consider N-player game models in which individual agents are subjected to two independent sources of noise: an idiosyncratic noise, independent from one individual to another, and a homogeneous one, identical for all the players, accounting for the common environment in which the individual states evolve. This is mainly due to the fact that the states of the individual agents are subject to a correlated noise term. Let us recall the literature on MFGs with common noise. Carmona, Delarue and Lacker <cit.> discussed the existence and uniqueness of an equilibrium in the presence of a common noise, see also <cit.>, Volume 2, Chapter 3. Using a PDE approach, Cardaliaguet, Delarue, Lasry and Lions <cit.> studied the master equation and the convergence of MFGs with common noise. Tchuendom <cit.> showed that a common noise may restore uniqueness in LQ MFGs. Bayraktar, Cecchin, Cohen and Delarue <cit.> focused on finite state MFGs with common noise. Mou and Zhang <cit.> considered non-smooth data MFGs and the global well-posedness of master equations with common noise. Li, Mou, Wu and Zhou <cit.> discussed a class of LQ mean-field games of controls with common noise, and proved the global well-posedness of the corresponding master equation without any monotonicity conditions. The literature most closely related to our work includes <cit.>. Huang and Wang <cit.> studied the dynamic optimization of a large-population system with partial information and common noise. Bensoussan, Feng and Huang <cit.> considered a class of LQG MFGs with a partial observation structure for the individual agents, in which the dynamics of the individuals are driven by common noise.
Huang and Yang <cit.> investigated asymptotic solvability of a LQ mean-field social optimization problem with controlled diffusions and indefinite state and control weights, where the state is driven by the individual noise and common noise. The contributions of our paper are the following. * Firstly, we assume the states of all agents are governed by some underlying common noise, so the individual agents are not independent of each other. Compared with our recent paper <cit.>, where the state of all agents depend on two independent noises, it leads to the state-average limit is deterministic. In current paper, with the help of conditional expectation, the presence of common noise makes the state-average limit in MFGs analysis be some stochastic process instead of deterministic process, i.e., the dynamic of the limit of state-average satisfies a stochastic differential equation (SDE) driven by the common noise, see Remark <ref> and (<ref>) for more details. * Secondly, we study LQ MFGs with common noise, in which the state and the control variable can both enter the diffusion coefficients in front of individual noise and common noise for individual state. Compared with <cit.>, the diffusion terms in <cit.> are independent of the state and control variables. Our model is more complicated than the one in <cit.>. Due to the dependence of control variables of diffusion coefficients, our Riccati equations are no longer in standard form. Thus, there arise difficulties when we represent the optimal feedback control strategies of open-loop adapted policies via some Riccati equations. * Thirdly, we introduce two Riccati equations to decompose the consistency condition system of our LQ MFGs. One Riccati equation is introduced due to the mean-field interaction, see (<ref>). Another Riccati equation, see (<ref>), is no longer in standard form as in Yong and Zhou <cit.>, since both the diffusion coefficients in front of individual noise and common noise depend on the control variable. We are successful to establish the well-posedness of these two Riccati equations under suitable assumptions, see Lemma <ref>, Theorem <ref> and Theorem <ref>. * Finally, we apply the results to solving a network security problem. An explicit form of the unique approximate Nash equilibrium is obtained through Riccati equations. The rest of the paper is organized as follows. In Section <ref>, we formulate the LQ MFG problem with incomplete information. Moreover, we introduce the Riccati equations to decompose the consistency condition system and verify the ε-Nash equilibrium of the control strategies. Section <ref> establishes the well-posedness of this new kind of Riccati equation and the strategies are given explicitly by solutions of Riccati equations. In Section <ref>, we give an example for the network security model. § INCOMPLETE INFORMATION LQ MFGS For fixed T >0, we consider a finite time interval [0,T]. Let (Ω,ℱ,𝔽,ℙ) be a complete filtered probability space satisfying the usual conditions on which a standard (1+N)-dimensional Brownian motion {W_0(t),W_i(t), 1≤ i ≤ N}_0≤ t≤ T is defined. Here, W_i is the individual noise while W_0 is the common noise due to underlying common factors. Let 𝔽={ℱ_t}_0≤ t≤ T be the natural filtration with ℱ_t:=σ{W_i(t),0≤ i ≤ N, 0≤ t ≤ T}∨𝒩_ℙ (where 𝒩_ℙ is the class of ℙ-null sets of ℱ_t). Define ℱ_t^W_0:=σ{W_0(t),0≤ t ≤ T}∨𝒩_ℙ, ℱ_t^W_i:=σ{W_i(t),0≤ t ≤ T}∨𝒩_ℙ, ℱ_t^i:=σ{W_0(t),W_i(t),0≤ t ≤ T}∨𝒩_ℙ and 𝒢_t:=σ{W_i(t),1≤ i ≤ N, 0≤ t ≤ T}∨𝒩_ℙ. 
In our setting, we denote by {ℱ_t^W_0}_0≤ t≤ T the common information taking effects on all agents; {ℱ_t^W_i}_0≤ t≤ T the individual information of the i-th agent; {ℱ_t^i}_0≤ t≤ T the full information of the i-th agent; {ℱ_t}_0≤ t≤ T denotes the complete information of the system. In our paper, the i-th agent cannot access the information of other agents and can only observe its individual noise W_i(·). Throughout the paper, ℝ^n denotes the n-dimensional Euclidean space with its norm and inner product denoted by |·| and ⟨·, ·⟩, respectively. For a given vector or matrix M, let M^⊤ stand for its transpose. We denote the set of symmetric n× n (resp. positive semi definite) matrices with real elements by 𝒮^n (𝒮_+^n). If M∈𝒮^n is positive (semi) definite, we write M>(≥)0. For positive constant k, if M∈𝒮^n and M > kI, we denote M≫0. For a given Hilbert space ℋ and a filtration {ℱ_t}_0≤ t ≤ T, let L^2_ℱ(0,T;ℋ) denote the space of all ℱ_t-progressively measurable processes g(·) satisfying 𝔼∫_0^T|g(t)|^2dt<+∞; L^2(0,T;ℋ) denotes the space of all deterministic functions g(·) satisfying ∫_0^T|g(t)|^2dt<+∞; L^∞(0,T;ℋ) denotes the space of uniformly bounded functions; C([0,T];ℋ) denotes the space of continuous functions. Now, we consider a large population system with N individual agents {𝒜_i}_1≤ i ≤ N. The state x_i(·) for i-th agent 𝒜_i is given by the following linear SDE { dx_i(t)= [A(t)x_i(t)+B(t)u_i(t)+α(t)x^(N)(t)+b(t)]dt +[C(t)x_i(t)+D(t)u_i(t)+β(t)x^(N)(t)+σ (t)]dW_i(t) +[C_0(t)x_i(t)+D_0(t)u_i(t)+β_0(t)x^(N)(t)+σ_0(t)]dW_0 (t), x_i(0)= x, . where x∈ℝ^n and x^(N)(·):=1/N∑_i=1^N x_i(·) denotes the state-average of population. The corresponding coefficients A(·),B(·),α(·),b(·),C(·),D(·),β(·),σ(·), C_0(·),D_0(·),β_0(·), σ_0(·) are some deterministic matrix-valued functions with appropriate dimensions. For 1≤ i≤ N, the centralized admissible control set 𝒰_ad^c is defined as 𝒰_ad^c:={u_i(·)|u_i(·)∈ L^2_𝒢_t(0,T;ℝ^k)}. Moreover, for 1≤ i≤ N, the decentralized admissible control set 𝒰_i is defined as 𝒰_i:={u_i(·)|u_i(·)∈ L^2_ℱ_t^W_i(0,T;ℝ^k)}. Let u(·)=(u_1(·),…,u_N(·)) be the set of control strategies of all agents and u_-i(·)=(u_1(·),…,u_i-1(·),u_i+1(·),…,u_N(·)) be the set of control strategies except for i-th agent 𝒜_i. The cost functional of 𝒜_i is given by 𝒥_i(u_i(· ),u_-i(· )) =1/2𝔼{∫_0^T[⟨ Q(t)(x_i(t)-x^(N)(t)),x_i(t)-x^(N)(t)⟩+⟨ R(t)u_i(t),u_i(t)⟩]dt+⟨ Gx_i(T),x_i(T)⟩}. Moreover, we introduce the following assumptions of coefficients. (A1) A(·),α(·),C(·),β(·),C_0(·),β_0(·) ∈ L^∞(0,T;ℝ^n× n); b(·),σ(·),σ_0(·) ∈ L^∞(0,T;ℝ^n); B(·),D(·),D_0(·) ∈ L^∞(0,T;ℝ^n× k); (A2) Q(·)∈ L^∞(0,T;𝒮^n),Q(·)≥ 0, R∈ L^∞(0,T;𝒮^k), R(·)≫0, G∈𝒮^n, G≥ 0. We mention that under assumptions , the system of SDE (<ref>) admits a unique solution. Under assumptions , the cost functional (<ref>) is well-defined. Now, we formulate the following incomplete information large population problem and aim to find Nash equilibrium. Problem (1). For 1 ≤ i ≤ N, finding the control strategy set u̅(·)=(u̅_1(·),… ,u̅_N(·)) such that 𝒥_i(u̅_i(· ),u̅_-i(· ))= u_i( ·) ∈𝒰_ad^cinf𝒥 _i(u_i(· ),u̅_-i(· )), where u̅_-i(·)=(u̅_1(·),…,u̅_i-1(·),u̅_i+1(·),…,u̅_N(·)). For 1≤ i≤ N, we call u̅_i(·) the Nash equilibrium of Problem (1). Moreover, the corresponding state x̅_i(·) is called the optimal centralized trajectory. Due to the coupling state-average x^(N)(·), it is complicated to study Problem (1). 
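To illustrate the weak coupling through the state-average before passing to the limit, the following minimal sketch (an illustration only; the scalar coefficients, the zero control policy and the Euler-Maruyama discretization are our own choices, not part of the model) simulates the state dynamics above for n = k = 1 with idiosyncratic noises W_i and a common noise W_0.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar coefficients (hypothetical values)
A, B, alpha, b = -0.5, 1.0, 0.3, 0.1
C, D, beta, sigma = 0.2, 0.1, 0.1, 0.2
C0, D0, beta0, sigma0 = 0.1, 0.1, 0.1, 0.2
N, T, steps, x0 = 200, 1.0, 500, 1.0
dt = T / steps

def control(t, x):
    # Placeholder admissible policy; the optimal decentralized u_i is derived below.
    return np.zeros_like(x)

x = np.full(N, x0)
for step in range(steps):
    t = step * dt
    x_avg = x.mean()                              # coupling term x^(N)(t)
    u = control(t, x)
    dW = rng.normal(0.0, np.sqrt(dt), size=N)     # idiosyncratic increments dW_i
    dW0 = rng.normal(0.0, np.sqrt(dt))            # common increment dW_0, shared by all agents
    x = (x
         + (A * x + B * u + alpha * x_avg + b) * dt
         + (C * x + D * u + beta * x_avg + sigma) * dW
         + (C0 * x + D0 * u + beta0 * x_avg + sigma0) * dW0)

print("terminal state-average x^(N)(T) =", x.mean())
```

Because dW0 is shared by all agents, the empirical state-average x^(N) remains random even for large N; this is precisely why the limiting state-average appearing in the MFG analysis below is a stochastic process adapted to the common-noise filtration rather than a deterministic function.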
We will use MFG theory to seek the approximate Nash equilibrium, which bridges the “centralized” LQ games to the limiting state-average, as the number of agents N tends to infinity. It is usually to replace the state-average by its frozen limit term. As N → +∞, suppose x̅^(N)(·)=1/N∑_i=1^Nx̅_i(·) is approximated by some processes m(·) which will be determined later by the consistency condition system. We introduce the following auxiliary state for 𝒜_i { dz_i(t)= [A(t)z_i(t)+B(t)u_i(t)+α(t)m(t)+b(t)]dt +[C(t)z_i(t)+D(t)u_i(t)+β(t)m(t)+σ(t)]dW_i(t) +[C_0(t)z_i(t)+D_0(t)u_i(t)+β_0(t)m(t)+σ_0(t)]dW_0(t), z_i(0)=  x, . and the limiting cost functional J_i(u_i(· )) = 1/2𝔼{∫_0^T[⟨ Q(t)(z_i(t)-m(t)),z_i(t)-m(t)⟩ +⟨ R(t)u_i(t),u_i(t)⟩]dt +⟨ Gz_i(T),z_i(T)⟩}. Then the corresponding auxiliary limiting problem with incomplete information is proposed as follows. Now, we formulate the following limiting stochastic control problem with incomplete information. Problem (2). For 1≤ i≤ N, find u̅_i(·)∈𝒰_i satisfying J_i(u̅_i(· ))= u_i( ·) ∈𝒰_iinfJ _i(u_i(· )). Then u̅_i(·) is called the decentralized optimal control for Problem (2). Moreover, the corresponding state z̅_i(·) is called the decentralized optimal trajectory. Compared with <cit.>, where the state-average limit is deterministic, the counterpart m(·) in our setting turns out to be a stochastic process which is ℱ_t^W_0-adapted. In fact, we will see later that m(·) solves a SDE driven by the common noise, see (<ref>), which yields that m(·)∈ L^2_ℱ^W_0(0,T;ℝ^n). §.§ Open-loop decentralized strategies Noting that with the help of the frozen limiting state-average, the Problem (2) is essentially a LQ stochastic control problem with incomplete information. To this end, we will apply the stochastic maximum principle with partial information to obtain optimal control. Let us define the Hamiltonian function H:[0,T]×ℝ^n×ℝ^n×ℝ^n×ℝ^n×ℝ^m→ℝ as H(t,p_i,q_i,q_i,0,z_i,u_i) = ⟨ p_i,Az_i+Bu_i+α m+b⟩ +⟨ q_i,Cz_i+Du_i+β m+σ⟩ +⟨ q_i,0,C_0z_i+D_0u_i+ β_0m+σ_0⟩ -1/2⟨ Q(z_i-m),z_i-m⟩ -1/2⟨ Ru_i,u_i⟩. and we introduce the following adjoint equation { dp_i(t)= -[A^⊤(t)p_i(t)+C^⊤(t)q_i( t) + C_0^⊤(t)q_i,0( t)-Q(t)(z_i(t) -m(t))]dt +q_i( t) dW_i(t)+q_i,0( t) dW_0(t), p_i(T)= -Gz_i( T). . Note that the solution (p_i(·),q_i(·),q_i,0(·)) of (<ref>) belongs to the space L^2_ℱ^i(0,T;ℝ^n)× L^2_ℱ^i(0,T;ℝ^n)× L^2_ℱ^i(0,T;ℝ^n). By applying the stochastic maximum principle, we can obtain the open-loop decentralized strategies for Problem (2). For 1≤ i≤ N, let - hold. Then the decentralized optimal control of Problem (2) is given by u̅_i(t) =R(t)^-1(B^⊤(t)𝔼[p̅ _i(t)|ℱ_t^W_i]+D^⊤(t)𝔼[q̅_i(t)|ℱ_t^W_i]+D_0^⊤(t)𝔼[q̅_i,0(t)|ℱ_t^W_i]), where (z̅_i(·),p̅_i(·),q̅_i(·),q̅_i,0(·)) satisfy the following stochastic Hamiltonian system { dz̅_i(t)= {A(t)z̅_i(t)+B(t)R^-1(t)(B^⊤(t)𝔼[p̅_i(t)|ℱ_t^W_i]+D^⊤(t)𝔼[q̅_i(t)|ℱ_t^W_i] +D_0^⊤(t)𝔼[q̅_i,0(t)|ℱ_t^W_i])+α(t) m(t)+b(t)}dt +{C(t)z̅_i(t)+D(t)R^-1(t)(B^⊤(t)𝔼[p̅_i(t)| ℱ_t^W_i]+D^⊤(t)𝔼[q̅_i(t)|ℱ_t^W_i] +D_0^⊤(t)𝔼[q̅_i,0(t)|ℱ_t^W_i])+β(t)m(t)+σ (t)}dW_i(t) +{C_0(t)z̅_i(t)+D_0(t)R^-1(t)(B^⊤(t)𝔼[p̅_i(t)|ℱ_t^W_i]+D^⊤(t)𝔼[q̅_i(t)| ℱ_t^W_i] +D_0^⊤(t)𝔼[q̅_i,0(t)|ℱ_t^W_i])+β_0(t)m(t)+σ_0(t)}dW_0(t), dp̅_i(t)= -[A^⊤(t)p̅_i(t)+C^⊤(t)q̅_i( t) + C_0^⊤(t)q̅_i,0( t) -Q(t)(z̅_i(t)-m(t))]dt +q̅_i( t) dW_i(t)+q̅_i,0( t) dW_0(t), z̅_i(0)= x, p̅_i(T)=-Gz̅_i( T). . The proof is simple. In fact, the results follows by the application of the stochastic maximum principle. 
For the decentralized optimal control u̅_i(·), suppose that z̅_i(·) is corresponding state trajectory, and (p̅_i(·),q̅_i(·),q̅_i,0(·)) is the unique solution to the second equation of (<ref>) with respect to (z̅_i(·),u̅_i(·)), the maximum principle reads as the following form 𝔼[⟨∂ H/∂ u_i(t,p̅_i(t),q̅_i(t), q̅_i,0(t),z̅_i(t),u̅_i(t)),u_i-u̅_i(t)⟩|ℱ _t^W_i]= 0, for any u_i∈ℝ^k, a.s. a.e. which yields that (by noticing (<ref>)) ⟨ 𝔼[B^⊤(t)p̅_i(t)+D^⊤(t)q̅_i(t)+D_0^⊤(t)q̅_i,0(t)-R(t)u̅_i(t)|ℱ_t^W_i], u_i-u̅_i(t)⟩ = 0, and by noticing u_i is an arbitrary element of ℝ^k, we have 𝔼[B^⊤(t)p̅_i(t)+D^⊤(t)q̅_i(t)+D_0^⊤(t)q̅_i,0(t)-R(t)u̅_i(t)|ℱ _t^W_i]= 0. Then from R(·)≫0, we obtain (<ref>). Due to the convexity of the admissible control set, here we only need the first-order adjoint equation. Moroever, since the cost functional of Problem (2) is strictly convex, it admits a unique optimal control, thus the sufficiency of optimal control can also be obtained. Furthermore, a feedback optimal control can be obtained via Riccati equation by applying stochastic maximum principle instead of using dynamic programming principle (DPP) and Hamilton-Jacobi-Bellman (HJB) equation. We mention that it will have some difficulties to obtain feedback optimal control via DPP, in fact feedback optimal control can be given through solving related HJB equations, however due to the randomness of the coefficients of (<ref>) (recall that m is an ℱ_t^W_0-adapted process), the related HJB equation will be stochastic PDE whose solvability is challenging. Now let us study the unknown frozen limiting state-average m(·). When N→∞, we would like to approximate x̅_i(·) by z̅_i(·) (see (<ref>)), thus 1/N∑_i=1^N x̅_i(·) is approximated by 1/N∑_i=1^N z̅_i(·). Moreover, inspired by <cit.>, recall that for i≠ j, z̅_i and z̅_j are identically distributed and conditional independent (under 𝔼[·|ℱ_·^W_0]), by the conditional strong law of large number, it follows that (the convergence is in the sense of almost surely, see <cit.>) m(·)=N→∞lim1/NN i=1∑z̅_i(· )=𝔼[z̅_i(·)|ℱ_·^W_0]. From (<ref>), by replacing m(·) by 𝔼[z̅_i(·)|ℱ_·^W_0], we obtain { dz̅_i (t)={A(t)z̅_i(t)+B(t)u̅_i(t)+α(t) 𝔼[z̅_i(t)|ℱ_t^W_0]+b(t)}dt +{C(t)z̅_i(t)+D(t)u̅_i(t)+β(t) 𝔼[z̅_i(t)|ℱ_t^W_0]+σ (t)}dW_i(t) +{C_0(t)z̅_i(t)+D_0(t)u̅_i(t)+β_0(t) 𝔼[z̅_i(t)|ℱ_t^W_0]+σ_0(t)}dW_0(t), z̅_i (0)=x. . By taking expectation on both side of (<ref>), we obtain { d𝔼[z̅_i(t)]= {(A(t)+α(t))𝔼[z̅_i(t)] +B(t)𝔼[u̅_i(t)]+b(t)}dt, 𝔼[z̅_i(0)]= x. . Recall that the decentralized admissible control u̅_i is {ℱ_t^W_i}_0≤ t≤ T-adapted and W_i(·) and W_0(·) are independent, we have 𝔼[u̅_i(t)|ℱ_t^W_0]=𝔼[u̅_i(t)]. Thus, by taking conditional expectation w.r.t. ℱ_t^W_0 on both side of (<ref>), we have (recalling m(·)=𝔼[z̅_i(·)|ℱ_·^W_0]) { dm(t)={(A(t)+α(t))m(t)+B(t)𝔼[u̅_i(t)]+b(t)}dt +{(C_0(t)+β_0(t))m(t)+D_0(t)𝔼[u̅_i(t)]+σ_0(t)}dW_0(t), m(0)=x, . which implies that { d𝔼[m(t)]= {(A(t)+α(t))𝔼[m(t)] +B(t)𝔼[u̅_i(t)]+b(t)}dt, 𝔼[m(0)]= x. . Now, we are going back to (<ref>), by replacing m(·) with 𝔼[z̅_i(·)|ℱ^W_0], we deduce the following consistency condition system (or Nash certainty equivalence equation system, see, e.g. 
<cit.>) which is a mean-field forward backward stochastic differential equation (MF-FBSDE) { dz̅_i(t)= {A(t)z̅_i(t)+B(t)R^-1(t)(B^⊤(t)𝔼[p̅_i(t)|ℱ_t^W_i]+D^⊤(t)𝔼[q̅_i(t)|ℱ_t^W_i] +D_0^⊤(t)𝔼[q̅_i,0(t)|ℱ _t^W_i])+α(t) 𝔼[z̅_i(t)|ℱ_t^W_0]+b(t)}dt +{C(t)z̅_i(t)+D(t)R^-1(t)(B^⊤(t)𝔼[p̅_i(t)| ℱ_t^W_i]+D^⊤(t)𝔼[q̅_i(t)|ℱ_t^W_i] +D_0^⊤(t)𝔼[q̅_i,0(t)|ℱ_t^W_i])+β(t) 𝔼[z̅_i(t)|ℱ_t^W_0]+σ (t)}dW_i(t) +{C_0(t)z̅_i(t)+D_0(t)R^-1(t)(B^⊤(t)𝔼[p̅_i(t)|ℱ_t^W_i]+D^⊤(t)𝔼[q̅_i(t)| ℱ_t^W_i] +D_0^⊤(t)𝔼[q̅_i,0(t)|ℱ_t^W_i]) +β_0(t) 𝔼[z̅_i(t)|ℱ_t^W_0]+σ_0(t)}dW_0(t), dp̅_i(t)= -[A^⊤(t)p̅_i(t)+C^⊤(t)q̅_i( t) + C_0^⊤(t)q̅_i,0( t) -Q(t)(z̅_i( t) -𝔼[z̅_i(t)|ℱ_t^W_0])]dt +q̅_i( t) dW_i(t)+q̅_i,0( t) dW_0(t), z̅_i(0)= x, p̅_i(T)=-Gz̅_i( T). . Now let us consider MF-FBSDE (<ref>), which has fully coupled structure and conditional expectation terms. There are difficulties for establishing its well-posedness due to these features. By the discounting method, Hu, Huang and Nie <cit.> first proved the well-posedness of a kind of MF-FBSDE with conditional expectation. The follow-up work <cit.> extended <cit.> to more general case. Using Theorem 3.3 in <cit.>, we can obtain the following well-posedness of MF-FBSDE (<ref>). Let λ^∗ be the maximum eigenvalue of matrix A+A^⊤/2, suppose that 4λ^∗<-2|α|-6|C|^2-6|C_0|^2-5|β|^2-5|β_0|^2, there exists a constant θ_1>0 , which depends on λ^∗, |C|, |C_0|, |α|, |β|, |β_0|, |Q|, |G|, and does not depend on T, such that when |B|, |D|, |D_0| and |R^-1|∈[0,θ_1), then there exists a unique adapted solution (z̅_i(·),p̅_i(·),q̅_i(·),q̅_i,0(·))∈ L^2_ℱ^i([0,T];ℝ^n×ℝ^n×ℝ^n×ℝ^n) to MF-FBSDE (<ref>). We mention that the consistency condition system (<ref>) is a fully coupled conditional MF-FBSDE, which has the conditional expectation terms 𝔼[z̅_i(·)|ℱ^W_0], 𝔼[p̅_i(·)|ℱ^W_i], 𝔼[q̅_i(·)|ℱ^W_i] and 𝔼[q̅_i,0(·)|ℱ^W_i]. In comparison with our current work, the MF-FBSDE in <cit.> includes the expectation term 𝔼[z̅_i(·)]. Even through our MF-FBSDE (<ref>) includes the conditional expectation term 𝔼[·|ℱ^W_0], the approach for proving the well-posedness of (<ref>) is the same as we discussed in <cit.>. §.§ ε-Nash Equilibrium Analysis for Problem (1) In subsection <ref>, we obtained the optimal strategy profile u̅(·)=(u̅_1(·),u̅_2(·),…,u̅_N(·)) of Problem (2), see Theorem 2.1 and (<ref>). In this section, we will show u̅(·)=(u̅_1(·),u̅_2(·),…,u̅_N(·)) is an ε-Nash equilibrium for Problem (1). Firstly, we give the definition of ε-Nash equilibrium. A set of controls u_i(·)∈𝒰_ad^c, 1≤ i ≤ N, for N agents is called an ε-Nash equilibrium with respect to the cost 𝒥_i, 1≤ i ≤ N, if there exists ε=ε(N)≥ 0 and N→∞limε(N)=0 such that for any 1≤ i ≤ N, we have 𝒥_i(u̅_i(· ),u̅_-i(· ))≤𝒥_i(u_i(·),u̅_-i(·))+ε, when any alternative strategy u_i(· )∈𝒰_ad^c is applied by 𝒜_i. If ε=0, Definition <ref> can reduce to the usual exact Nash equilibrium. Now, we give the following main result of this section and its proof will be given later. Under and , the strategy set (u̅_1(·),u̅_2(·),…,u̅_N(·)) is an ε-Nash equilibrium of Problem (1), where u̅_i(·), 1≤ i≤ N, is given by (<ref>). In order to prove the above theorem, let us give several lemmas. Let - hold, it follows that 𝔼0≤ t≤ Tsup|x̅^(N)(t)-m(t)|^2=O(1/N), 1≤ i≤ Nsup𝔼0≤ t≤ Tsup| x̅_i(t)-z̅_i(t)|^2=O(1/N). 
From (<ref>) and (<ref>), by using Cauchy-Schwarz inequality and Burkholder-Davis-Gundy (BDG) inequality, it follows that there exists a constant K (independent of N, which may vary line to line in the following) 𝔼 0≤ s≤ tsup|x̅^(N)(s)-m(s)|^2≤ K 𝔼∫_0^t|x̅^(N)(s)-m(s)|^2ds +K𝔼∫_0^t|1/N∑_i=1^N(u̅_i(t)-𝔼[u̅_i(t)])|^2dt +K/N^2𝔼Ni=1∑∫_0^T|C(t)x̅_i(t)+D(t)u̅_i(t)+α(t)(x̅^(N)(t)-m(t))+α(t)m(t)+σ (t)|^2dt +K/N^2𝔼Ni=1∑∫_0^T|C_0(t)x̅_i(t)+D_0(t)u̅_i(t)+β(t)(x̅^(N)(t)-m(t)) +β_0(t)m(t)+σ_0(t)|^2dt. Therefore, noticing that 𝔼sup_0≤ t≤ T|x̅_i(t)|^2<+∞ and 𝔼sup_0≤ t≤ T|m(t)|^2≤ K, we obtain (<ref>) by Gronwall's inequality. Then, from (<ref>), (<ref>) and (<ref>), we can show (<ref>) by Gronwall's inequality. For 1≤ i ≤ N, we have |𝒥_i(u̅_i(· ),u̅_-i(· ))-J_i(u̅ _i(· ))|=O(1/√(N)). From (<ref>) and (<ref>), we have 𝒥_i(u̅_i(· ),u̅_-i(·))-J_i(u̅_i(·)) = 1/2𝔼{∫_0^T[⟨ Q(t)(x̅ _i(t)-x̅^(N)(t)),x̅_i(t)-x̅^(N)(t)⟩-⟨ Q(t)(z̅_i(t)-m(t)),z̅_i(t)-m(t)⟩]dt +⟨ Gx̅_i(T),x̅_i(T)⟩-⟨ Gz̅_i(T),z̅_i(T)⟩}. By noticing that Lemma <ref>, 𝔼sup_0≤ t≤ T|z̅_i(t)|^2≤ K and 𝔼sup_0≤ t≤ T|m(t)|^2≤ K, for some constant K independent of N, we have (using ⟨ Qa,a⟩-⟨ Qb,b⟩=⟨ Q(a-b),a-b⟩+2⟨ Q(a-b),b⟩) |𝔼∫_0^T[⟨ Q(t)(x̅_i(t)-x̅^(N)(t)), x̅_i(t)-x̅^(N)(t)⟩ -⟨ Q(t)(z̅ _i(t)-m(t)),z̅_i(t)-m(t)⟩]dt| ≤ K∫_0^T𝔼|x̅_i(t)-z̅_i(t)|^2dt+K ∫_0^T𝔼|x̅^(N)(t)-m(t)|^2dt +K∫_0^T(𝔼| x̅_i(t)-z̅_i(t)|^2+𝔼|x̅^(N)(t)-m(t)|^2)^1/2dt=O(1/√(N)). Similarly, we can also prove that the order of terminal term is O(1/√(N)), which completes the proof. Next, let us consider a perturbed control u_i(·) for 𝒜_i, the corresponding state system is, for 1≤ i≤ N, { dy_i(t)= [A(t)y_i(t)+B(t)u_i(t)+α(t) y^(N)(t)+b(t)]dt +[C(t)y_i(t)+D(t)u_i(t)+β(t) y^(N)(t)+σ(t)]dW_i(t) +[C_0(t)y_i(t)+D_0(t)u_i(t)+β_0(t) y^(N)(t)+σ_0(t)]dW_0(t), y_i(0)= x, . whereas other agents keep the control u̅_j(·), 1≤ i≤ N, j≠ i, and have the following state dynamics { dy_j(t)= [A(t)y_j(t)+B(t)u̅_j(t)+α(t) y^(N)(t)+b(t)]dt +[C(t)y_j(t)+D(t)u̅_j(t)+β(t) y^(N)(t)+σ(t)]dW_j(t) +[C_0(t)y_j(t)+D_0(t)u̅_j(t)+β_0(t) y^(N)(t)+σ_0(t)]dW_0(t), y_j(0)= x, . where y^(N)(·)=1/N∑_i=1^N y_i(·). To prove that (u̅_1(·),…,u̅_N(·)) is an ε-Nash equilibrium, we need to show u_i( ·) ∈𝒰_ad^cinf𝒥 _i(u_i(· ),u̅_-i(· )) ≥𝒥_i(u̅_i(· ),u̅_-i(· ))-ε . Then it only needs to consider the perturbation 𝒰_ad^c such that 𝒥_i(u_i(· ),u̅_-i(· )) ≤𝒥_i(u̅_i(· ),u̅_-i(· )). Then 𝔼∫_0^T⟨ R(t)u_i(t),u_i(t)⟩ dt≤𝒥 _i(u_i(· ),u̅_-i(· ))≤𝒥_i(u̅_i(· ), u̅_-i(· )) = J_i(u̅_i(· ))+O(1/√(N)), which yields that 𝔼∫_0^T|u_i(t)|^2dt≤ K. For the i-th agent, we consider the limiting state with perturbation control { dy̅_i(t)= [A(t)y̅_i(t)+B(t)u_i(t)+α(t)m(t)+b(t)]dt +[C(t)y̅_i(t)+D(t)u_i(t)+β(t)m(t)+σ(t)]dW_i(t) +[C_0(t)y̅_i(t)+D_0(t)u_i(t)+β_0(t)m(t)+σ_0(t)]dW_0(t), y̅_i(0)= x. . Then, we have the following estimates. Let - hold, then 𝔼0≤ t≤ Tsup|y^(N)(t)-m(t)|^2=O(1/N), 1≤ i≤ Nsup𝔼0≤ t≤ Tsup |y_i(t)-y̅_i(t)|^2=O(1/N). Firstly, we prove (<ref>). From (<ref>), (<ref>), (<ref>) and BDG inequality, we get 𝔼 0≤ s≤ tsup|y^(N)(s)-m(s)|^2≤ K 𝔼∫_0^t|y^(N)(s)-m(s)|^2ds+K/N^2𝔼∫_0^t|u_i(s)|^2ds +K 𝔼∫_0^t|1/N∑_j=1,j≠ i^Nu̅_j(s)-𝔼[u̅_i(s)]|^2ds+K/N^2∑_j=1^N𝔼∫_0^t|C(t)y_j(s)+β(t)(y^(N)(t)-m(t)) +β(t)m(t)+σ(s)|^2ds +K/N^2∑_j=1^N𝔼∫_0^t|C_0(t)y_j(s)+β_0(t)(y^(N)(t)-m(t)) +β_0(t)m(t)+σ_0(s)|^2ds +K/N^2∑_j=1,j≠ i^N𝔼∫_0^t|u̅_j(s)|^2ds. Moreover, taking conditional expectation w.r.t. 
ℱ_t^W_i on both side of (<ref>), we have { dẑ̅̂_i(t)= {A(t)ẑ̅̂_i(t)+B(t)R^-1(t)(B^⊤(t)𝔼[p̅_i(t)|ℱ_t^W_i]+D^⊤(t)𝔼[q̅_i(t)|ℱ_t^W_i] +D_0^⊤(t)𝔼[q̅_i,0(t)|ℱ_t^W_i])+α(t) 𝔼[m(t)]+b(t)}dt +{C(t)ẑ̅̂_i(t)+D(t)R^-1(t)(B^⊤(t)𝔼[p̅_i(t)| ℱ_t^W_i]+D^⊤(t)𝔼[q̅_i(t)|ℱ_t^W_i] +D_0^⊤(t)𝔼[q̅_i,0(t)|ℱ_t^W_i])+β(t)𝔼[m(t)]+σ (t)}dW_i(t) dp̂̅̂_i(t)= -[A^⊤(t)p̂̅̂_i(t)+C^⊤(t)q̂̅̂_i( t) + C_0^⊤(t)q̂̅̂_i,0( t) -Q(t)(ẑ̅̂_i(t) -𝔼[m(t)])]dt +q̂̅̂_i( t) dW_i(t), ẑ̅̂_i(0)= x, p̂̅̂_i(T)=-Gẑ̅̂_i( T), . then from (<ref>) and (<ref>), we obtain that {u̅_i(t)}, 1≤ i ≤ N are independent and identically distributed. Then, it follows that 𝔼[u̅_i(·)]=𝔼[u̅_j(·)], for 1≤ i,j ≤ N and j≠ i. Denote μ(t)=𝔼[u̅_i(t)], we have ∫_0^T𝔼|1/NNj=1,j≠ i∑u̅_j(t)-μ(t)|^2dt≤2/N^2∫_0^T𝔼|μ(t)|^2dt +2(N-1)^2/N^2∫_0^T𝔼|1/N-1Nj=1,j≠ i∑u̅_j(t)-μ(t)|^2dt =2(N-1)/N^2∫_0^T𝔼|u̅_j(t)-μ(t)|^2dt+2/N^2∫_0^T𝔼|μ(t)|^2dt =O(1/N). By using the fact that 𝔼0≤ t≤ Tsup|y_j(t)|^2≤ K, and recalling 𝔼∫_0^T|u_i(t)|^2dt≤ K, we have K/N^2𝔼∫_0^T|u_i(t)|^2dt+K/N^2𝔼Nj=1∑∫_0^T|C(t)y_j(t)+β(t)(y^(N)(t)-m(t)) +β(t)m(t)+σ(t)|^2dt +K/N^2𝔼Nj=1∑∫_0^T|C_0(t)y_j(t)+β_0(t)(y^(N)(t)-m(t))+β_0(t)m(t)+σ_0(t)|^2dt ≤K/N(1+𝔼∫_0^T|y^(N)(t)-m(t)|^2dt). Moreover, by i.i.d property of u̅_i(·), we get K/N^2𝔼Nj=1,j≠ i∑∫_0^T|u̅_j(t)|^2dt=O(1/N). Then, we have 𝔼0≤ t≤ Tsup|y^(N)(t)-m(t)|^2≤ K𝔼 ∫_0^T|y^(N)(t)-m(t)|^2dt+O(1/N). From Gronwall's inequality, we obtain (<ref>). Secondly, from (<ref>), (<ref>) and (<ref>), by using Gronwall's inequality, we obtain (<ref>). Let - hold, for any 1≤ i≤ N, we have |𝒥_i(u_i(· ),u̅_-i(· ))-J_i(u_i(· ))|=O(1/√(N)). The proof is similar to Lemma <ref>, so we omit it here. Finally, let us give the proof of Theorem <ref>. Proof of Theorem <ref>. Combining Lemmas <ref> and <ref>, we have 𝒥_i(u̅_i(·),u̅_-i(·))=J_i(u̅_i(·))+O(1/√(N)) ≤ J_i(u_i(·))+O(1/√(N)) =𝒥_i(u_i(·),u̅_-i(·))+O(1/√(N)). Thus, Theorem <ref> holds by taking ε=O(1/√(N)).▪ § FEEDBACK DECENTRALIZED STRATEGIES In this section, we will further study the above decentralized control strategies (<ref>) which can be represented as the feedback of filtered state by Riccati approach as given in the following theorem. Although we have obtained the decentralized control strategies (<ref>) for Problem (2), it is not an implementable control policy, since it involves 𝔼[p_i(·)|ℱ_·^W_i], 𝔼[q_i(·)|ℱ_·^W_i] and 𝔼[q_i,0(·)|ℱ_·^W_i], where (p_i(·),q_i(·),q_i,0(·)) is the solution to FBSDE (<ref>). As usual, we would like to obtain a feedback representation of the decentralized control strategy via Riccati equations. For 1≤ i ≤ N, to simplify symbols, we denote f̂_i(t)=𝔼[f_i(t)|ℱ_t^W_i] as the filtering of f_i(t) w.r.t. ℱ_t^W_i. Then, the decentralized strategies (<ref>) can be further expressed by u̅_i(t)= R(t)^-1(B^⊤(t)p̂̅̂ _i(t)+D^⊤(t)q̂̅̂_i(t)+D_0^⊤(t)q̂̅̂_i,0(t)). Then we recall the consistency conditional system (<ref>) as { dz̅_i(t)= {A(t)z̅_i(t)+B(t)R^-1(t)(B^⊤(t)p̂̅̂_i(t)+D^⊤(t)q̂̅̂_i(t) +D_0^⊤(t)q̂̅̂_i,0(t))+α(t) 𝔼[z̅_i(t)|ℱ_t^W_0]+b(t)}dt +{C(t)z̅_i(t)+D(t)R^-1(t)(B^⊤(t)p̂̅̂_i(t)+D^⊤(t)q̂̅̂_i(t) +D_0^⊤(t)q̂̅̂_i,0(t))+β(t) 𝔼[z̅_i(t)|ℱ_t^W_0]+σ (t)}dW_i(t) +{C_0(t)z̅_i(t)+D_0(t)R^-1(t)(B^⊤(t)p̂̅̂_i(t)+D^⊤(t)q̂̅̂_i(t) +D_0^⊤(t)q̂̅̂_i,0(t))+β_0(t) 𝔼[z̅_i(t)|ℱ_t^W_0]+σ_0(t)}dW_0(t), dp̅_i(t)= -[A^⊤(t)p̅_i(t)+C^⊤(t)q̅_i( t) +C_0^⊤(t)q̅_i,0(t) -Q(t)(z̅_i( t) -𝔼[z̅_i(t)|ℱ_t^W_0])]dt +q̅_i( t) dW_i(t)+q̅_i,0( t) dW_0(t), z̅_i(0)= x, p̅_i(T)=-Gz̅_i( T). . 
Let - hold, then the decentralized strategies for Problem (2) can be represented as u̅_i(t)= -Σ(t)^-1(B^⊤(t)P(t)+D^⊤(t)P(t)C(t) +D_0^⊤(t)P(t)C_0(t))ẑ̅̂_i(t) -Σ(t)^-1(B^⊤(t)Γ(t)+D^⊤P(t)β(t)+D_0^⊤(t)P(t)β_0(t))𝔼[m(t)] -Σ(t)^-1(B^⊤(t)Φ(t)+D^⊤(t)P(t)σ(t)+D_0^⊤(t)P(t)σ_0(t)), 1≤ i ≤ N, with Σ(t)=R(t)+D^⊤(t)P(t)D(t)+D_0^⊤(t)P(t)D_0(t), where P(·) and Γ(·) solve the following Riccati equations, respectively { Ṗ(t)+P(t)A(t)+A^⊤(t)P(t)+C^⊤(t)P(t)C(t)+C_0^⊤(t)P(t)C_0(t) +Q(t) -(P(t)B(t)+C^⊤(t)P(t)D(t)+C_0^⊤(t)P(t)D_0(t))Σ(t)^-1 ×(B^⊤(t)P(t)+D^⊤(t)P(t)C(t)+D_0^⊤(t)P(t)C_0(t))=0, P(T)=G, . and { Γ̇(t)+Γ(t)(A(t)-B(t)Σ(t)^-1(B^⊤(t)P(t)+D^⊤(t)P(t)C(t) +D_0^⊤(t)P(t)C_0(t)))+(A(t)-B(t)Σ(t)^-1 ×(B^⊤(t)P(t)+D^⊤(t)P(t)C(t)+D_0^⊤(t)P(t)C_0(t)))^⊤Γ(t) -Γ(t)B(t)Σ(t)^-1(D^⊤(t)P(t)β(t)+D_0^⊤(t)P(t)β_0(t)) +C^⊤P(t)β(t)+C_0^⊤P(t)β_0(t) -(P(t)B(t)+C^⊤P(t)D(t)+C_0^⊤P(t)D_0(t))Σ(t)^-1 ×(D^⊤P(t)β(t)+D_0^⊤P(t)β_0(t)) +(P(t)+Γ(t))α(t) -Γ(t)B(t)Σ(t)^-1B^⊤(t)Γ(t)-Q(t)=0, Γ(T)=0, . the deterministic function Φ(·) solves the following standard ordinary differential equation (ODE) { Φ̇(t)+(A^⊤(t)-(P(t)B(t)+C^⊤(t)P(t)D(t)+C_0^⊤(t)P(t)D_0(t)) ×Σ(t)^-1B^⊤(t)-Γ(t)B(t)Σ(t)^-1B^⊤(t))Φ(t) +(C^⊤(t) -(P(t)B(t)+C^⊤(t)P(t)D(t)+C_0^⊤(t)P(t)D_0(t)) ×Σ(t)^-1D^⊤(t)-Γ(t)B(t)Σ(t)^-1D^⊤(t))P(t)σ(t) +(C_0^⊤(t)-(P(t)B(t)+C^⊤(t)P(t)D(t)+C_0^⊤(t)P(t)D_0(t)) ×Σ(t)^-1D_0^⊤(t) -Γ(t)B(t)Σ(t)^-1D_0^⊤(t))P(t)σ_0(t) +(P(t)+Γ(t))b(t)=0, Φ(T)=0, . the limit value of the state-average m(·) solves the following SDE { dm(t)={(A(t)+α(t))m(t) -B(t)Σ(t)^-1(B^⊤(t)P(t) +D^⊤(t)P(t)C(t)+D_0^⊤(t)P(t)C_0(t)+B^⊤(t)Γ(t) +D^⊤(t)P(t)β(t)+D_0^⊤(t)P(t)β_0(t) )𝔼[m(t)] +b(t)-B(t)Σ(t)^-1 ×(B^⊤(t)Φ(t)+D^⊤(t)P(t)σ(t)+D_0^⊤(t)P(t)σ_0(t))}dt, +{(C_0(t)+β(t))m(t)-D_0(t)Σ(t)^-1(B^⊤(t)P(t)+D^⊤(t)P(t)C(t) +D_0^⊤(t)P(t)C_0(t)+B^⊤(t)Γ(t)+D^⊤(t)P(t)β(t) +D_0^⊤(t)P(t)β_0(t))𝔼m(t)+σ_0(t) -D_0(t)Σ(t)^-1 ×(B^⊤(t)Φ(t)+D^⊤(t)P(t)σ(t)+D_0^⊤(t)P(t)σ_0(t))}dW_0(t), m(0)=x, . and the optimal filtering ẑ̅̂_i(·) solves the following SDE { dẑ̅̂_i(t)= {(A(t)-B(t)Σ(t)^-1(B^⊤(t)P(t)+D^⊤(t)P(t)C(t) +D_0^⊤(t)P(t)C_0(t)))ẑ̅̂_i(t) +(α(t)-B(t)Σ(t)^-1(B^⊤(t)Γ(t) +D^⊤P(t)β(t) +D_0^⊤(t)P(t)β_0(t)) )𝔼[m(t)]-B(t)Σ(t)^-1(B^⊤(t)Φ(t) +D^⊤(t)P(t)σ(t)+D_0^⊤(t)P(t)σ_0(t))+b(t)}dt +{(C(t)-D(t)Σ(t)^-1(B^⊤(t)P(t)+D^⊤(t)P(t)C(t) +D_0^⊤(t)P(t)C_0(t)))ẑ̅̂_i(t) +(β(t)-D(t)Σ(t)^-1(B^⊤(t)Γ(t) +D^⊤P(t)β(t) +D_0^⊤(t)P(t)β_0(t)) )𝔼[m(t)] -D(t)Σ(t)^-1(B^⊤(t)Φ(t)+D^⊤(t)P(t)σ(t)+D_0^⊤(t)P(t)σ_0(t)) +σ(t)}dW_i(t), ẑ̅̂_i(0)=x. . Due to the terminal condition and structure of (<ref>), we suppose p̅_i(t)=-P(t)z̅_i(t)-Γ(t)𝔼[z̅_i(t)]-Φ(t), 1≤ i ≤ N, with P(T)=G, Γ(T)=0 and Φ(T)=0, where P(·), Γ(·) and Φ(·) will be specified later. Applying Itô's formula to (<ref>), we have d p̅_i(t)=[-(Ṗ(t)+P(t)A(t))z̅_i(t) -(Γ̇(t)+Γ(A(t)+α(t)))𝔼[z̅_i(t)] -P(t)α(t)𝔼[z̅_i(t)|ℱ_t^W_0] -P(t)B(t)u̅_i(t)-Γ(t)B(t)𝔼[u̅_i(t)] -Φ̇(t)-(P(t)+Γ(t))b(t)]dt -P(t)[C(t)z̅_i(t) +D(t)u̅_i(t)+β(t)𝔼[z̅_i(t)|ℱ_t^W_0]+σ(t)]dW_i(t) -P(t)[C_0(t)z̅_i(t) +D_0(t)u̅_i(t)+β_0(t)𝔼[z̅_i(t)|ℱ_t^W_0]+σ_0(t)]dW_0(t). Comparing with the diffusion terms in the second equation of (<ref>), we get q̅_i(t)= -P(t)[C(t)z̅_i(t) +D(t)u̅_i(t)+β(t)𝔼[z̅_i(t)|ℱ_t^W_0]+σ (t)], q̅_i,0(t)= -P(t)[C_0(t)z̅_i(t) +D_0(t)u̅_i(t)+β_0(t)𝔼[z̅_i(t)|ℱ_t^W_0]+σ_0 (t)]. By taking conditional expectation w.r.t. ℱ_t^W_i of (<ref>) and (<ref>), we have p̂̅̂_i(t)= -P(t)ẑ̅̂_i(t)-Γ(t)𝔼[z̅_i(t)]-Φ(t), q̂̅̂_i(t)= -P(t)[C(t)ẑ̅̂_i(t) +D(t)u̅_i(t)+β𝔼[z̅_i(t)]+σ(t)], q̂̅̂_i,0(t)= -P(t)[C_0(t)ẑ̅̂_i(t) +D_0(t)u̅_i(t)+β_0𝔼[z̅_i(t)]+σ_0(t)]. 
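For numerical purposes, once P(·), Γ(·) and Φ(·) are available on a time grid, the decentralized feedback strategy above can be assembled pointwise in time. The following sketch (an illustration only, using NumPy with time-frozen coefficient matrices) is one possible implementation of this assembly and is not part of the proof.

```python
import numpy as np

def feedback_control(zhat_i, Em, P, Gamma, Phi, coef):
    """Evaluate the decentralized feedback strategy of the theorem at a fixed time t.
    `coef` collects the time-t matrices B (n x k), C, C0, beta, beta0 (n x n),
    D, D0 (n x k), R (k x k) and the vectors sigma, sigma0 (n,)."""
    B, C, C0 = coef["B"], coef["C"], coef["C0"]
    D, D0, R = coef["D"], coef["D0"], coef["R"]
    beta, beta0 = coef["beta"], coef["beta0"]
    sigma, sigma0 = coef["sigma"], coef["sigma0"]

    Sigma = R + D.T @ P @ D + D0.T @ P @ D0                      # Sigma(t)
    gain_state = B.T @ P + D.T @ P @ C + D0.T @ P @ C0           # multiplies \hat z_i(t)
    gain_mean = B.T @ Gamma + D.T @ P @ beta + D0.T @ P @ beta0  # multiplies E[m(t)]
    offset = B.T @ Phi + D.T @ P @ sigma + D0.T @ P @ sigma0     # constant term

    return -np.linalg.solve(Sigma, gain_state @ zhat_i + gain_mean @ Em + offset)
```

We now turn to the derivation of these representations.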
Then substituting them into (<ref>), we can derive u̅_i(t)= -Σ(t)^-1(B^⊤(t)P(t)+D^⊤(t)P(t)C(t)+D_0^⊤(t)P(t)C_0(t))ẑ̅̂_i(t) -Σ(t)^-1(B^⊤(t)Γ(t)+D^⊤P(t)β(t)+D_0^⊤(t)P(t)β_0(t))𝔼[z̅_i(t)] -Σ(t)^-1(B^⊤(t)Φ(t)+D^⊤(t)P(t)σ(t)+D_0^⊤(t)P(t)σ_0(t)), 1≤ i ≤ N. By recalling that m(·)=𝔼[z̅_i(·)|ℱ_t^W_0], we have 𝔼[m(·)]=𝔼[z̅_i(·)], then (<ref>) holds. Moreover, we have 𝔼[u̅_i(t)]= -Σ(t)^-1(B^⊤(t)P(t)+D^⊤(t)P(t)C(t)+D_0^⊤(t)P(t)C_0(t) +B^⊤(t)Γ(t)+D^⊤(t)P(t)β(t)+D_0^⊤(t)P(t)β_0(t))𝔼[z̅_i(t)] -Σ(t)^-1(B^⊤(t)Φ(t)+D^⊤(t)P(t)σ(t)+D_0^⊤(t)P(t)σ_0(t)), 1≤ i ≤ N. Now, let us consider the equations for P(·), Γ(·) and Φ(·). Comparing the drift terms of (<ref>) with second equation in (<ref>), by noticing (<ref>) and (<ref>), we obtain (Ṗ(t)+P(t)A(t)+A^⊤(t)P(t)+C^⊤(t)P(t)C(t)+C_0^⊤(t)P(t)C_0(t) +Q(t))z̅_i(t) +(P(t)B(t)+C^⊤(t)P(t)D(t)+C_0^⊤(t)P(t)D_0(t))u̅_i(t) +(Γ̇(t)+Γ(t)(A(t)+α(t))+A^⊤(t)Γ(t))𝔼[z̅_i(t)] +Γ(t)B(t)𝔼[u̅_i(t)] +(P(t)α(t)+C^⊤(t)P(t)D(t)+C_0^⊤(t)P(t)D_0(t)-Q(t)) ×𝔼[z̅_i(t)|ℱ_t^W_0] +Φ̇(t)+P(t)b(t)+Γ(t)b(t)+A^⊤(t)Φ(t) +C^⊤(t)P(t)σ(t)+C_0^⊤(t)P(t)σ_0(t)=0. By substituting (<ref>) and (<ref>) into (<ref>) and taking conditional expectation w.r.t. ℱ_t^W_i of (<ref>), we obtain the equations for P(·), Γ(·) and Φ(·) which solve (<ref>), (<ref>) and (<ref>), respectively. In addition, by substituting (<ref>) into (<ref>), it is easy to show that m(·) solves (<ref>). Moreover, from (<ref>) and (<ref>), then equation (<ref>) of the optimal filtering ẑ̅̂_i(·) is obtained, which completes the proof. To summarize, we obtain that (P(·),Γ(·),Φ(·)) solves the following Riccati system { Ṗ(t)+P(t)A(t)+A^⊤(t)P(t)+C^⊤(t)P(t)C(t)+C_0^⊤(t)P(t)C_0(t) +Q(t) -(P(t)B(t)+C^⊤(t)P(t)D(t)+C_0^⊤(t)P(t)D_0(t))Σ(t)^-1 ×(B^⊤(t)P(t)+D^⊤(t)P(t)C(t)+D_0^⊤(t)P(t)C_0(t))=0, Γ̇(t)+Γ(t)(A(t)-B(t)Σ(t)^-1(B^⊤(t)P(t)+D^⊤(t)P(t)C(t) +D_0^⊤(t)P(t)C_0(t)))+(A(t)-B(t)Σ(t)^-1 ×(B^⊤(t)P(t)+D^⊤(t)P(t)C(t)+D_0^⊤(t)P(t)C_0(t)))^⊤Γ(t) -Γ(t)B(t)Σ(t)^-1(D^⊤(t)P(t)β(t)+D_0^⊤(t)P(t)β_0(t)) +C^⊤P(t)β(t)+C_0^⊤P(t)β_0(t) -(P(t)B(t)+C^⊤P(t)D(t)+C_0^⊤P(t)D_0(t))Σ(t)^-1 ×(D^⊤P(t)β(t)+D_0^⊤P(t)β_0(t)) +(P(t)+Γ(t))α(t) -Γ(t)B(t)Σ(t)^-1B^⊤(t)Γ(t)-Q(t)=0, Φ̇(t)+(A^⊤(t)-(P(t)B(t)+C^⊤(t)P(t)D(t)+C_0^⊤(t)P(t)D_0(t)) ×Σ(t)^-1B^⊤(t)-Γ(t)B(t)Σ(t)^-1B^⊤(t))Φ(t) +(C^⊤(t) -(P(t)B(t)+C^⊤(t)P(t)D(t)+C_0^⊤(t)P(t)D_0(t)) ×Σ(t)^-1D^⊤(t)-Γ(t)B(t)Σ(t)^-1D^⊤(t))P(t)σ(t) +(C_0^⊤(t)-(P(t)B(t)+C^⊤(t)P(t)D(t)+C_0^⊤(t)P(t)D_0(t)) ×Σ(t)^-1D_0^⊤(t) -Γ(t)B(t)Σ(t)^-1D_0^⊤(t))P(t)σ_0(t) +(P(t)+Γ(t))b(t)=0, P(T)=G, Γ(T)=0, Φ(T)=0, . where Σ(t)=R(t)+D^⊤(t)P(t)D(t)+D_0^⊤(t)P(t)D_0(t). Noticing that due to the term P(t)B(t)+C^⊤(t)P(t)D(t)+C_0^⊤(t)P(t)D_0(t) in (<ref>) and D^⊤(t)P(t)D(t)+D_0^⊤(t)P(t)D_0(t) ≠(D^⊤(t)+D_0^⊤(t))^⊤P(t)(D^⊤(t)+D_0^⊤(t)), it means that (<ref>) is not standard Riccati equation. Concerning (<ref>), it is more complicated since the non-symmetry comes from not only the term Γ(t)B(t)Σ(t)^-1(D^⊤(t)P(t)β(t)+D_0^⊤(t)P(t)β_0(t)) but also the term C^⊤P(t)β(t)+C_0^⊤P(t)β_0(t)-(P(t)B(t)+C^⊤P(t)D(t)+C_0^⊤(t)P(t)D_0(t))Σ(t)^-1(D^⊤P(t)β(t)+D_0^⊤P(t)β_0(t)) and the term Γ(t)α(t). Concerning the equation of Φ(·), it is a ODE once P(·) and Γ(·) are determined. In general, we see that the solvability of (P(·),Γ(·),Φ(·)) is very complicated. Firstly, we investigate the well-posedness of equation (<ref>). Let - hold. Then the Riccati equation (<ref>) admits a unique solution P(·)∈ C([0,T];𝒮_+^n). Firstly, we prove (<ref>) admits at most one solution P(·)∈ C([0,T];𝒮_+^n). 
Suppose P_1(·) and P_2(·) are two solutions of (<ref>) and we set P(·)=P_1(·)-P_2(·), then P(·) satisfies { Ṗ(t)+P(t)A(t)+A^⊤(t)P(t)+C^⊤(t)P(t)C(t) +C_0^⊤(t)P(t)C_0(t) -(P(t)B(t)+C^⊤(t)P(t)D(t) +C_0^⊤(t)P(t)D_0(t))Σ_1(t)^-1 ×(P_1(t)B(t)+C^⊤(t)P_1(t)D(t)+C_0^⊤(t)P_1(t)D_0(t))^⊤ -(P_2(t)B(t)+C^⊤(t)P_2(t)D(t)+C_0^⊤(t)P_2(t)D_0(t)) ×Σ_2(t)^-1(P(t)B(t)+C^⊤(t)P(t)D(t)+C_0^⊤(t)P(t)D_0(t))^⊤ +(P_2(t)B(t)+C^⊤(t)P_2(t)D(t)+C_0^⊤(t)P_2(t)D_0(t)) ×Σ_1(t)^-1(D^⊤(t)P(t)D(t) +D_0^⊤(t)P(t)D_0(t))Σ_2(t)^-1 ×(P_1(t)B(t)+C^⊤(t)P_1(t)D(t)+C_0^⊤(t)P_1(t)D_0(t))^⊤=0, P(T)=0, . where Σ_i(t)=R(t)+D^⊤(t)P_i(t)D(t)+D_0^⊤(t)P_i(t)D_0(t), i=1,2. Noting that for i=1,2, Σ_i(t)^-1≤ R(t)^-1 and |Σ_i(t)^-1|<∞, recalling R≫0 (which implies R^-1∈ L^∞(0,T;𝒮^k)). Consequently, |Σ_1(t)^-1| and |Σ_2(t)^-1| are uniformly bounded. Using Gronwall's inequality, we get P(t)=0. This means that the uniqueness of P(·) is proved. Secondly, let us focus on the existence of solution to (<ref>). Motivated by <cit.>, we can verify that equation (<ref>) is equivalent to the following equation { Ṗ(t)+P(t)A(t)+A^⊤(t)P(t)+C^⊤(t)P(t)C(t) +C_0^⊤(t)P(t)C_0(t)+Q(t)=0, P(T)=G, . where { A(t)=A(t)-B(t)Ψ(t), C(t)=C(t)-D(t)Ψ(t), C_0(t)=C_0(t)-D_0(t)Ψ(t), Q(t)=Q(t)+Ψ^⊤(t)R(t)Ψ(t), Ψ(t)=(R(t)+D^⊤(t)P(t)D(t)+D_0^⊤(t)P(t)D_0(t))^-1 × (P(t)B(t)+C^⊤(t)P(t)D(t)+C_0^⊤(t)P(t)D_0(t))^⊤. . We mention that different from <cit.>, we have additional terms in (<ref>) and (<ref>), thus we need modify Ψ(t), which includes D_0^⊤(t)P(t)D_0(t). Next, with the help of modified iterative method and mathematical induction, we can prove the existence of solution to (<ref>), which yields that (<ref>) also has a solution. To do this, we set { Ṗ_0(t)+P_0(t)A(t)+A^⊤(t)P_0(t)+C^⊤(t)P_0(t)C(t) +C_0^⊤(t)P_0(t)C_0(t)+Q(t)=0, P_0(T)=G, . which admits a unique P_0(·)∈ C([0,T];𝒮_+^n) by noticing Q(·)≥ 0 (see Lemma 7.3 of <cit.>). For i≥0, we define { Ψ_i(t)=(R(t)+D^⊤(t)P_i(t)D(t)+D_0^⊤(t)P_i(t)D_0(t))^-1 × (P_i(t)B(t)+C^⊤(t)P_i(t)D(t)+C_0^⊤(t)P_i(t)D_0(t))^⊤, A_i(t)=A(t)-B(t)Ψ_i(t), C_i(t)=C(t)-D(t)Ψ_i(t), C_0,i(t)=C_0(t)-D_0(t)Ψ_i(t), Q_i(t)=Q(t)+Ψ_i^⊤(t)R(t)Ψ_i(t). . Noticing that (<ref>) is also quite different from <cit.> due to the appearance of D_0^⊤(t)P_i(t)D_0(t). Let P_i+1(t) be the solution of { Ṗ_i+1(t)+P_i+1(t)A_i(t)+A_i^⊤(t)P_i+1(t) +Q_i(t) +C_i^⊤(t)P_i+1(t)C_i(t) +C_0,i^⊤(t)P_i+1(t)C_0,i(t)=0, P_i+1(T)=G. . Noticing that R≫0, Q≥0 and G≥0, by using Lemma 7.3 of <cit.> and mathematical induction, one can check that for i≥ 0, P_i(·) is well defined and P_i(·)∈ C([0,T];𝒮_+^n). Now we will show that P_i(·), for i≥0, is a decreasing sequence in C([0,T];𝒮_+^n). For simplicity, we denote Δ_i-1(t)=P_i-1(t)-P_i(t), Υ_i(t)=Ψ_i-1(t)-Ψ_i(t) and Υ_0(t)=-Ψ_0(t). From (<ref>) and (<ref>), when i=0, we have { Δ̇_0(t)+P_0(t)(A(t)-A_0(t))+Δ_0(t)A_0(t)+A_0^⊤(t)Δ_0(t) +(A(t)-A_0(t))^⊤ P_0(t)+Q(t)-Q_0(t) +C^⊤(t)P_0(t)C(t)-C_0^⊤(t)P_0(t)C_0(t)+ C_0^⊤(t)Δ_0(t)C_0(t) +C_0^⊤(t)P_0(t)C_0(t)-C_0,0^⊤(t)P_0(t)C_0,0(t)+ C_0,0^⊤(t)Δ_0(t)C_0,0(t)=0, Δ_0(T)=0, . which yields that -[Δ̇_0(t)+Δ_0(t)A_0(t)+A_0^⊤(t)Δ_0(t)+C_0^⊤(t)Δ_0(t)C_0(t)+C_0,0^⊤(t)Δ_0(t)C_0,0(t)] = Υ^⊤_0(t)(R(t)+D^⊤(t)P_0(t)D(t)+D_0^⊤(t)P_0(t)D_0(t))Υ_0(t)≥ 0. By noting Δ_0(T)=0 and Lemma 7.3 of <cit.>, we get P_0(t)≥ P_1(t), for all t∈ [0,T]. For i≥1, suppose P_i-1(t)≥ P_i(t), t∈[0,T], it is sufficient to prove P_i(t)≥ P_i+1(t), t∈[0,T]. 
Using (<ref>) and Δ_i(t)=P_i(t)-P_i+1(t), we have { Δ̇_i(t)+P_i(t)(A_i-1(t)-A_i(t))+Δ_i(t)A_i(t) +(A_i-1(t)-A_i(t))^⊤ P_i(t)+A_i^⊤(t)Δ_i(t) +C_i^⊤(t)Δ_i(t)C_i(t)+C_i-1^⊤(t)P_i(t)C_i-1(t)-C_i^⊤(t)P_i(t)C_i(t) +C_0,i^⊤(t)Δ_i(t)C_0,i(t)+C_0,i-1^⊤(t)P_i(t)C_0,i-1(t)-C_0,i^⊤(t)P_0,i(t)C_0,i(t) +Q_i-1(t)-Q_i(t)=0, Δ_i(T)=0. . According to (<ref>), we have A_i-1(t)-A_i(t)=-B(t)Υ_i(t), C_i-1(t)-C_i(t)=-D(t)Υ_i(t), C_0,i-1(t)-C_0,i(t)=-D_0(t)Υ_i(t), C_i-1^⊤(t)P_i(t)C_i-1(t)-C_i^⊤(t)P_i(t)C_i(t) = Υ_i^⊤(t)D^⊤(t)P_i(t)D(t)Υ_i(t)-C_i^⊤(t)P_i(t)D(t)Υ_i(t)-Υ_i^⊤(t)D^⊤(t)P_i(t)C_i(t), C_0,i-1^⊤(t)P_i(t)C_0,i-1(t)-C_0,i^⊤(t)P_i(t)C_0,i(t) = Υ_i^⊤(t)D_0^⊤(t)P_i(t)D_0(t)Υ_i(t)-C_0,i^⊤(t)P_i(t)D_0(t)Υ_i(t)-Υ_i^⊤(t)D_0^⊤(t)P_i(t)C_0,i(t), Q_i-1(t)-Q_i(t) = Υ_i^⊤(t)R(t)Υ_i(t) +Ψ_i^⊤(t)R(t)Υ_i(t) +Υ_i^⊤(t)R(t)Ψ_i(t). From (<ref>), we obtain -[Δ̇_i(t)+Δ_i(t)A_i(t)+A_i^⊤(t)Δ_i(t)+C_i^⊤(t)Δ_i(t)C_i(t)+C_0,i^⊤(t)Δ_i(t)C_0,i(t)] = Υ_i^⊤(t)(R(t)+D^⊤(t)P_i(t)D(t)+D_0^⊤(t)P_i(t)D_0(t))Υ_i(t) ≥ 0. Using Δ_i(T)=0 and Lemma 7.3 of <cit.> again, we have P_i(t)≥ P_i+1(t), t∈[0,T]. Therefore, {P_i(·)} is a decreasing sequence in C([0,T];𝒮_+^n), and thus has a limit denoted by P(·). Now by integrating equation (<ref>) on the interval [t,T] and then sending i to infinite, with the help of dominate convergence theorem, one can show that P(·) solves (<ref>) (and hence (<ref>)). The proof is complete. One may applying the method of Bellman's Pinciple of quasi linearization and monotone convergence result of symmetric matrices as in <cit.> (see also <cit.>) to prove the well-posdeness of Riccati equation (<ref>). Here, we use pure algebraic technique which is different to <cit.> to obtain the well-posedness. Next, we give the well-posedness of Riccati system (<ref>). From Lemma <ref>, we know that equation (<ref>) admits a unique solution P(·)∈ C([0,T];𝒮_+^n), once Γ(·) is uniquely solved, the well-posedness of Φ(·) holds by noting that the equation for Φ(·) is just an ODE. If Γ(·) is uniquely solved, then system (<ref>) admits a unique solution (P(·),Γ(·), Φ(·))∈ C([0,T];𝒮_+^n×𝒮^n×ℝ^n). Under certain conditions, Γ(·) is uniquely solved. One trivial example is that n = k = 1, in which case equation (<ref>) becomes a one-dimensional ODE whose well-posedness is obvious. Now, let us consider some non-trivial case such that (<ref>) has a unique solution. Firstly, to guarantee the symmetry of Riccati equation (<ref>) (see the discussion before Lemma <ref>), one natural assumption is that α(·)=δ I and β(·)=β_0(·)=0, where δ is a constant. We will keep this assumption in the following arguments. The following theorem gives a general condition to guarantee the well-posedness of Riccati equation (<ref>). Suppose that α(·)=δ I, β(·)=β_0(·)=0, if -C^⊤PDΣ^-1D_0^⊤PC_0-C_0^⊤PD_0Σ^-1D^⊤PC≥0, then Riccati equation (<ref>) admits a unique solution. Inspired by Yong <cit.>, we transform the solvability of Γ(·) to the solvability of another Riccati equation Π(·). To do this, we set Π(·):=P(·)+Γ(·), then from system (<ref>), the function Π(·) solves the following equation (with t being suppressed) { Π̇+Π[A-BΣ^-1(D^⊤PC+D_0^⊤PC_0)] +[A-BΣ^-1(D^⊤PC+D_0^⊤PC_0)]^⊤Π+δΠ +C^⊤(P -PDΣ^-1D^⊤P)C+C_0^⊤(P -PD_0Σ^-1D_0^⊤P)C_0 -C^⊤PDΣ^-1D_0^⊤PC_0-C_0^⊤PD_0Σ^-1D^⊤PC -Π BΣ^-1B^⊤Π=0, Π(T)=G. . Recalling that Σ=R+D^⊤PD+D_0^⊤PD_0, we have P-PDΣ^-1D^⊤P =P-PD(R+D^⊤PD+D_0^⊤PD_0)^-1D^⊤P = P^1/2[I-P^1/2DR^-1/2[I+R^-1/2(D^⊤P^1/2P^1/2D +D_0^⊤P^1/2P^1/2D_0)R^-1/2]^-1 R^-1/2D^⊤P^1/2]P^1/2 = P^1/2[I-Λ(I+Λ^⊤Λ+Λ^⊤Λ)^-1Λ^⊤]P^1/2, where Λ=P^1/2DR^-1/2 and Λ=P^1/2D_0R^-1/2. 
From (I+Λ^⊤Λ+Λ^⊤Λ)^-1≤ (I+Λ^⊤Λ)^-1, we have I-Λ(I+Λ^⊤Λ+Λ^⊤Λ)^-1Λ^⊤≥ I-Λ(I+Λ^⊤Λ)^-1Λ^⊤. By noting that I-Λ(I+Λ^⊤Λ)^-1Λ^⊤=(I+ΛΛ^⊤)^-1, we have P-PDΣ^-1D^⊤P ≥ P^1/2(I+ΛΛ^⊤)^-1P^1/2 = P^1/2(I+P^1/2DR^-1D^⊤P^1/2)^-1P^1/2≥ 0. Therefore, the following inequality holds: C^⊤(P-PDΣ^-1D^⊤P)C≥ 0. Similarly, it also holds that C_0^⊤(P-PD_0Σ^-1D_0^⊤P)C_0≥ 0. Recalling (<ref>), by the standard result (see Theorem 7.2 in <cit.>), we have that equation (<ref>) admits a unique solution Π(·)∈ C([0,T];𝒮_+^n). By recalling Π=P+Γ and Lemma <ref>, we get the well-posedness of equation (<ref>). Condition (<ref>) can be easily satisfied, for example, when C_0=0 or D_0=0 or C_0=-C, D_0=D or C_0=C, D_0=-D. Compared with Huang and Wang <cit.>, the diffusion terms of (<ref>) can depend on both the state variable and the control variable. This makes the Riccati equations more challenging to solve. Our results generalize those of <cit.> in a non-trivial manner. Moreover, if we let C=D=D_0=β=β_0=0, our results are consistent with <cit.>. § APPLICATION IN NETWORK SECURITY MODEL With the development of information technology, high-frequency network communications have become ubiquitous, and network security problems accompany this technology; the readers can refer to <cit.> and the references therein for studies of network security models using MFG theory. In this section, we investigate how the investments of the users affect network security by applying our theoretical results. Consider a network with N users 𝒜_i, for 1≤ i≤ N. We suppose that each user 𝒜_i is willing to improve its internet security level and that individual users only know their own information. Let x_i(·) be the safety level of user 𝒜_i in the network, which is characterized by the following SDE driven by two mutually independent standard Brownian motions W_i and W_0: { dx_i(t)= [ax_i(t)+bu_i(t)+δ x^(N)(t)+k]dt +[cx_i(t)+du_i(t)+σ ]dW_i(t)+[d_0u_i(t)+σ_0]dW_0 (t), x_i(0)= x. . Here, W_i and W_0 can be interpreted as the individual noise and the common noise of user 𝒜_i. The control input u_i(·) of user 𝒜_i represents the investment in security maintenance, such as purchasing antivirus software and updating computer hardware. x^(N)(·)=1/N∑_j=1^Nx_j(·) is the average security level of all users, which reflects the fact that the security levels of all users interact: for example, if a user's security level is reduced, he may spread a virus to other users when he communicates with them online. Each user is devoted to minimizing the following performance criterion 𝒥_i(u_i(· ),u_-i(· ))=1/2𝔼{∫_0^T[ q(x_i(t)-x^(N)(t))^2+ ru_i^2(t)]dt+gx_i^2(T) }, where u_-i=(u_1,…,u_i-1,u_i+1,…,u_N), 1≤ i≤ N, and q, g are nonnegative constants and r>0. Obviously, this network security model is a special case of the LQ MFGs studied in Section <ref>.
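Before specializing the results analytically, we indicate how the Riccati system of Section <ref> reduces to scalar ODEs for this model (A = a, B = b, C = c, C_0 = 0, D = d, D_0 = d_0, α = δ, β = β_0 = 0, b(t) = k, Q = q, R = r, G = g) and how it could be integrated numerically. The sketch below is an illustration only; it assumes SciPy's solve_ivp, and the coefficient values coincide with those used later for the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Scalar model coefficients (the values used later for the figures)
a, b, delta, k = 1.5, 2.8, 1.0, 2.0
c, d, d0 = 0.6, 2.5, 6.0
sigma, sigma0 = 0.8, 0.3
q, r, g, T = 3.3, 2.5, 5.0, 1.0

def riccati_rhs(t, y):
    """Scalar Riccati/ODE system for (P, Gamma, Phi), integrated backward in time."""
    P, Gam, Phi = y
    Sigma = r + (d ** 2 + d0 ** 2) * P                 # Sigma(t)
    S = (b + c * d) * P                                # B^T P + D^T P C + D_0^T P C_0
    dP = -((2 * a + c ** 2) * P + q - S ** 2 / Sigma)
    dGam = -(2 * (a - b * S / Sigma) * Gam + (P + Gam) * delta
             - b ** 2 * Gam ** 2 / Sigma - q)
    dPhi = -((P + Gam) * k
             + (a - b * S / Sigma - b ** 2 * Gam / Sigma) * Phi
             + (c - d * S / Sigma - b * d * Gam / Sigma) * P * sigma
             - (d0 * S / Sigma + b * d0 * Gam / Sigma) * P * sigma0)
    return [dP, dGam, dPhi]

# Terminal data P(T) = g, Gamma(T) = 0, Phi(T) = 0; integrate from t = T down to t = 0.
sol = solve_ivp(riccati_rhs, (T, 0.0), [g, 0.0, 0.0], rtol=1e-8, atol=1e-10)
P0, Gam0, Phi0 = sol.y[:, -1]
print(f"P(0) = {P0:.4f}, Gamma(0) = {Gam0:.4f}, Phi(0) = {Phi0:.4f}")
```

The explicit scalar equations encoded here are displayed next.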
By applying Theorem <ref>, we obtain that the decentralized strategy is given by, for 1≤ i≤ N, u̅_i(t)=-(bP(t)+cdP(t))ẑ̅̂_i(t)+bΓ(t) 𝔼[m(t)]+bΦ(t)+dσ P(t)+d_0σ_0P(t)/r+d^2P(t)+d_0^2P(t), where P(·),Γ(·),Φ(·),𝔼[m(·)] solve respectively (see (<ref>)-(<ref>), Ṗ(t)+(2a+c^2)P(t)+q-(b+cd)^2P^2(t)/r+d^2P(t)+d_0^2P(t)=0, P(T)=g, and Γ̇(t)+2(a-b(b+cd)P(t)/r+d^2P(t)+d_0^2P(t))Γ(t) +(P(t)+Γ(t))δ -b^2Γ(t)^2/r+d^2P(t)+d_0^2P(t)-q=0, Γ(T)=0, and Φ̇(t)+(P(t)+Γ(t))k +(a-b(b+cd)P(t)/r+d^2P(t)+d_0^2P(t)-b^2Γ(t)/r+d^2P(t)+d_0^2P(t))Φ(t) +(c-d(b+cd)P(t)/r+d^2P(t)+d_0^2P(t)-bdΓ(t)/r+d^2P(t)+d_0^2P(t))P(t)σ -(d_0(b+cd)P(t)/r+d^2P(t)+d_0^2P(t)+bd_0Γ(t)/r+d^2P(t)+d_0^2P(t))P(t)σ_0 =0, Φ(T)=0, and d𝔼[m(t)]= {[a+δ-b(bP(t)+cdP(t)+bΓ(t))/r+d^2P(t)+d_0^2P(t)]𝔼[m(t)] +k-b(bΦ(t)+dσ P(t)+d_0σ_0P(t))/r+d^2P(t)+d_0^2P(t)}dt, 𝔼[m(0)]= x. The optimal filtering ẑ̅̂_i(·) is given by { dẑ̅̂_i(t)= {(a-b(b+cd)P(t)/r+d^2P(t)+d_0^2P(t))ẑ̅̂_i(t)+(δ-b^2Γ(t)/r+d^2P(t)+d_0^2P(t))𝔼[m(t)] -b(bΦ(t)+dσ P(t)+d_0σ_0P(t))/r+d^2P(t)+d_0^2P(t)+k}dt +{(c-d(b+cd)P(t)/r+d^2P(t)+d_0^2P(t))ẑ̅̂_i(t)-dbΓ(t)/r+d^2P(t)+d_0^2P(t)𝔼[m(t)] -d(bΦ(t)+dσ P(t)+d_0σ_0P(t))/r+d^2P(t)+d_0^2P(t)+σ}dW_i(t), ẑ̅̂_i(0)= x. . In order to better illustrate the application in network security, and solve explicitly the Nash equilibrium, we set a=b=δ=σ=σ_0=r=g=1, c=d=d_0=k=0 and q=3. From (<ref>), the decentralized strategy becomes u̅_i(t)=-(P(t)ẑ̅̂_i(t)+Γ(t)𝔼[m(t)]+Φ(t)), where { Ṗ(t)+2P(t)+3-P^2(t)=0, Γ̇(t)+2(1-P(t))Γ(t) +(P(t)+Γ(t)) -Γ(t)^2-3=0, Φ̇(t)+(1-P(t)-Γ(t))Φ(t)=0, P(T)=1, Γ(T)=0, Φ(T)=0, . and { dm(t) =[2m(t)-(P(t)+Γ(t))𝔼[m(t)]-Φ(t)]dt, m(0) =x. . Then, we have m(t)=𝔼[m(t)], thus { dm(t) =[(2-P(t)-Γ(t))m(t)-Φ(t)]dt, m(0) =x. . Moreover, the optimal filtering ẑ̅̂_i(·) is given by { dẑ̅̂_i(t)= [(1-P(t))ẑ̅̂_i(t)+(1-Γ(t))𝔼[m(t)] -Φ(t)]dt +dW_i(t), ẑ̅̂_i(0)= x. . From (<ref>), it is easy to get P(t)=3-e^4(t-T)/1+e^4(t-T), Φ(t)=0, t∈ [0,T]. Moreover, let Π(·):=P(·)+Γ(·), then Π(·) solves the following Riccati equation { Π̇(t)+3Π(t)-Π^2(t)=0, t∈ [0,T] Π(T)=1, . which yields that Γ(t)=Π(t)-P(t)=3/1+2e^3(t-T)-3-e^4(t-T)/1+e^4(t-T). In this setting, the decentralized strategy is given by u̅_i(t)= e^4(t-T)-3/1+e^4(t-T)(ẑ̅̂_i(t)-m(t)) -3/1+2e^3(t-T)m(t), where m(t)= xe^2t-∫_0^t3/1+2e^3(s-T)ds, ẑ̅̂_i(t)= xe^t-∫_0^t3-e^4(s-T)/1+e^4(s-T)ds +∫_0^te^∫_s^t(1-3-e^4(u-T)/1+e^4(u-T))dudW_i(s) +x∫_0^t(1-3/1+2e^3(t-T)+3-e^4(s-T)/1+e^4(s-T)) e^2s-∫_s^t3-e^4(u-T)/1+e^4(u-T)du-∫_0^s3/1+2e^3(u-T)duds. Fig 1 gives the numerical solutions of P(·), Γ(·) and Φ(·). One can see that P(·) is gradually decreasing, Φ(·) is always 0. Γ(·) changes slowly at the beginning, and there is an upward trend near the terminal time. Fig 2 gives the numerical solution of m(·) which shows that m(·) decreases first and then increases. Fig 3 presents the optimal filtering of decentralized states of 50 users. By comparing Fig 3 and Fig 2, we can find the overall trend of the states of 50 users is consistent with the trend of m(·), moreover Fig 3 shows clearly the fluctuation of the individual noise. Fig 4 draws the control strategies of 50 users, we can find that the investment in security maintenance of each user is increasing, although they may have different fluctuations. In general, it is difficult to find an explicit solution. To better illustrate our results, we give some numerical simulations below. Suppose that N=50, T=1. Set the initial data as x=1, a=1.5, b=2.8, δ=1, k=2, c=0.6, d=2.5, d_0=6, σ=0.8, σ_0=0.3, q=3.3, r=2.5, g=5. Figure <ref> gives the numerical solutions of P(·), Γ(·) and Φ(·). 
Figure <ref> gives the numerical solutions of m(·) and 𝔼[m(·)]. Figure <ref> and Figure <ref> show, respectively, the optimal filtering of decentralized states and control strategies of 50 users. § DECLARATION OF COMPETING INTEREST The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. 00 BP2014 M. Bardi and F. Priuli, Linear-quadratic N-person and mean-field games with ergodic cost. SIAM J. Control Optim., 52, 3022-3052 (2014). BFH2021 A. Bensoussan, X. Feng and J. Huang, Linear-Quadratic-Gaussian mean-field-game with partial observation and common noise. Math. Control Relat. Fields., 11, 23-46 (2021). BFY2013 A. Bensoussan, J. Frehse and P. Yam, Mean field games and mean field type control theory. SpringerBriefs in Mathematics, Springer, New York, 2013. BCCD2022 E. Bayraktar, A. Cecchin, A. Cohen and F. Delarue, Finite state mean field games with wright-fisher common noise as limits of N-player weighted games. Math. Oper. Res., 1-51 (2022). BCL2017 R. Buckdahn, Y. Chen and J. Li, Partial derivative with respect to the measure and its applications to general controlled mean-field systems. Stochatic Process. Appl., 134, 265-307 (2021). BDLP2009 R. Buckdahn, B. Djehiche, J. Li and S. Peng, Mean field backward stochastic differential equations: A limit approach. Ann. Probab., 37, 1524-1565 (2009). BLM2017 R. Buckdahn, J. Li and J. Ma, A mean-field stochastic control problem with partial observations. Ann. Appl. Probab., 27, 3201-3245 (2017). C2010 P. Cardaliaguet, Notes on Mean Field Games. Technical report, Paris Dauphine University, 2010. CD2013 R. Carmona and F. Delarue, Probabilistic analysis of mean field games. SIAM J. Control Optim., 51, 2705-2734 (2013). CD2018 R. Carmona and F. Delarue, Probabilistic Theory of Mean-Field Games with Applications. Springer, New York, 2018. CDL2016 R. Carmona, F. Delarue and D, Lacker, Mean field games with common noise. Ann. Probab., 44, 3740-3803 (2016). CDLL2019 P. Cardaliaguet, F. Delarue, J. Lasry and P. Lions, The master equation and the convergence problem in mean field games. Princeton University Press, 2019. CFS2015 R. Carmona, J. Fouque and L. Sun, Mean field games and systemic risk. Commun. Math. Sci., 13, 911-933 (2015). HCM2006 M. Huang, P. Caines and R. Malhamé, Distributed multi-agent decision-making with partial observations: asymtotic Nash equilibria. Proc. the 17th Internat. Symposium on Math. Theory on Networks and Systems, Kyoto, Japan., 2006. HY2021 M. Huang and X. Yang, Linear quadratic mean field social optimization: asymptotic solvability and decentralized control. Appl. Math. Optim., 84, 1969-2010 (2021). HHL2018 Y. Hu, J. Huang and X. Li, Linear quadratic mean field game with control input constraint. ESAIM Control Optim. Calc. Var., 24, 901-919 (2018). HHN2018 Y. Hu, J. Huang and T. Nie, Linear-Quadratic-Gaussian mixed mean-field games with heterogeneous input constraints. SIAM J. Control Optim., 56, 2835-2877 (2018). HMC2006 M. Huang, R. Malhamé and P. Caines, Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle. Comm. Inform. Systems, 6, 221-251(2006). HW2016 J. Huang and S. Wang, Dynamic optimization of large-population systems with partial information. J. Optim. Theory Appl., 168, 231-245 (2016). HWW2016 J. Huang, S. Wang and Z. Wu, Backward mean-field Linear-Quadratic-Gaussian (LQG) games: full and partial information. IEEE Trans. Automat. 
Control, 61, 3784-3796 (2016). LL2007 J. Lasry and P. Lions, Mean field games. Japan J. Math., 2, 229-260 (2007). LF2019 R. Li and F. Fu, The maximum principle for partially observed optimal control problems of mean-field FBSDEs. Int. J. Control, 92, 2463-2472 (2019). LMWZ2022 M. Li, C. Mou, Z. Wu and C. Zhou, Linear-quadratic mean-filed games of controls with Non-Monotone Data. Trans. Amer. Math. Soc., 376(6), 4105-4143 (2023). LNW2022 M. Li, T. Nie and Z. Wu, Linear-quadratic large-population problem with partial information: Hamiltonian approach and Riccati approach. preprint, arXiv:2203.10481, 2022. Forthcoming in SIAM J. Control Optim. MB2017 J. Moon and T. Başar, Linear quadratic risk-sensitive and robust mean field games. IEEE Trans. Automat. Control, 62, 1062-1077 (2017). MNZ2005 D. Majerek, W. Nowak and W. Ziȩba, Conditional strong law of large number. Int. J. Pure Appl. Math., 20, 143-156 (2005). MZ2020 C. Mou and J. Zhang, Wellposedness of second order master equations for mean field games with nonsmooth data. Mem. Amer. Math. Soc., accepted, 2020. P1992 S. Peng, Stochastic Hamilton-Jacobi-Bellman equations. SIAM J. Control Optim., 30, 284-304 (1992). SC2019 N. Şen and P. Caines, Mean field games with partial observation. SIAM J. Control Optim., 57, 2064-2091 (2019). ST2016 A. T. Siwe and H. Tembine, Network security as public good: A mean-field-type game theory approach. Proc. 13th Int. Multiconf. Syst. Singles Devices (SSD), 601-606 (2016). T2003 S. Tang, General linear quadratic optimal stochastic control problems with random coefficients: linear stochastic Hamilton systems and backward stochastic Riccati equations. SIAM J. Control Optim., 42, 53-75 (2003). T2018 R. F. Tchuendom, Uniqueness for linear-quadratic mean field games with common noise. Dyn. Games. Appl., 8, 199-210 (2018). W1968 W. M. Wonham, On a matrix Riccati equation of stochastic control. SIAM J. Control., 6, 681-697 (1968). WWX2018 G. Wang, Z. Wu and J. Xiong, An introduction to optimal control of FBSDE with incomplete information. Springer, Cham, 2018. WZ2017 B. Wang and J. Zhang, Social optima in mean field Linear-Quadratic-Gaussian models with Markov jump parameters. SIAM J. Control Optim., 55, 429-456 (2017). WXX2017 G. Wang, H. Xiao and G. Xing, An optimal control problem for mean-field forward-backward stochastic differential equation with noisy observation. Automatica, 86, 104-109 (2017). WZZ2013 G. Wang, C. Zhang and W. Zhang, Stochastic maximum principle for mean-field type optimal control under partial information. IEEE Trans. Automat. Control, 59, 522-528 (2013). Y2013 J. Yong, Linear-quadratic optimal control problems for mean-field stochastic differential equations. SIAM J. Control Optim., 51, 2809-2838 (2013). YZ1999 J. Yong and X. Zhou, Stochastic Controls: Hamiltonian Systems and HJB Equations. Springer, New York, 1999. ZP2021 W. Zhang and C. Peng, Indefinite mean-field stochastic cooperative linear-quadratic dynamic difference game with its application to the network security model. IEEE Trans. Cybern., 1-14 (2021).
http://arxiv.org/abs/2307.02432v1
20230705165331
A probabilistic, data-driven closure model for RANS simulations with aleatoric, model uncertainty
[ "Atul Agrawal", "Phaedon-Stelios Koutsourelakis" ]
physics.flu-dyn
[ "physics.flu-dyn", "cs.LG", "physics.comp-ph", "stat.ML" ]
Elsevier
http://arxiv.org/abs/2307.00497v2
20230702070645
Don't Memorize; Mimic The Past: Federated Class Incremental Learning Without Episodic Memory
[ "Sara Babakniya", "Zalan Fabian", "Chaoyang He", "Mahdi Soltanolkotabi", "Salman Avestimehr" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Don’t Memorize; Mimic The Past: Federated Class Incremental Learning Without Episodic Memory
Sara Babakniya (1), Zalan Fabian (2), Chaoyang He (3), Mahdi Soltanolkotabi (2), Salman Avestimehr (2)
(1) Department of Computer Science, University of Southern California, Los Angeles, USA; (2) Ming Hsieh Department of Electrical Engineering, University of Southern California, Los Angeles, USA; (3) FedML. Correspondence: Sara Babakniya, babakniy@usc.edu. Keywords: Federated Learning, ICML, Continual Learning.
Deep learning models are prone to forgetting information learned in the past when trained on new data. This problem becomes even more pronounced in the context of federated learning (FL), where data is decentralized and subject to independent changes for each user. Continual Learning (CL) studies this so-called catastrophic forgetting phenomenon primarily in centralized settings, where the learner has direct access to the complete training dataset. However, applying CL techniques to FL is not straightforward due to privacy concerns and resource limitations. This paper presents a framework for federated class incremental learning that utilizes a generative model to synthesize samples from past distributions instead of storing a portion of past data. Clients can then leverage the generative model to mitigate catastrophic forgetting locally. The generative model is trained on the server using data-free methods at the end of each task, without requesting data from clients. Therefore, it reduces the risk of data leakage compared to training it on the clients' private data. We demonstrate significant improvements on the CIFAR-100 dataset compared to existing baselines. § INTRODUCTION Federated learning (FL) <cit.> is a decentralized machine learning technique that enables privacy-preserving collaborative learning. In FL, multiple users (clients) train a common (global) model in coordination with a centralized node (server) without sharing personal data. In recent years, FL has attracted tremendous attention in both research and industry and has been successfully employed in various fields. Despite its popularity, deploying FL in practice requires addressing challenges such as resource limitations and statistical heterogeneity <cit.>. Furthermore, there are still common assumptions in most FL frameworks that are far from reality. One such assumption is that the clients' local data distributions do not change over time. However, in real-world applications <cit.>, users' data constantly evolves due to changes in the environment or trends. In such scenarios, the model must rapidly adapt to the incoming data while preserving its performance on past data. In the centralized setting, such problems have been explored in continual learning <cit.>. Despite significant progress in the centralized setting, most methods cannot be directly employed in FL due to inherent differences between the two settings. For instance, experience replay <cit.> is a popular approach in which a portion of past data points is saved to maintain some representation of past distributions throughout training. However, deploying experience replay in FL faces resource and privacy limitations. It requires clients to store and keep their data, which may not be possible for privacy reasons; for example, a service provider may be allowed to store its customers' data for only a short time. Besides, even when storing is possible, such data overhead increases the memory usage of already resource-limited clients. 
To address the above problems, we propose MFCL (Mimicking Federated Continual Learning). In particular, MFCL is based on training a generative model on the server and sharing it with clients so that they can sample synthetic examples of past data instead of storing their own data. The generative model training is data-free in the sense that it only requires the global model and no form of training data from the clients. This is particularly important because this step neither requires powerful clients nor causes any extra data leakage from them. Finally, our experiments demonstrate an improvement of 20% in average accuracy while reducing the training overhead of the clients. We summarize our contributions below: * We propose a novel framework to tackle the problem of federated class incremental learning more efficiently. Our framework specifically targets applications where past samples are unavailable. * We point out potential issues with relying on client-side memory for FCL, and propose using a generative model trained by the server in a data-free manner to reduce catastrophic forgetting while preserving privacy. * We demonstrate the efficacy of our method in more realistic scenarios with a larger number of clients and a more challenging dataset (CIFAR-100). § RELATED WORK Continual Learning. Catastrophic forgetting <cit.> is a fundamental problem: when we train a model on new examples, its performance on past data degrades. This problem is investigated in continual learning <cit.>, where the goal is for the model to learn new information while preserving the old. Recent works focus on three scenarios, namely task-, domain- and class-incremental learning <cit.>. In Task-IL, tasks are disjoint, and the output spaces are separated by task IDs provided at training and test time. In Domain-IL, the output space stays the same, but the task IDs are no longer provided. Finally, in Class-IL, new tasks introduce new classes to the output space, and the number of classes increases incrementally. Among these scenarios, we focus on Class-IL, which is the most challenging and realistic, especially in FL. In FL applications, no task ID is available, and it is preferable to learn a single model usable for all the observed data. Federated Continual Learning. In Federated Continual Learning (FCL), the main focus is to adapt the global model to new data while maintaining knowledge of past data, all under the standard restrictions of FL. This important problem has only gained attention very recently, and <cit.> is the first paper on this topic. It focuses on Task-IL, which requires a unique task ID per task during inference. Furthermore, it adapts separate masks per task to improve personalized performance without preserving a common global model. This setting is considerably different from ours, as we target Class-IL with a single global model that can classify all the classes seen so far. <cit.> employs knowledge distillation using a surrogate dataset. <cit.> relaxes the problem by assuming clients have access to a large memory to save old examples and share their data, which departs from the standard FL setting. <cit.> explore the FCL problem in domains other than image classification. This work focuses on Class-IL for supervised image classification without memory replay, which has also been discussed in <cit.>. However, <cit.> allows overlapping classes between tasks and focuses on few-shot learning, which is different from standard Class-IL. 
The most closely related work to ours is <cit.>, where the authors propose FedCIL. This work also relies on generative replay to compensate for the absence of old data and overcome forgetting. In FedCIL, clients train the discriminator and generator locally. Then, the server takes a consolidation step after aggregating the updates: it generates synthetic data using all the generative models trained by the clients to consolidate the global model and improve performance. The main difference between that work and ours is that in our work, the generative model is trained by the server in a data-free manner, which reduces clients' computation and does not require their private data. Data-Free Knowledge Distillation. Knowledge distillation (KD) <cit.> is a popular method to transfer knowledge from a well-trained teacher model to a (usually) smaller student model using at least a small portion of training data. However, in cases where such data is unavailable (e.g., due to privacy concerns), a line of work <cit.> proposes data-free knowledge distillation. In such methods, a generative model is used as a substitute for training data: it generates synthetic samples that the teacher model classifies as their assigned labels. Data-free KD has previously been used in FL <cit.> as a solution for data heterogeneity. However, to the best of our knowledge, this is the first work that adapts such a technique in the context of FCL. § FEDERATED CLASS-IL WITH MFCL In federated Class-IL, a shared model is trained on T tasks. However, the distributed nature of FL makes the problem distinct from the centralized version. In FL, users may join, drop out, or change their data independently. Also, the data or computational power required by some centralized algorithms may not be available due to privacy and resource constraints. To address these problems, we propose MFCL. The algorithm includes two essential parts: first, at the end of each task, the server trains a generative model with data-free knowledge distillation methods to learn a representation of the classes seen so far. Second, clients diminish catastrophic forgetting by generating synthetic images from this generative model. This way, clients do not require memory to store old data. Also, since the server trains the generative model without additional information, this step does not introduce new privacy issues. Finally, MFCL can help mitigate the data heterogeneity problem, as clients can synthesize samples from classes they do not own. Here, we explain the two key parts of our algorithm: server-side (Fig. <ref>, left) and client-side (Fig. <ref>, right). §.§ Server-Side: Generative Model The motivation for deploying a generative model is to synthesize images that mimic the old tasks and thereby avoid storing past data. However, training such generators on the client side, where the training data resides, is computationally expensive, requires a large amount of training data, and can raise privacy concerns. On the other hand, the server only has access to the global model and no data. We therefore propose training a generative model on the server in a data-free manner, i.e., by means of model-inversion image synthesis <cit.>. In such approaches, the goal is to synthesize images optimized with respect to the discriminator (the global model). Then, the generative model is shared with the clients and later used to sample images during local training. 
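A minimal PyTorch sketch of this server-side step is given below. It assumes that a generator network gen and the frozen aggregated global model global_model are provided; the optimizer, batch size and weights w_div, w_bn are placeholders, and the three loss terms anticipate the definitions given in the rest of this subsection (cross-entropy on labels read off the noise vector, an information-entropy diversity term, and BatchNorm-statistics matching), up to minor conventions such as the direction of the KL term.

import torch
import torch.nn as nn
import torch.nn.functional as F

def batchnorm_stat_loss(bn_inputs):
    # Gaussian KL distance between the statistics of the current synthetic batch and
    # the running statistics stored in each BatchNorm layer, averaged over layers.
    loss, n = 0.0, 0
    for bn, x in bn_inputs:
        mu_hat = x.mean(dim=(0, 2, 3))
        var_hat = x.var(dim=(0, 2, 3), unbiased=False) + 1e-8
        mu, var = bn.running_mean, bn.running_var + 1e-8
        kl = 0.5 * (torch.log(var / var_hat) + (var_hat + (mu_hat - mu) ** 2) / var - 1.0)
        loss, n = loss + kl.sum(), n + 1
    return loss / max(n, 1)

def train_generator(gen, global_model, noise_dim, n_seen_classes,
                    steps=2000, batch=32, w_div=1.0, w_bn=1.0, lr=1e-3, device="cpu"):
    gen.train().to(device)
    global_model.eval().to(device)
    opt = torch.optim.Adam(gen.parameters(), lr=lr)
    # Forward hooks recording the input of every BatchNorm layer of the frozen global model.
    bn_inputs = []
    hooks = [m.register_forward_hook(lambda mod, inp, out: bn_inputs.append((mod, inp[0])))
             for m in global_model.modules() if isinstance(m, nn.BatchNorm2d)]
    for _ in range(steps):
        z = torch.randn(batch, noise_dim, device=device)
        target = z[:, :n_seen_classes].argmax(dim=1)   # label encoded in the noise vector
        bn_inputs.clear()
        x_syn = gen(z)                                 # synthetic images
        logits = global_model(x_syn)
        loss_ce = F.cross_entropy(logits, target)
        p_mean = logits[:, :n_seen_classes].softmax(dim=1).mean(dim=0)
        loss_div = (p_mean * torch.log(p_mean + 1e-8)).mean()   # negative entropy of the batch-averaged prediction
        loss_bn = batchnorm_stat_loss(bn_inputs)
        loss = loss_ce + w_div * loss_div + w_bn * loss_bn
        opt.zero_grad()
        loss.backward()
        opt.step()
    for h in hooks:
        h.remove()
    return gen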
To this aim, we utilize a generative model, , that takes noise z ∼(0, 𝐈) as input and produces a synthetic sample . In training this model, we employ the following training objectives. Cross Entropy Loss. First, the synthetic data should be labeled correctly by the current discriminator model (global model or ). Therefore, we employ cross entropy classification loss between its assigned label z and the prediction of on synthetic data . Note, that noise dimension can be arbitrary and greater than the current discovered classes of task t, and we only consider the first q dimension here, where q = ∑_i=1^t |𝒴^i|. Then, we can define this loss as _CE = CE(argmax(z[:q]), ()). Diversity Loss. Synthesized images can suffer from a lack of class diversity, and we utilize information entropy (IE) <cit.> to solve this. For a probability vector =(p_1, p_2,..., p_q), IE is evaluated as () = - 1/q∑_ip_i log(p_i). Therefore, diversity loss is defined as _div = -(1/bs∑_i=1^bs(_i)). This loss measures the IE for samples of a batch (batch size =bs). Maximizing this term encourages the output distribution of the generator to be balanced for all the classes. Batch Statistics Loss. Prior works <cit.> in the centralized setting have recognized that the distribution of synthetic images can drift from real data. We can use batch statistics loss _BN to avoid such problems. Specifically, the goal is to enforce synthetic images to produce similar statistics in all BatchNorm layers to the ones that are already produced during training. To this end, we minimize the layer-wise distances between the two statistics written as _BN = 1/L∑_i=1^LKL((μ_i,σ^2_i), (μ_i,σ^2_i)) Here, L denotes the number of BatchNorm layers in the model, μ_i and σ_i are the mean and standard deviation stored in BatchNorm layer i of the global model, μ_i,σ_i are measured statistics of BatchNorm layer i for the synthetic images, KL stands for the Kullback-Leibler divergence. Finally, we can write the training objective of as (<ref>) where w_div and w_BN control the weight of each term. min__ce + w_div_div + w_BN_BN, §.§ Client-side: Continual Learning For client-side training, inspired by <cit.>, we distill the stability-plasticity dilemma into three critical requirements of CL and aim to address them one by one. Current task. To have plasticity, the model needs to learn the new features in a way that is least biased toward the old tasks. So, here, the CE loss is computed for the new classes only by splitting the linear heads and excluding the old ones: _CE^t = CE(_t(x), y)    if y ∈𝒴^t else 0. Previous tasks. To reduce forgetting, we train the model using synthetic and real data simultaneously. However, the distribution of the synthetic data differs from the real one, and it becomes important to prevent the model from distinguishing between old and new data. To address this problem, for fine-tuning the decision boundary using the sampled synthetic data (=Sample(_t-1)), clients freeze the feature extraction part and only update the classification head (represented by _t^*). This loss can be formulated as _FT^t = CE(_t^*(), y). Finally, to minimize forgetting, the common method is knowledge distillation over the prediction layer. However, <cit.> proposed importance-weighted feature distillation: instead of using the knowledge in the decision layer, they use the output of the feature extraction part of the model (penultimate layer). 
This way, only the more significant features of the old model are transferred, enabling the model to also learn the new features of the new tasks. This can be written as below, where the frozen linear head of the model trained on the last task (= _t-1^L) is applied to the features of both models: _KD^t = || (_t^1: L-1(x̂)) - ( _t-1^1: L- 1(x̂)) ||^2_2, In summary, the final objective on the client side is min__t_CE^t + w_FT_FT^t + w_KD_KD^t, where w_FT and w_KD determine the importance of each loss term. §.§ Algorithm For the first task, clients train the model using the cross-entropy loss _CE. At the end of training task t=1, the server trains the generative model by optimizing (<ref>). Then, the server freezes and saves the generative model and the global model (_t-1). This procedure repeats for all future tasks, with the only difference being that for t>1, the server needs to send the current global model (_t), the previous task's final model (_t-1), and the generative model to the clients. Since _t-1 and the generative model are fixed during the whole process of training _t, the server can send them to each client once per task to reduce the communication cost. To further decrease this overhead, we can use communication-efficient methods, such as <cit.>, that highly compress the model with minor performance degradation. § EXPERIMENTS Setting. We demonstrate the efficacy of our method on the CIFAR-100 dataset <cit.>. We use ResNet18 <cit.> as the global model and a ConvNet architecture for the generative model. In our experiments, there are 50 clients in total and 5 randomly sampled participants in every round. There are 10 non-overlapping tasks (T=10), and for each task, the model is trained for 100 FL rounds. We use Latent Dirichlet Allocation (α=1) <cit.> to distribute the data of each task among the clients. We compare the baselines based on three metrics – average accuracy, average forgetting, and wallclock time – which we explain in more detail in the appendix. All results are reported after averaging over 3 different random seeds. Baselines. We compare our method with FedAvg <cit.>, FedProx <cit.>, FedCIL <cit.>, FedLwF-2T <cit.> and Oracle. FedAvg and FedProx are the two most common aggregation methods in FL. FedCIL is a GAN-based method where clients train the discriminator and generator locally to generate samples from the old tasks. In FedLwF-2T, clients use two teachers – the global model and their previously trained local model – to distill their knowledge of the past. Finally, Oracle serves as an upper bound: during the training of the i-th task, clients have access to all of their data from t=1, ..., i. Metrics. We evaluate each approach with the following metrics: – Accuracy (𝒜^t): accuracy of the model at the end of task t, over all the classes observed so far. – Average Accuracy (𝒜): average of 𝒜^t over all T tasks, 𝒜 = 1/T∑_t=1^T𝒜^t. – Forgetting (f^t): the difference between the highest accuracy of the model on task t and its performance at the end of training. – Average Forgetting (f): average of the forgetting over all tasks, f = 1/T-1∑_t=1^T-1 f^t. – Wallclock time: the time it takes for a client or the server to perform one round of federated learning, measured in seconds and averaged over clients and rounds. Note that all experiments are run on the same GPU, and these numbers may change with different hardware. §.§ Results Table <ref> shows each method's average forgetting and accuracy. FedAvg and FedProx have the highest forgetting, as they are not designed for FCL. 
Also, the high forgetting of FedLwF-2T indicates that extra teachers cannot be effective in the absence of old data. FedCIL and MFCL have lower forgetting and better accuracy. MFCL outperforms FedCIL because the generative models in FedCIL need to be trained for a long time before they produce effective synthetic data. We also compare the methods' compute costs. Some methods change after learning the first task; therefore, we distinguish between the cost of the first task and that of later ones. As depicted, MFCL can significantly improve accuracy and reduce forgetting at the cost of a slight increase in the clients' training time for T > 1 (due to using synthetic data). The server cost of MFCL is similar to FedAvg except at the end of each task, where the server needs to train the generative model. This extra computation cost should not be a bottleneck because it occurs once per task, and servers usually have access to more computing power than clients. § DISCUSSION §.§ Overheads of the generative model Client-side. Using the generative model on the client side increases the computational cost compared to vanilla FedAvg. However, existing methods in CL often need to impose additional costs, such as memory, computation, or both, to mitigate catastrophic forgetting. Nevertheless, there are ways to reduce costs for MFCL. For example, clients can perform inference once, generate and store synthetic images only for training, and then delete them all. They can further reduce costs by requesting that the server generate the synthetic images and send them the data instead of the generative model. Here, we raise two crucial points about the synthesized data. Firstly, there is an intrinsic distinction between storing synthetic data and storing actual data: the former is only required during training, and clients can delete it right afterwards. Conversely, the data in episodic memory must always be kept on the client side because, once deleted, it becomes unavailable. Secondly, synthetic data is shared knowledge that can assist any client with unbalanced data or no memory in enhancing its model's performance. In contrast, episodic memory can only be used by one client. Server-side. The server needs to train the generative model once per task. It is commonly assumed that the server has access to more powerful compute resources and can perform computations faster than clients. This training step adds no overhead on the client side, although it might slow down the overall process. However, tasks do not change rapidly in real life, giving the server ample time to train the generative model before any shifts in trends or client data occur. Communication cost. Transmitting the generative model is a potential overhead of MFCL, a cost that clients must bear once per task to prevent or reduce catastrophic forgetting. However, several methods, such as compression, can significantly reduce this cost while maintaining excellent performance. This could be an interesting direction for future research. §.§ Privacy of MFCL Federated Learning, specifically FedAvg, is vulnerable to different attacks, such as data poisoning, model poisoning, backdoor attacks, and gradient inversion attacks <cit.>. MFCL generally does not introduce any additional privacy issues and is exposed to the same set of attacks as FedAvg. MFCL trains the generative model based on the weights of the aggregated global model, which is already available to all clients in the case of FedAvg. 
In contrast, in some prior work, clients need to share a locally trained generative model or perturbed private data, potentially causing more privacy problems. For FedAvg, various solutions and defenses, such as differential privacy or secure aggregation <cit.>, have been proposed to mitigate the effect of such privacy attacks. One can employ these solutions in the case of MFCL as well. In particular, in MFCL, the server does not require access to individual clients' updates and uses the aggregated model for training. Therefore, training a generative model is still viable after incorporating these defense mechanisms. MFCL benefits from the Batch Statistics Loss (_BN) in training the generative model. However, some defense mechanisms suggest not sharing local batch statistics with the server. While training the generative model without _BN is still possible, it can reduce accuracy. Addressing this is an interesting future direction. § CONCLUSION This work presents a federated Class-IL framework that addresses resource limitations and privacy challenges. We exploit generative models trained by the server in a data-free fashion, obviating the need for client-side memory. § ALGORITHM IN DETAIL §.§ Generative Model Architectures In Table <ref>, we show the generative model architecture used for CIFAR-100. The global model has a ResNet18 architecture; we change the kernel size of its first layer from 7×7 to 3×3. In this table, convolutional layers are reported as K × K (C_in, C_out), where K, C_in and C_out are the kernel size, the input channels and the output channels of the layer, respectively. §.§ Hyperparameters Table <ref> presents some of the more important parameters. Table: Generative model architecture (CIFAR-100): (1000, 128 × 8 × 8); (-, 128, 8, 8); (128); (2); 3×3 (128, 128); (128); (2); 3×3 (128, 64); (64); 3×3 (64, 3); (3). Table: Parameter settings: Dataset CIFAR-100; Data size 32 × 32; # Tasks 10; # Classes per task 10; # Samples per class 500; Batch size 32; Synthetic batch size 32; FL rounds per task 100; Local epochs 10.
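The architecture table above only preserves layer shapes; the layer types and activations were lost in extraction. The sketch below is one possible PyTorch reading of that table, in which the linear projection, nearest-neighbour upsampling, 3×3 convolutions with BatchNorm, LeakyReLU activations and a final Tanh are our assumptions rather than the authors' specification; only the tensor shapes are taken from the table.

import torch
import torch.nn as nn

class Generator(nn.Module):
    # One possible reading of the CIFAR-100 generator table; activations are assumed.
    def __init__(self, noise_dim=1000):
        super().__init__()
        self.fc = nn.Linear(noise_dim, 128 * 8 * 8)        # (1000, 128 x 8 x 8)
        self.body = nn.Sequential(
            nn.BatchNorm2d(128),                            # (128)
            nn.Upsample(scale_factor=2),                    # (2): 8x8 -> 16x16
            nn.Conv2d(128, 128, 3, padding=1),              # 3x3 (128, 128)
            nn.BatchNorm2d(128),                            # (128)
            nn.LeakyReLU(0.2, inplace=True),                # assumed activation
            nn.Upsample(scale_factor=2),                    # (2): 16x16 -> 32x32
            nn.Conv2d(128, 64, 3, padding=1),               # 3x3 (128, 64)
            nn.BatchNorm2d(64),                             # (64)
            nn.LeakyReLU(0.2, inplace=True),                # assumed activation
            nn.Conv2d(64, 3, 3, padding=1),                 # 3x3 (64, 3)
            nn.Tanh(),                                      # assumed output activation
            nn.BatchNorm2d(3),                              # (3)
        )
    def forward(self, z):
        x = self.fc(z).view(-1, 128, 8, 8)                  # (-, 128, 8, 8)
        return self.body(x)

z = torch.randn(4, 1000)
print(Generator()(z).shape)   # torch.Size([4, 3, 32, 32]), matching the 32x32 data size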
http://arxiv.org/abs/2307.01897v1
20230704194226
Generalized ARRIVAL Problem for Rotor Walks in Path Multigraphs
[ "David Auger", "Pierre Coucheney", "Loric Duhazé", "Kossi Roland Etse" ]
cs.DM
[ "cs.DM", "nlin.CG" ]
D. Auger et al. DAVID Lab., UVSQ, Université Paris Saclay, 45 avenue des Etats-Unis,78000,Versailles, France Generalized ARRIVAL Problem for Rotor Walks in Path Multigraphs David Auger1 Pierre Coucheney1 Loric Duhazé1Kossi Roland Etse1 August 1, 2023 ================================================================== Rotor walks are cellular automata that determine deterministic traversals of particles in a directed multigraph using simple local rules, yet they can generate complex behaviors. Furthermore, these trajectories exhibit statistical properties similar to random walks. In this study, we investigate a generalized version of the reachability problem known as arrival in Path Multigraphs, which involves predicting the number of particles that will reach designated target vertices. We show that this problem is in NP and co-NP in the general case. However, we exhibit algebraic invariants for Path Multigraphs that allow us to solve the problem efficiently, even for an exponential configuration of particles. These invariants are based on harmonic functions and are connected to the decomposition of integers in rational bases. § INTRODUCTION The rotor routing, or rotor walk model, has been studied under different names: eulerian walkers <cit.> and patrolling algorithm <cit.>. It shares many properties with a more algebraically focused model: abelian sandpiles <cit.>. General introductions to this cellular automaton can be found in <cit.> and <cit.>. Here is how a rotor walk works: in a directed graph, each vertex v with an outdegree of k has its outgoing arcs numbered from 1 to k. Initially, a particle is placed on a starting vertex, and the following process is repeated. On the initial vertex, the particle moves to the next vertex following arc 1. The same rule then applies on subsequent vertices. However, when a vertex is revisited, the particle changes its movement to the next arc, incrementing the number until the last arc is used. Then, the particle restarts from arc 1 if it visits this vertex again. This simple rule defines the rotor routing, which exhibits many interesting properties. Particularly, if the graph is sufficiently connected, the particle will eventually reach certain target vertices known as sinks. The time required for such exploration can be exponential in the number of vertices. The problem of determining, given a starting configuration (numbering) of arcs and an initial vertex, which sink will be reached first, is known as the ARRIVAL problem. It was defined in <cit.>, along with a proof that the problem belongs to the complexity class NP ∩ co-NP. Although the problem is not known to be in P, <cit.> showed that it belongs to the smaller complexity class UP ∩ co-UP. Furthermore, a subexponential algorithm based on computing a Tarski fixed point was proposed in <cit.>. Despite these general bounds, little is known about efficiently solving the problem in specific graph classes, especially when extending it to the routing of multiple particles. In <cit.>, we addressed the problem in multigraphs with a tree-like structure and provided a linear algorithm for solving it with a single particle. However, the recursive nature of the algorithm provided limited insights into the structure of rotor walks in the graph. We also examined the structure of rotor walks and the so-called sandpile group in the case of a simple directed path, where simple invariants can explain the behavior of rotor walks. 
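The informal rule above is already enough to simulate a rotor walk. The following sketch (plain Python) follows a single particle with the move-then-advance convention adopted later in this paper (arc numbering starts at 0 here); the toy instance is a path whose interior vertices each have one arc to the right and one to the left, i.e. the simple directed path revisited below.

def rotor_walk(succ, start, sinks, rotor=None):
    """Walk of a single particle under the rotor rule: at each visit of v the
    particle leaves along succ[v][rotor[v]], after which the rotor of v advances
    to the next outgoing arc (cyclically). Returns the sink reached and the
    number of steps; the graph is assumed to be stopping."""
    rotor = dict(rotor) if rotor is not None else {v: 0 for v in succ}
    v, steps = start, 0
    while v not in sinks:
        out = succ[v]
        nxt = out[rotor[v]]                      # move along the current arc ...
        rotor[v] = (rotor[v] + 1) % len(out)     # ... then advance the rotor of v
        v, steps = nxt, steps + 1
    return v, steps

# Toy instance: vertices 1, 2, 3 each have one arc to the right and one to the left,
# while 0 and 4 are sinks (this is the path graph P^1,1_3 studied later).
succ = {1: [2, 0], 2: [3, 1], 3: [4, 2]}
print(rotor_walk(succ, start=2, sinks={0, 4}))   # -> (4, 2): the right sink is reached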
In this work, we focus specifically on a family of multigraphs that consist of directed paths with a fixed number of arcs going left and right on each vertex, with a sink located at both ends of the path. We present an efficient algorithm for solving the ARRIVAL problem in this general context, considering a potentially exponential number of particles and antiparticles, a concept introduced in <cit.>. Our approach involves introducing algebraic invariants for rotor walks and chip-firing, enabling a complete description of the interplay between particle configurations and rotor configurations/walks. These invariants are derived from harmonic functions in graphs, which are functions invariant under chip-firing. Additionally, we introduce a related concept for rotor configurations called arcmonic functions, inspired by <cit.>. An essential tool for analyzing rotor routing in Path Multigraphs is the decomposition of integer values, which is closely associated with the AFS number system (<cit.>), where numbers are decomposed into rational bases. While we draw inspiration from these results, our approach focuses on proving precisely what is necessary, using our own methodology. Additionally, we derive other outcomes, such as the cardinality of the Sandpile Group of Path Multigraphs or its cyclic structure. These results can also be derived from Kirchoff's Matrix-Tree Theorem or the notion of co-eulerian graphs <cit.>. Nevertheless, our results remain self-contained. § MECHANICS AND TOOLS FOR ROTOR ROUTING IN MULTIGRAPHS §.§ Multigraphs A directed multigraph G is a tuple G=(V,A,,) where V and A are respectively finite sets of vertices and arcs, and head and tail are maps from A to V defining incidence between arcs and vertices. An arc with tail x and head y is said to be from x to y. Note that multigraphs can have multiple arcs with the same head and tail, as well as loops. For a vertex u∈ V, we denote by A^+(u) the subset of arcs going out of u, i.e. A^+(u)={a∈ A|(a)=u} and ^+(u) = |A^+(u)| is the outdegree of u. We denote by V_0 the set of vertices with positive outdegree and S_0 vertices with zero outdegree, i.e. sinks. A directed multigraph is stopping if for every vertex u, there is a directed path from u to a sink. In this whole paper, we suppose that G is a stopping multigraph. In the second part of this work, we consider the following multigraph: the Path multigraph P^x,y_n on n+2 vertices is a multigraph G= (V_0 ∪ S_0,A,,) such that: * V_0={u_1,u_2,...,u_n} and S_0={u_0,u_n+1}; * for k∈≪ 1,n, we have ^+(u_k)=x+y with x arcs from u_k to u_k+1 and y arcs from u_k to u_k-1 * u_0 and u_n+1 are considered as sinks with no outgoing arcs. This graph is clearly stopping if x+y ≥ 1. See Fig. <ref> for a representation of P^2,3_n. We consider the case n ≥ 1, and 1 ≤ x < y with x,y coprime. §.§ Rotor Structure If u ∈ V_0, a rotor order at u is an operator denoted by θ_u such that: * θ_u: A^+(u) → A^+(u) ; * for all a∈ A^+(u), the orbit {a,θ_u(a),θ_u^2(a),...,θ_u^^+(u)-1(a)} of a under θ_u is equal to A^+(u), where θ_u^k(a) is the composition of θ_u applied to arc a exactly k times. A rotor order for G is then a map θ : A → A such that the restriction θ_u of θ to A^+(u) is a rotor order at u for every u ∈ V_0. Note that all θ_u as well as θ are one to one. If C ⊆ V_0, the composition of operators θ_u for all u ∈ C does not depend on the order of composition since they act on disjoint sets A^+(u); we denote by θ_C this operator and θ^-1_C is its inverse. 
Finally, we use the term rotor graph to denote a stopping multigraph together with a rotor order θ. In P^x,y_n, we define a rotor order by simply considering all arcs going right before all arcs going left, cyclically (see Fig. <ref>). Formally, let a^k_i denote for i ∈≪ 0, x-1 the x arcs from u_k to u_k+1 and for i ∈≪ x,x+y-1 the y arcs from u_k to u_k-1; then we define θ(a^k_i) = a^k_j with j = i + 1 x+y. §.§ Configurations A rotor configuration of a rotor graph G is a mapping ρ from V_0 to A such that ρ(u)∈ A^+(u) for all u∈ V_0. We denote by (G) or simply the set of all rotor configurations of the rotor graph G. The graph induced by ρ on G=(V,A,,) is G(ρ) = (V,ρ(V_0),,), in which each vertex in V_0 has outdegree one. A particle configuration of a rotor graph G is a mapping σ from V to . We denote by Σ(G) or simply Σ the set of all particle configurations of the rotor graph G. The set Σ(G) can be identified with ^V and has a natural structure of additive abelian group. If u ∈ V, we identify u with the element of Σ(G) with exactly one chip on u. Thus we can write, e.g. σ + 3u to denote the configuration obtained from σ∈Σ by adding 3 to σ(u). If σ(u) ≥ 0, we interpret it as a number of particles on vertex u, whereas if σ(u) ≤ 0 it can be interpreted as antiparticles, or simply a debt of particles. The degree of a particle configuration σ is defined by (σ) = ∑_u ∈ Vσ(u). Finally, a rotor-particle configuration is an element of (G) ×Σ(G). §.§ Rotor Routing Let G be a rotor graph, we define operators indexed by vertices u ∈ V_0 on (G) ×Σ(G): * ^+_u:(G) ×Σ(G) →(G) ×Σ(G) is defined by ^+_u(ρ,σ)=(ρ, σ + (ρ(u)) - u); * ^+_u: (G) ×Σ(G) →(G) ×Σ(G) is defined by ^+_u(ρ,σ)=(θ_u ∘ρ,σ). Note that θ_u ∘ρ is the rotor configuration equal to ρ on all vertices except in u where θ has updated the arc. Applying ^+_u to (ρ,σ) can be interpreted as moving a particle from u to the head of arc ρ(u), whereas applying ^+_u updates the rotor configuration at u. It is easy to see that these operators are bijective on (G) ×Σ(G), and we denote by ^-_u and ^-_u their inverses. We now define the routing operators by ^+_u = ^+_u ∘^+_u, and its inverse is obviously ^-_u = ^-_u ∘^-_u. Routing a rotor-particle configuration (ρ,σ) consists in applying a series of ^+ and ^- operators. Since they act on different vertices and disjoint sets of arcs, the following result is straightforward. The family of operators ^+_u and ^-_u for all u ∈ V_0 commute. Since the order in which routing operators are applied does not matter, we define a routing vector as a map from V_0 to . We define ^r as the operator obtained by composing all elements of the family { (^+_u)^r(u)}_u ∈ V_0 in any order, where the exponent r(u) stands for composition of the operator or its inverse with itself, depending on the sign of r(u). We shall use the term routing when we apply any operator ^r as well. We end this subsection by pointing out that the kind of routing defined here, which we call move and turn routing, is used in <cit.> and <cit.>, and is more adapted to study the arrival problem. Another kind of routing, the turn and move routing, used for instance in <cit.>, is more widely used in the literature and is more adapted to study the link between the sandpile group and rotor configurations. However, it is easy to see that these two definitions of routing are conjugate by θ, and all results obtained for one of them can be translated into the other context. 
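The routing operators are straightforward to implement for P^x,y_n. The sketch below (plain Python) encodes a rotor configuration as the current arc index of each interior vertex (indices 0,…,x-1 point right, indices x,…,x+y-1 point left) and repeatedly applies the routing operator at a vertex carrying a particle until only the sinks are occupied, i.e. it performs a maximal legal routing in the sense of the next subsection; by the commutativity results above, the processing order does not affect the outcome, so a naive scan suffices for illustration.

def route_full(n, x, y, rho, sigma):
    """Maximal legal routing ('move and turn') in the Path Multigraph P^{x,y}_n.
    rho[k] in {0,...,x+y-1} is the arc currently selected at u_k for k = 1..n
    (index 0 of rho is unused); indices 0..x-1 point to u_{k+1}, x..x+y-1 to u_{k-1}.
    sigma[k] >= 0 is the number of particles on u_k; indices 0 and n+1 are the sinks.
    Returns the final rotor and particle configurations."""
    rho, sigma = list(rho), list(sigma)
    while True:
        k = next((j for j in range(1, n + 1) if sigma[j] > 0), None)
        if k is None:                      # no legal routing is possible any more
            return rho, sigma
        head = k + 1 if rho[k] < x else k - 1
        sigma[k] -= 1                      # move one particle from u_k ...
        sigma[head] += 1                   # ... to the head of the current arc
        rho[k] = (rho[k] + 1) % (x + y)    # ... then turn the rotor at u_k

# Five particles on u_2 of P^{2,3}_3, all rotors initially on their first rightward arc.
rho, sigma = route_full(3, 2, 3, rho=[0, 0, 0, 0], sigma=[0, 0, 5, 0, 0])
print(sigma)   # sigma[0] and sigma[4] hold the particles absorbed by the two sinks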
§.§ Legal Routing and arrival Applying ^+_u to (ρ,σ) ∈×Σ is said to be a legal routing if σ(u) > 0. A sequence of legal routings (ρ_0,σ_0) (ρ_1,σ_1) ⋯(ρ_k,σ_k), where denotes a legal routing at vertex u ∈ V_0, is maximal if for all u ∈ V_0 we have σ_k(u) ≤ 0, i.e. no other legal routing can be applied. The classic version of the commutativity result for rotor routing is the following: For all (ρ,σ) ∈×Σ with σ≥ 0, there is a unique (ρ',σ') with σ'(u)=0 for all u ∈ V_0, such that all maximal legal routings from (ρ,σ) end in (ρ',σ'). Furthermore, all legal routings can be continued in such a maximal legal routing. The previous result states that we can always route legally all particles to the sinks, in any order by choosing every time a vertex where the routing is legal, and we will always reach the same final configuration. Moreover, it can be shown that the routing vectors corresponding to all maximum legal routings are the same. For such a maximal legal routing, we shall say that (ρ,σ) is fully routed to sinks, and write (ρ',σ') = ^∞_L(ρ,σ), where the L stands for legal. .3cm The original arrival problem consists in the following decision problem: if (ρ, σ) ∈×Σ with σ≥ 0 and (σ)=1, if (ρ',σ') = ^∞_L(ρ,σ), for a given sink s ∈ S_0, does σ'(s)=1 ? .3cm This problem is known to be in NP and co-NP, but the best algorithm known to this date (see <cit.>) has complexity 2^O(√(|V|))) in the case of a simple graph. We shall now generalize this problem to any number of positive and negative particles, and remove the legality assumption. §.§ Equivalence classes of Rotors Two rotor-particle configurations (ρ,σ) and (ρ',σ') are said to be equivalent, which we denote by (ρ,σ) ∼ (ρ',σ'), if there is a routing vector r such that ^r(ρ,σ) = (ρ',σ'). It is easy to see that this defines an equivalence relation on ×Σ. Two rotor configurations ρ, ρ' are said to be equivalent, which we denote by ρ∼ρ', if there is σ∈Σ such that (ρ,σ) ∼ (ρ',σ). In this case, the relation is true for any σ∈Σ, and it defines an equivalence relation on . Cycle Pushes. Suppose that ρ∈ and let C be a directed circuit in G(ρ). The positive cycle push of C in ρ transforms ρ into θ_C ∘ρ; see Figure <ref>. Similarly, if C is a directed circuit in G(θ^-1∘ρ), the negative cycle push transforms ρ into θ^-_C ∘ρ. A sequence of cycle pushes is a finite or infinite sequence of rotor configurations (ρ_i) such that each ρ_i+1 is obtained from ρ_i by a positive or negative cycle push. Note that if C is a directed circuit in G(ρ), for any σ∈Σ, we can obtain (θ_C ∘ρ, σ) by applying ^r_C to (ρ,σ), and if C is a circuit in G(θ^-1∘ρ), then (θ^-1_C ∘ρ,σ) is equal to ^-r_C (ρ,σ), where in both cases r_C is the routing vector consisting in routing once every vertex of C. In other words, a cycle push is a shortcut in the routing of a particle on the circuit. Given two rotor configurations ρ and ρ', ρ∼ρ' if and only if ρ' can be obtained from ρ by a sequence of cycle pushes. Suppose that ρ' can be obtained from ρ by a sequence of cycle pushes. Since cycle push operations can as well be obtained by routing operators, we have that for any σ∈Σ, (ρ, σ) ∼ (ρ', σ), and consequently ρ∼ρ'. Conversely, assume that, for a given σ∈Σ, there is a routing vector r from (ρ, σ) to (ρ', σ). We show that ρ' can be obtained by cycle pushes from ρ, by induction on the L^1-norm of r, i.e. |r|_1=∑_u ∈ V_0 | r(u) |. If |r|_1 = 0, then ρ = ρ'. 
Otherwise, consider the partition of V in sets P, N, and Z corresponding to vertices u such that r(u) is positive, negative and null respectively. Assuming P is nonempty (we can interchange the roles of P and N if needed), we observe that the degree of σ on P, i.e. ∑_u ∈ Pσ(u), cannot increase through positive routing on P. Similarly, negative routing on N cannot increase the degree of σ on P. However, after performing all the routings in r, we end up with the same particle configuration σ, which implies the degree on P remains unchanged. Consequently, all positive move operations within P have exclusively been performed on arcs with head in P. In particular, ρ(P) contains a directed circuit C. By applying a cycle push on circuit C, we obtain (θ_C ∘ρ, σ). Since a routing vector from (θ_C ∘ρ, σ) to (ρ',σ') is r - r_C, with |r-r_C|_1 < |r|_1, we can apply induction to continue the sequence of cycle pushes. Whenever rotor configurations are equivalent, they eventually route particles identically since positive and negative cycle push correspond to adding or removing closed circuits in trajectories. In particular, it is easy to see that it is always possible to route any (ρ,σ) to a (ρ',σ') such that σ'(u)=0 for all u ∈ V_0. Let us denote by ^∞(ρ,σ) the nonempty set of these configurations. Let (ρ_1,σ_1) ∈^∞(ρ,σ). Then (ρ_2, σ_2) ∈^∞(ρ,σ) if and only if ρ_1 ∼ρ_2 and σ_1=σ_2. First, if ρ_1 ∼ρ_2 and σ_1=σ_2, by definition (ρ,σ) ∼ (ρ_1,σ_1) ∼ (ρ_2,σ_1) = (ρ_2,σ_2), and σ_2(u)=0 for all u ∈ V_0, so that (ρ_2,σ_2) ∈^∞(ρ,σ). Conversely, suppose first that |S_0| = 1. Since (σ_1) = (σ_2), one has σ_1 = σ_2, and by consequence ρ_1 ∼ρ_2. If |S_0| > 1, consider the rotor graph G' obtained from G by merging all sinks into a unique sink s. Let r be a routing vector from (ρ_1,σ_1) to (ρ_2,σ_2) in G. The same routing vector will also lead from (ρ_1,σ'_1) to (ρ_2,σ'_2) in G', where σ'_1(u) = 0 for all u ∈ V_0 and σ'_1(s) = ∑_s' ∈ S_0σ_1(s') (and σ'_2 defined accordingly). We deduce from the case |S_0| = 1 that σ_1' = σ_2', and, by Theorem <ref>, that r corresponds to a sequence of cycle pushes in G' and hence also in G. Since cycle push operations do not modify particle configurations, we have σ_1 = σ_2. If σ≥ 0, then if (ρ',σ') = ^∞_L(ρ,σ) and (ρ_1,σ_1) ∈^∞(ρ,σ), we have σ_1=σ' and ρ' ∼ρ_1. The generalized arrival problem is: given any (σ,ρ), compute σ_1 for any (ρ_1,σ_1) ∈^∞(ρ,σ). Corollary <ref> shows that this problem contains the original arrival problem. On the other hand, the decision version of generalized arrival belongs to NP and co-NP, a certificate being a routing vector r; one may compute efficiently the configuration ^r(ρ,σ) and check that we obtain 0 particles on V_0. §.§.§ Acyclic configurations We say that ρ∈ is acyclic if G(ρ) contains no directed cycles. It amounts to saying that the set of arcs ρ(V_0) forms in G a directed forest, rooted in the sinks of G. Each equivalence class of rotor configurations contains exactly one acyclic configuration. We can deduce from this result that the number of equivalence classes of rotor configurations is the number of rooted forests in G. By Kirchoff's Matrix-Tree Theorem <cit.>, this is exactly the determinant of the Laplacian matrix of G where we remove lines and columns corresponding to sinks; it also follows that this is the cardinal of the Sandpile Group of G (see <cit.> and <ref>). §.§ Equivalence classes of particles Two particle configurations σ, σ' are said to be equivalent, which we denote by σ∼σ', if there is ρ∈ such that (ρ,σ) ∼ (ρ,σ'). 
In this case, the relation is true for any ρ∈, and it defines an equivalence relation on Σ. Define the Laplacian operator Δ as the linear operator from ^V_0 to Σ, defined for u ∈ V_0 by Δ(u) = ∑_a ∈ A^+(u) ((a) - (a)) The vector Δ(u), when added to a particle configuration σ, corresponds to transferring a total of ^+(u) particles from u to every outneighbour of u. The transformation from σ to σ + Δ(u) is called firing σ at u. This firing is legal if σ(u) ≥^+(u). A firing vector is simply an element of r ∈^V_0, and we can fire simultaneously vertices according to this vector by σ + Δ(r) = σ + ∑_u ∈ V_0 r(u) Δ(u). For any two particle configurations σ, σ' we have σ∼σ' if and only if there exists a firing vector r with σ' = σ + Δ(r). Let r be a routing vector from (ρ,σ) to (ρ,σ'). It follows that for all u ∈ V_0 we have θ^r(u)(ρ(u)) = ρ(u), so that r(u) must be a multiple of ^+(u) and we can write r(u) = r'(u) ·^+(u) with r'(u) ∈. From this follows that σ = σ + Δ(r'). Conversely, firing σ at u corresponds to ^+(u) routings at u, which leaves the rotor configuration unchanged. By analogy with maximal legal routings, define a maximal legal firing as a sequence of legal firings from σ to another particle configuration σ' such that finally σ' is stable, meaning that σ'(u) < ^+(u) for all u ∈ V_0, i.e. no more legal firing are possible. If G is stopping, for all particle configurations σ there is a unique configuration σ' such that every maximal sequence of legal firings leads to σ', and every sequence of legal firings can be continued in such a maximal sequence (in particular, all legal sequences are finite). This stable configuration σ' is the stabilization of σ and denoted σ^∘. §.§ Sandpile Group We point out that the equivalence relation on particles defined in the previous section is not equivalent to the construction of the so-called Sandpile Group. In the case of a stopping rotor graph, the Sandpile Group is obtained from particle configurations equivalence classes by furthermore identifying configurations which have the same value on V_0. More precisely, define a relation ∼_S by σ∼_S σ' ⇔∃σ_1, σ∼σ_1 and ∀ u ∈ V_0, σ'(u) = σ_1(u). It is equivalent to requiring the existence of a firing vector r such that ∀ u ∈ V_0, σ'(u) = (σ + Δ(r))(u). * The quotient of Σ by ∼_S has an additive structure inherited from Σ, and it is a finite abelian group called the Sandpile Group and denoted by SP(G); * the order of SP(G) is equal to the number of acyclic rotor configurations in G. § MAIN RESULTS FOR PATH MULTIGRAPHS In this part, we summarize our results, and the rest of the paper will introduce the tools used to prove them. From now on, we consider only graphs of the family P^x,y_n, and the letter G denotes such a graph. §.§ The case x=y=1 First, let us recall the results obtained about Path Graphs P^1,1_n in <cit.> in order to understand how they compare to the case P^x,y_n when 0<x<y are coprime. Technically, these results were stated only for nonnegative particle configurations but they still hold in the general case. In the case x=y=1, define for any particle configuration σ h(σ) = ∑_i=0^n+1 i ·σ(u_i) and for any rotor configuration ρ, define g(ρ) as g(ρ) = | i : (ρ(u_i)) = u_i-1 |, i.e. g(ρ) is the number of arcs in G(ρ) pointing to the left. The next result completely solves generalized arrival in P^1,1_n for any number of particles and antiparticles. 
In the case x=y=1, for all (ρ,σ) ∈×Σ, the number of particles on sink u_n+1 in any configuration of ^∞(ρ,σ) is equal to the unique m ∈ such that 0 ≤ g(ρ) -h(σ) + m (n+1) ≤ n, i.e. m = ⌈h(σ) - g(ρ)/n+1⌉. Together with this result, we can describe the structure of the Sandpile Group of P^1,1_n and its action on rotor configurations. Define h̅ and g̅ as h and g modulo n+1. * The Sandpile Group SP(P^1,1_n) is cyclic of order n+1; * the map h̅ : Σ→/(n+1) quotients by ∼_S into an isomorphism between SP(P^1,1_n) and /(n+1); * the map g̅ : →/(n+1) quotients into a bijection between rotor equivalence classes and /(n+1); * the action of the sandpile group on rotor equivalence classes can be understood in the following way: let (ρ,σ) be a rotor-particle configuration and (ρ',σ') ∈^∞(ρ,σ). Then ρ' is in class g̅(ρ') = g̅(ρ) - h̅(σ) . As an example, consider the case P^1,1_3, which is depicted on Fig. <ref>, with the particle configuration σ equal to (-8,5,10,-5,12) from left to right and ρ as depicted. We see that ρ has 2 arcs going left so that g(ρ)=2, while we have h(σ) = -8· 0 +5· 1+10· 2 - 5· 3 + 12· 4 = 58. From Thm. <ref>, we deduce the final configuration σ' of the full routing of (ρ,σ) counts m=14 particles ending on the right sink u_4 and -8+5+10-5+12-14=0 particles on u_0. From Thm. <ref>, we deduce that any final rotor configuration ρ' in the routing will be such that g̅(ρ') = 2 - 58 = 0 4, so that all its arcs will point right, hence ρ' is the acyclic configuration of this class. §.§ Case 0 < x < y coprime We now state our results in the case this paper is concerned about. Compare this with Theorem <ref>. In both theorems, we use F = ∑_i=0^n x^n-i y^i. Suppose that 0 < x < y are coprime and consider the rotor multigraph P^x,y_n. * There exists a linear function h : Σ→ and a function g : → such that, for all (ρ,σ) ∈×Σ, the number of particles on sink u_n+1 in any configuration of ^∞(ρ,σ) is equal to m if and only if g(ρ) -h(σ) +m F ∈ g(); * the set g() is a finite set of nonnegative integers, and membership in g() can be tested in linear time; moreover the unique integer m satisfying the previous condition can be found in time O(n log x), and it satisfies m - ⌈h(σ) - g(ρ)/F⌉∈≪ 0, x-1 . * More generally, if (ρ,σ) and (ρ',σ') are rotor-particle configurations, then (ρ,σ) ∼ (ρ',σ') if and only if g(ρ) - h(σ) = g(ρ') - h(σ') and (σ)=(σ'). Note that, in the case x=1, we have m = ⌈h(σ) - g(ρ)/F⌉ as in the case x=y=1 and no further algorithm is needed. This is now the version of Theorem <ref> in our present case. We define h̅ and g̅ as equal respectively to h and g modulo F. Suppose that 0 < x < y are coprime and consider the rotor multigraph P^x,y_n. * The Sandpile Group of P^x,y_n is cyclic of order F; * The map h̅ : Σ→/F quotients by ∼_S into an isomorphism between SP(P^x,y_n) and /F; * The map g̅ quotients by ∼ into a bijection between rotor equivalence classes and /F; * The action of the sandpile group on rotor equivalence classes can be understood in the following way: let (ρ,σ) be a rotor-particle configuration and (ρ',σ') ∈ routing^∞ (ρ,σ). Then ρ' is in class g̅(ρ') = g̅(ρ) - h̅(σ) . As an example, we consider the Path Multigraph P^2,3_3. The graph is depicted on Fig. <ref>, together with harmonic values (values of h, inside vertices) and arcmonic values (values of g, on arcs). 
Consider for instance the particle configuration σ = (-8,5,13,-5,12) from left to right such that h(σ) = -8 × 0 + 5 × 8 + 13 × 20 - 5 × 38 + 12 × 65 = 890, and the rotor configuration ρ = (a_1^1, a^2_1, a^3_1) such that g(ρ) = 12 +18 +27 = 57. We have F=65, and g() = { 0, 8, 12, 16, 18, 20, 24, 26, 27, 28, 30, 32, 34, 35, 36, 38, 39, 40, 42, 43, 44, 45, 46, 47, 48, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 66, 67, 68, 69, 70, 71, 72, 74, 75, 76, 78, 79, 80, 82, 84, 86, 87, 88, 90, 94, 96, 98, 102, 106, 114 }. The only value v in g() equal to g(ρ)-h(σ)=-833 65 is 12 = -833 + 13*65. Since (σ)=17, in the end of the routing there are 13 particles on sink u_4 and 4 particles on sink u_0. The final rotor configuration ρ' satisfies g̅(ρ') = g̅(ρ) - h̅(σ) = -833 65 = 12 65 so g(ρ')=12 by looking in g(). § HARMONIC AND ARCMONIC FUNCTIONS IN THE PATH In the rest of the paper, we fix n>0 and coprime integers x,y such that 0 < x < y, and consider the Path Multigraph P_n^x,y as defined in Subsection <ref>. First, let us define the linear function h : Σ→, which will serve as an invariant for the firing operation and enable the characterization of particle equivalence classes. Initially, we define h on vertices and then extend it by linearity to Σ. The linear function h:Σ→ defined by h(u_0)=0 and h(u_k)=∑_i=0^k-1x^n-iy^i for k∈1,n+1 is harmonic on G, i.e. for any u ∈ V_0 we have h(Δ(u)) = 0. For k∈≪ 1,n: y(h(u_k)-h(u_k-1))=y(x^n-k+1y^k-1)=x^n-k+1y^k and x(h(u_k+1)-h(u_k))=x(x^n-ky^k)=x^n-k+1y^k. Hence, y(h(u_k)-h(u_k-1)) =x(h(u_k+1)-h(u_k)) ⇔ (x+y)h(u_k) - yh(u_k-1)-xh(u_k+1) = 0 ⇔ h(Δ(u_k)) = 0. For any particle configurations σ, σ', if σ∼σ' then h(σ) = h(σ'). It turns out that h(u_k) is the number of acyclic configurations in P^x,y_n that contain a directed path from u_k to u_n+1. In particular, h(u_n+1) is the number of rooted forests, which is also the number of particle equivalence classes and rotor equivalence classes <cit.>. In the rest of the document, we denote by F this value, i.e. F = ∑_i=0^n x^n-i y^i. We now define a similar function for rotor configurations, designed to be invariant on equivalence classes of rotors configurations. We introduce the term arcmonic for these functions that correspond to harmonic functions but on arcs. The linear function g : ^A →, defined by g(a^k_j) = ∑_i=0^j-1 (h((a^k_i)) - h(u_k) ) for all k ∈≪ 1, n and j ∈≪ 0,x+y-1 (in particular, g(a^k_0)=0) is arcmonic, i.e. it satisfies for all directed circuits C in G(ρ), g(C)=g(θ(C)), where C is identified with the sum of arcs ∑_a ∈ C a. If j ∈≪ 0,x+y-2 then g(θ(a^k_j)) - g(a^k_j) = g(a^k_j+1) - g(a^k_j) = ∑_i=0^j (h((a^k_i)) - h(u_k) ) - ∑_i=0^j-1 (h((a^k_i)) - h(u_k) ) = h((a^k_j)) - h(u_k) If j=x+y-1, then we use the fact that h is harmonic so that ∑_i=0^j (h((a^k_i)) - h(u_k) ) = 0, g(θ(a^k_j)) - g(a^k_j) = g(a^k_0) - g(a^k_j) = ∑_i=0^j (h((a^k_i)) - h(u_k) ) - ∑_i=0^j-1 (h((a^k_i)) - h(u_k) ) = h((a^k_j)) - h(u_k). Then, for any directed circuit C: g(θ(C)) - g(C) =∑_a ∈ C(h((a)) - h((a)) ) = 0. By identifying a rotor configuration ρ with the formal sum of its arcs, we can define g(ρ) = ∑_u ∈ V_0 g(ρ(v)). If ρ, ρ' are rotor configurations such that ρ∼ρ', then g(ρ) = g(ρ'). The exact values of g are given by: For j∈≪ 0,x+y-1 and k∈≪ 1,n, g(a^k_j)= jd_k if j∈≪0,x (x+y-j)d_k-1 if j∈≪ x+1,x+y-1 where, for every k≥ 0, d_k = x^n-ky^k. Remark that, for every k ∈≪ 0,n: d_k = h(u_k+1) - h(u_k). See Fig. <ref> for an example of harmonic and arcmonic values on P^2,3_3. 
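The figure itself is not reproduced here, but all of its values follow from the closed forms above. The short script below (Python, with exact arithmetic via fractions) computes h, the quantities d_k and the arcmonic values g(a^k_j) for P^2,3_3, and reproduces the numbers h(σ)=890 and g(ρ)=57 used in the example.

from fractions import Fraction

def harmonic_arcmonic(n, x, y):
    """Harmonic values h(u_k), the quantities d_k = x^(n-k) y^k, and the arcmonic
    values g(a^k_j) of P^{x,y}_n, using the closed forms stated above."""
    h = [sum(x ** (n - i) * y ** i for i in range(k)) for k in range(n + 2)]   # h(u_0), ..., h(u_{n+1})
    d = [Fraction(y, x) ** k * x ** n for k in range(n + 2)]                   # d_0, ..., d_{n+1}
    g = {(k, j): (j * d[k] if j <= x else (x + y - j) * d[k - 1])
         for k in range(1, n + 1) for j in range(x + y)}
    return h, d, g

h, d, g = harmonic_arcmonic(3, 2, 3)
print(h)                              # [0, 8, 20, 38, 65], so F = h(u_4) = 65
print([str(dk) for dk in d])          # ['8', '12', '18', '27', '81/2']
sigma = [-8, 5, 13, -5, 12]
print(sum(hk * sk for hk, sk in zip(h, sigma)))     # h(sigma) = 890
print(g[(1, 1)] + g[(2, 1)] + g[(3, 1)])            # g(rho) = 12 + 18 + 27 = 57 for rho = (a^1_1, a^2_1, a^3_1)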
In this example, d_0 = 8, d_1 = 12, d_2=18, d_3 = 27, and d_4 = 81/2. If (ρ,σ) and (ρ',σ') are rotor-particle configurations, then if (ρ,σ) ∼ (ρ',σ') we have g(ρ) - h(σ) = g(ρ') - h(σ'). Without loss of generality, assume that (ρ', σ') = ^+_u(ρ, σ) for some u∈ V_0. Recall that, by definition of routing operators, σ' = σ + (ρ(u))-u, hence by the linearity of h we obtain: h(σ') - h(σ) = h((ρ(u))) - h(u) . We use the fact that g(ρ')-g(ρ) = (h((ρ(u))) - h(u)) (see the proof of Prop. <ref>). This yields: g(ρ') - h(σ') - (g(ρ) - h(σ)) = (g(ρ') - g(ρ)) + (h(σ) - h(σ')) = (h((ρ(u))) - h(u)) + (h(σ) - h(σ')) = 0. §.§ Stable decomposition of arcmonic values In the light of Prop. <ref>, it becomes important to characterize which integers are of the form g(ρ) for some ρ∈. If ρ∈, by Proposition <ref>, g(ρ) can be decomposed as a sum g(ρ) = ∑_k=0^n c_k d_k, with c_k ∈≪ 0,x+y-1 for all k ∈≪ 1,n; recall that d_k = x^n-k y^k. This decomposition is not unique since all equivalent rotor configurations share the same value. We shall show that to each equivalence class we can assign a special form of decomposition, named stable decomposition thereafter. Every integer v ≥ 0 has unique decomposition of the form v = ∑_k=0^n c_k d_k + c_n+1 d_n+1 with c_k ∈≪0,y-1 for k ∈≪0,n and c_n+1∈ x. Note that d_n+1 = y^n+1/x. A special case is the case x=1 where if v < y^n+1, the stable decomposition of v coincides with the decomposition of v in base y up to the n-th element. We establish the uniqueness of this stable decomposition. The existence relies on the lemmas presented subsequently. Suppose that v admits two stable decompositions c^1 = (c^1_0, …, c^1_n+1) and c^2 = (c^2_0, …, c^2_n+1). Recall that, for i ∈{1,2}, c^i_n+1∈ x. Then: ∑_k=0^n c^1_k d_k + c^1_n+1 d_n+1 = ∑_k=0^n c^2_k d_k + c^2_n+1 d_n+1 y which amounts to c^1_0 d_0 = c^2_0 d_0 y. Since d_0=x^n and y are coprime, and 0 ≤ c^1_0, c^2_0 ≤ y-1, we obtain c^1_0 = c^2_0. Now, consider v' = v - c^1_0 x^n/y, then, for i ∈{1,2}, v' = ∑_k=0^n-1 c^i_k+1 x^n-1-ky^k + c^i_n+1y^n/x and one can apply the same reasoning iteratively on v' to show that c_1^1 = c_1^2, c_2^1 = c_2^2, etc. And finally that c^1 = c^2. To prove the existence of the stable decomposition, we rely on another device named Engel Machine <cit.>. The Engel Machine E^x,y_n is the Multigraph defined on the set {u_0,u_1,⋯, u_n}∪{u_n+1,s}, where every vertex u_i for i ∈≪ 0,n has x arcs going to u_i+1 and y-x arcs going to s. Since we assumed y > x, then y-x>0. Vertices s and u_n+1 are sinks. We say that a particle configuration σ in E^x,y_n is nonnegative if σ(u_i) ≥ 0 for i ∈≪ 0, n (whereas sinks may have a negative value). See Fig. <ref> for an example. We define a function h_E on the vertices of this graph that will turn out to be harmonic on E_n^x,y. This function is defined by h_E(s)=0 and h_E(u_k) = d_k for k in ≪ 0,n+1 and extend it to particle configurations by linearity. We shall be mainly concerned with the h_E value of particle configurations in the Engel Machine. In order to keep notation simple, and since h_E(s)=0, the value of configurations on s never matters and we identify particle configurations in c ∈Σ(E^x,y_n) with words c ∈^n+2. In particular, for any v ≥ 0, the notation c[v] denotes the word corresponding to the stable decomposition of v, as well as a (stable) particle configuration (we can suppose that its value on s is always 0). Note that h_E(c[v]) = v by construction. 
Conversely, remark that any nonnegative stable configuration c with h_E(c)=v gives the unique stable decomposition of v. The function h_E is harmonic on E^x,y_n. Consider the particle configuration c' obtained from c by firing vertex u_k, k ∈≪ 0, n. Then: h_E(c')-h_E(c) = - y d_k + x d_k+1 =0 . In order to compute a stable decomposition for v, one simply has to find any configuration c with h_E(c)=v and then stabilize c. The proof of the next lemma provides a method for computing such a configuration c. Together with Lemma <ref>, this completes the proof of Theorem <ref>. For any v ≥ 0, there exists a nonnegative configuration c in E^x,y_n with h_E(c)=v. Since x^n+1 and y^n+1 are coprime, by Bezout's theorem there are integers α, β such that α x^n+1 + β y^n+1 = 1 and we can choose α≥ 0. It follows that (α x v) x^n+1 + (β x v) y^n+1 = x v and (α x v) d_0 + (β x v) d_n+1 = v. §.§ Recognizing decompositions of arcmonic values In this subsection, we characterize stable decompositions corresponding to an arcmonic value. For any v ∈, we have v ∈ g() if and only if the regular expression e_d = ≪ 0, y-1^* · 0 ·≪ 1, x ^* · 0 matches c[v]. The proof is split in several lemmas. We define the regular expressions e_a = ≪ 1,y ^* · 0 ·≪ 0,x-1 ^*· 0, and e_d = ≪ 0, y-1^* · 0 ·≪ 1, x ^* · 0 . Let L_a and L_d be the languages described by e_a and e_d respectively, and let L_a^n and L_d^n be the subsets of words of length n+2. There is a bijective function ψ between the set of acyclic rotor configurations of P^x,y_n, and L^n_a, such that for any acyclic rotor configuration ρ, we have g(ρ) = h_E( ψ(ρ) ). Let ρ be an acyclic configuration of P^x,y_n. For such a configuration, there is some k ∈≪ 1,n such that * for i < k, ρ(u_i)=a^i_j with j ∈≪ x,x+y-1 and g(a^i_j) = c_i-1 d_i-1 with c_i-1∈≪ 1, y, * for i ≥ k, ρ(u_i)=a^i_j with j ∈≪ 0,x-1 and g(a^i_j) = c_i d_i with c_i ∈≪ 0, x-1. If we define c_k-1=0 and c_n+1=0, the configuration c=(c_0,c_1,…,c_n,c_n+1) satisfies h_E(c) = g(ρ) and is matched by e_a; we define ψ(ρ) = c. Conversely, for any configuration c matched by e_a it is easy to see that there is a unique acyclic configuration ρ with ψ(ρ)=c; if k-1 is the position of first 0 of c, we can construct ρ as above. If c ∈ L_a, let ϕ(c) be the stable decomposition of c. Then ϕ defines a bijective map between L_a^n and L_d^n that preserves h_E. By definition, if c ∈ L^n_a, then ϕ(c)=c^∘ is the stabilization of c. By Lemma <ref>, h_E is harmonic, hence h_E(c) = h_E(ϕ(c)). We introduce a sequential transducer T, depicted on Fig. <ref>, which computes the stabilization of certain configurations. The notation ≪ a , b | ≪ a + k , b + k represents the substitution of any integer i in ≪ a , b by the integer i+k. This transducer takes as input any word in ≪0,y^* and produces a word in ≪0,y-1^* with the same length. In particular, when given a nonnegative configuration c of E_n^x,y, satisfying c(u) ≤ y for all u ∈ V_0 and c(u_n+1) = 0, the transducer outputs stabilized configuration ϕ(c) (recall that we do not record what happens on sink s), stabilizing c from vertex u_0 to u_n in ascending order. Hence it computes ϕ for configurations in ≪ 0,y ^* · 0. Consider now the automaton A_a depicted on Fig. <ref>, which recognizes the language L_a. From T and A_a, we build the transducer T(A_a) depicted on Fig. <ref> which is the product of T and A. Given a configuration c in ≪0,y^*·, the product transducer accepts it if and only if c ∈ L_a, and in such case outputs ϕ(c). 
From T(A_a), if we look at the output of every transition as an input, we get an automaton which recognizes exactly ϕ(L_a). This automaton is depicted on Fig. <ref>, and its determinization on Fig. <ref> It is now easy to check that the automaton for ϕ(L_a) depicted on Fig. <ref> recognizes exactly L_d, since this automaton is minimal. Moreover, as ϕ preserves the length of words, we deduce that ϕ(L_a^n) = L_d^n. Additionally both languages L^n_a and L_d^n have the same size, namely F. It follows that ϕ is a bijective map between L^n_a and L_d^n. The value v belongs to g() if and only if there is an acyclic configuration ρ such that v = h_E(ψ(ρ)) = h_E(ϕ(ψ(ρ))) = h_E(c[v]) , (Lemma <ref>), hence if and only if c[v] ∈ L_d^n (Lemma <ref>). Let v = g(ρ) for some ρ∈. Since g is invariant on equivalence classes on , we can as well consider that ρ is acyclic. Then, according to Lemma <ref>, v = h_E(ψ(ρ)), and by Lemma <ref>, if and only if there exists a decomposition of v where c_a ∈ L_a^n with h_E(c_a) = v. Based on this last lemma, it is also equivalent to the existence of c_d ∈ L_d^n with h_E(c_d) = v. The uniqueness of the stable decomposition together with the previous result implies: For ρ, ρ' ∈, we have ρ∼ρ' if and only if g(ρ)=g(ρ'). The forward direction was proved as Corollary <ref>. Conversely, by the same corollary we can suppose that ρ and ρ' are acyclic and satisfy g(ρ)=g(ρ'). It follows by Lemma <ref> that g(ρ)=h_E(ψ(ρ)) and g(ρ')=h_E(ψ(ρ')); then h_E( ϕ(ψ(ρ)) ) = h_E( ϕ(ψ(ρ')) ). By uniqueness of the stable decomposition, it follows that ϕ(ψ(ρ))=ϕ(ψ(ρ')), and since ϕ and ψ are bijective, that ρ=ρ'. For any value v ≥ 0, the value of c[v+kF](u_n+1) is nondecreasing with k. Let c_1, c_2 be two nonnegative configurations in E^x,y_n, define c = (c_1 + c_2)^∘. Then by considering the stabilization mechanism, we have c(u_n+1) ≥ c_1(u_n+1) + c_2(u_n+1) from which the result follows. Let v be an integer. Then: (i) If c[v](u_n+1) < 0, then v-F ∉ g(). (ii) If c[v](u_n+1) ≥ 0, then v+F ∉ g(). Notice that c[F]=(1,1,…,1,0). (i): if v is such that c[v](u_n+1) < 0, then by Lemma <ref> c[v - F](u_n+1) < 0. Hence it is not matched by the regular expression e_d of Theorem <ref>. (ii): by the same argument, if c[v](u_n+1) > 0, then c[v+F](u_n+1) > 0 and v+F ∉ g(). Consider now v such that c[v](u_n+1) = 0. Then c[v] corresponds to a word of length n+2 in the language L = ≪ 0, y-1 ^* · 0. Let us consider c_1 such that c_1(u_i) = c(u_i) + 1 for all i ∈≪ 0, n and c_1(u_n+1) = 0, so that h_E(c_1) = h_E(c[v+F]). Now, we aim to demonstrate that the regular expression e_d in Theorem <ref> does not match the stable decomposition of v+F computed from c_1. We do this by relying on the construction of an automaton that recognizes the set of possible stable decompositions of c_1 for all possible c. Recall the notation: e_d = ≪ 0, y-1^* · 0 ·≪ 1, x ^* · 0 , L_d is the language described by e_d, and L_d^n is the subset of words of length n+2. Moreover ϕ(c) is the stable decomposition of c for any c ∈≪ 0, y ^* · 0, while ϕ is computed by the transducer T described in Figure <ref>. The set of possible configurations c_1 when c varies in the set of stable configurations with c(u_n+1)=0, is exactly described by the regular expression ≪ 1, y ^* · 0, corresponding to a language L_1, which is recognized by the automaton A_1 depicted on Fig. <ref>. 
Following the steps of the proof of Lemma <ref>, and since words in L_1 are also matched by ≪ 0, y ^* · 0, we construct the product transducer T(A_1) which outputs ϕ(c) if and only if c ∈ L_1. See Fig. <ref>. Finally, the following non-deterministic automaton A_1^ϕ recognizes ϕ(L_1) as shown on Fig. <ref>. It suffices to show that this automaton does not recognize any word in L_d or, equivalently, that ϕ(L_1) ∩ L_d = ∅. To that end, we introduce the automaton A_d that recognizes L_d on Fig. <ref>. By taking the product of automatas A_d and A_1^ϕ, we obtain an automaton that recognizes ϕ(L_1) ∩ L_d, as shown on Fig. <ref>. This automaton does not contain any accepting state which proves that ϕ(L_1) ∩ L_d = ∅. The next Lemma is key to proving our main results and helps in improving the complexity of our algorithm. For every 0 ≤ v ≤ F-1, there is a unique k ∈ℕ such that v+kF ∈ g(), which is the smallest integer k with c[v+kF](u_n+1) ≥ 0. The uniqueness is a consequence of Lemma <ref> and the monotony of c[v+kF](u_n+1) with k. As stated in Proposition <ref>, the function g uniquely identifies rotor classes. Hence, the existence of k such that v+kF ∈ g() follows from the observation that the number of rotor classes is precisely F. In other words, the function g F establishes a bijective correspondence between the set of rotor classes and / F. As an example, consider P^2,3_3 as depicted in Fig. <ref>, and value v = 1. Next table shows the stable decomposition of v+kF, with F = 65, for k ∈≪ 0 , 3. The unique value in g() is 66 whose stable decomposition is matched by the regular expression of Theorem <ref>. k stable decomposition of 1+65k 0 (2,1,0,2,-2) 1 (0,1,0,2,  0) 2 (1,2,1,0,  2) 3 (2,0,1,0,  4) Suppose that c(v+mF)_n+1≥ 0 for a given m. Then by lemma c(v+(m+1)F)_n+1≥ 0, and we show furthermore that v+(m+1)F ∉ g(). To see this, consider the stable decomposition c of v+mF ; the decomposition of F is (1,1,1,⋯,1,0); so we consider the configuration (c_0+1,c_1+1,…,c_n+1,c_n+1) and look at its stabilisation. idea : in the stabilisation, after a 0 there always is something > x, contradicting e_d. This comes from the substitution of e'_1 by e'_2. A TERMINER § PROOFS OF THEOREM <REF> AND <REF> Proof of Theorem <ref> (i): If (ρ',σ') ∈^∞(ρ,σ), then by Proposition <ref> we have g(ρ) - h(σ) = g(ρ') - h(σ') g(ρ) - h(σ) + h(σ') = g(ρ'). Since σ' is zero, except on u_0 and u_n+1 where the value of h is respectively 0 and F, we get g(ρ) - h(σ) + mF ∈ g() where m=σ'(u_n+1). Conversely, suppose that there is another m_1 such that g(ρ) - h(σ) + m_1F = g(ρ_1). for some ρ_1. Then m_1 F - g(ρ_1) = m F - g(ρ') hence g(ρ_1) = g( ρ') F . By Lemma <ref>, it follows that g(ρ_1)=g(ρ') hence m_1=m. (ii): Recall that F = ∑_k=0^n d_k . Since the maximal arcmonic value of an arc in A^+(u_k) is y d_k-1 = x d_k for k ∈≪ 1,n, we obtain that the maximal value in g() is ∑_k=1^n x d_k which is strictly lower than xF. Then: 0 ≤ g(ρ)-h(σ)+mF < xF ⇔ h(σ) - g(ρ) ≤ mF < xF + h(σ) - g(ρ) ⇔ ⌈h(σ) - g(ρ)/F⌉≤ m < x + ⌈h(σ) - g(ρ)/F⌉ ⇔ 0 ≤ m - ⌈h(σ) - g(ρ)/F⌉≤ x -1 If we are given (ρ,σ) and m and want to decide if there are m particles on sink u_n+1 when fully routing (ρ,σ), we can either check: * if c[g(ρ)-h(σ)+mF] is matched by the regular expression e_d, which involves first computing the stable decomposition; * if c[g(ρ)-h(σ)+mF](u_n+1)=0 and c[g(ρ)-h(σ)+(m-1)F)](u_n+1) < 0, which involves computing two stable decompositions. 
Assuming that elementary arithmetic operations are O(1), we can compute g(ρ) - h(σ) +mF in time O(n), using Prop. <ref> for g. Then, computing a stable decomposition also has computational complexity O(n). We can successively fire all vertices from u_0 to u_n, which can be done by computing a quotient and remainder modulo x+y. If we are given (ρ,σ) and we want to compute m, we can proceed by bissection, using Lemma <ref> to find the minimal m for which c[g(ρ)-h(σ)+mF](u_n+1)=0. The overall complexity of this method is O(n log(x)) since m belongs to an interval of length x. (iii): The forward direction is Prop. <ref>. Conversely, suppose that g(ρ)-h(σ) = g(ρ')-h(σ'). Let m and m' be the number of particles on sink u_n+1 when we fully route (ρ,σ) and (ρ',σ') respectively to sinks; we denote respectively the final configurations of these routings by (ρ_1,σ_1) and (ρ'_1,σ'_1). By Prop. <ref>, g(ρ) - h(σ) = g(ρ') - h(σ') = g(ρ_1) - mF = g(ρ'_1) - m'F, from which we deduce by Lemma <ref> that ρ_1 ∼ρ'_1 and m=m', so that σ_1=σ'_1 (since (σ_1)=(σ'_1)). All in all, we have that (ρ,σ) ∼ (ρ_1,σ_1) ∼ (ρ'_1,σ_1) ∼ (ρ',σ'). Proof of Theorem <ref> (i) and (ii): Suppose that h̅(σ̅_1) = h̅(σ̅_2). Up to adding particles to σ̅_2 on u_n+1 and on u_0 we obtain σ_2 such that h(σ̅_1) = h(σ_2) and (σ̅_1) = (σ_2) respectively. We write σ_1 = σ̅_1. Consider now any ρ∈ . We have h(σ_1) - g(ρ) = h(σ_2) - g(ρ), so (ρ,σ_1) ∼ (ρ,σ_2) by (iii) of Theorem <ref>, and σ̅_1 ∼_S σ̅_2. Conversely, if σ̅_1 ∼_S σ̅_2, we clearly have h̅(σ̅_1) = h̅(σ̅_2). Since h̅(u_1)=x^n F and x^n is coprime with F, we see that the particle configuration with just one particle on u_1 generates all possible values in / F. It follows, by the first isomorphism theorem, that SP(P^x,y_n) is cyclic and isomorphic to / F. (iii): follows directly from Lemma <ref>. §.§.§ Open problems and future works In this paper, we addressed the generalized version of the arrival problem in the Path Multigraph P^x,y_n. Moreover, we investigated the Sandpile Group structure and its action on rotor configurations when x and y are coprime. However, when x and y are not coprime, we observed that the characterization of classes by harmonic and arcmonic functions becomes inadequate, necessitating the inclusion of more comprehensive algebraic invariants. We are currently working on a project that presents a theory of arcmonic and harmonic functions applicable to general graphs, which will be submitted soon to publication. Moreover, it is worth considering other scenarios, such as variations in x and y across different vertices or changes in the rotor order. These cases pose interesting questions that require further investigation. We regard them as open problems that warrant additional research. §.§.§ Acknowledgements Thanks to Chloé and Marwanne for checking examples with their rotor software. This work was supported by a public grant as part of the Investissement d'avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH. 10 auger2022polynomial Auger, D., Coucheney, P., Duhazé, L.: Polynomial time algorithm for arrival on tree-like multigraphs. In: 47th International Symposium on Mathematical Foundations of Computer Science (MFCS 2022). Schloss Dagstuhl-Leibniz-Zentrum für Informatik (2022) bjorner1992chip Björner, A., Lovász, L.: Chip-firing games on directed graphs. Journal of algebraic combinatorics 1, 305–328 (1992) bjorner1991chip Björner, A., Lovász, L., Shor, P.W.: Chip-firing games on graphs. 
European Journal of Combinatorics 12(4), 283–291 (1991) dohrau2017arrival Dohrau, J., Gärtner, B., Kohler, M., Matoušek, J., Welzl, E.: Arrival: A zero-player graph game in NP ∩ coNP. In: A journey through discrete mathematics, pp. 367–374. Springer (2017) engel1975probabilistic Engel, A.: The probabilistic abacus. Educational studies in mathematics pp. 1–22 (1975) farrell2016coeulerian Farrell, M., Levine, L.: Coeulerian graphs. Proceedings of the American Mathematical Society 144(7), 2847–2860 (2016) frougny2012rational Frougny, C., Klouda, K.: Rational base number systems for p-adic numbers. RAIRO-Theoretical Informatics and Applications-Informatique Théorique et Applications 46(1), 87–106 (2012) gartner2018arrival Gärtner, B., Hansen, T.D., Hubácek, P., Král, K., Mosaad, H., Slívová, V.: Arrival: Next stop in cls. In: 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik (2018) gartner_et_al:LIPIcs.ICALP.2021.69 Gärtner, B., Haslebacher, S., Hoang, H.P.: A Subexponential Algorithm for ARRIVAL. In: ICALP 2021. vol. 198, pp. 69:1–69:14 (2021) giacaglia2011local Giacaglia, G.P., Levine, L., Propp, J., Zayas-Palmer, L.: Local-to-global principles for rotor walk. arXiv preprint arXiv:1107.4442 (2011) hoang2022two Hoang, P.H.: On Two Combinatorial Reconfiguration Problems: Reachability and Hamiltonicity. Ph.D. thesis, ETH Zurich (2022) Holroyd2008 Holroyd, A.E., Levine, L., Mészáros, K., Peres, Y., Propp, J., Wilson, D.B.: Chip-Firing and Rotor-Routing on Directed Graphs, pp. 331–364. Springer (2008) pitman2018tree Pitman, J., Tang, W.: Tree formulas, mean first passage times and kemeny’s constant of a markov chain. Bernoulli 24(3), 1942–1972 (2018) povolotsky1998dynamics Povolotsky, A., Priezzhev, V., Shcherbakov, R.: Dynamics of eulerian walkers. Physical review E 58(5),  5449 (1998) priezzhev1996eulerian Priezzhev, V.B., Dhar, D., Dhar, A., Krishnamurthy, S.: Eulerian walkers as a model of self-organized criticality. Physical Review Letters 77(25),  5079 (1996) yanovski2003distributed Yanovski, V., Wagner, I.A., Bruckstein, A.M.: A distributed ant algorithm for protect efficiently patrolling a network. Algorithmica 37(3), 165–186 (2003)
http://arxiv.org/abs/2307.02973v1
20230706131844
Pruning vs Quantization: Which is Better?
[ "Andrey Kuzmin", "Markus Nagel", "Mart van Baalen", "Arash Behboodi", "Tijmen Blankevoort" ]
cs.LG
[ "cs.LG" ]
Cross-Spatial Pixel Integration and Cross-Stage Feature Fusion Based Transformer Network for Remote Sensing Image Super-Resolution Yuting Lu, Lingtong Min, Binglu Wang†, Member, IEEE, Le Zheng, Senior Member, IEEE, Xiaoxu Wang, Member, IEEE, Yongqiang Zhao, Member, IEEE, and Teng Long, Fellow, IEEE Yuting Lu, Xiaoxu Wang and Yongqiang Zhao are with School of Automation, Northwestern Polytechnical University, Xi’an 710072, China (e-mail:lyt1996@mail.nwpu.edu.cn, woyaofly1982@nwpu.edu.cn, zhaoyq@nwpu.edu.cn). Lingtong Min is with School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710072, China (e-mail:minlingtong@nwpu.edu.cn). Binglu Wang, Le Zheng and Teng Long are with the Radar Research Laboratory, School of Information and Electronics, Beijing Institute of Technology,Beijing 100081, China (e-mail: wbl921129@gmail.com, le.zheng.cn@gmail.com, longteng@bit.edu.cn). †Corresponding author: Binglu Wang. This work is supported by the Postdoctoral Science Foundation of China under Grant 2022M710393, the Fourth Special Grant of China Postdoctoral Science Foundation (in front of the station) 2022TQ0035 and the Shaanxi Science Fund for Distinguished Young Scholars 2022JC-49. August 1, 2023 ==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Neural network pruning and quantization techniques are almost as old as neural networks themselves. However, to date only ad-hoc comparisons between the two have been published. In this paper, we set out to answer the question on which is better: neural network quantization or pruning? By answering this question, we hope to inform design decisions made on neural network hardware going forward. We provide an extensive comparison between the two techniques for compressing deep neural networks. First, we give an analytical comparison of expected quantization and pruning error for general data distributions. Then, we provide lower bounds for the per-layer pruning and quantization error in trained networks, and compare these to empirical error after optimization. Finally, we provide an extensive experimental comparison for training 8 large-scale models on 3 tasks. Our results show that in most cases quantization outperforms pruning. Only in some scenarios with very high compression ratio, pruning might be beneficial from an accuracy standpoint. 
§ INTRODUCTION Recent advances in deep learning led to exceeding human-level performance in many tasks, including computer vision, machine translation, voice recognition, and language understanding. Real-world applications of DNNs rely heavily on their efficiency. Both mobile and cloud platforms greatly benefit from reduced latency and energy efficiency achieved by some form of model compression. In this work, we consider two mainstream techniques used in practice; pruning and quantization. Pruning methods remove individual weights <cit.>, or sometimes groups of weights <cit.>. This procedure can reduce the memory footprint. Furthermore, not having to perform the computations with weights that are zeroed out can make network inference more efficient. On the other hand, quantization reduces the bit-width used for both the weights and the computation used in networks, leading to both predictable memory savings and reductions in the necessary compute. In both scenarios, the hardware used for making use of these optimization schemes needs to take them into account. Depending on the availability of training data and computing budget, most methods for pruning and quantization fall into one of two families. The first family includes fine-tuning approaches, namely quantization-aware training (QAT) and fine-tuning with pruning in the loop. The second family includes post-training approaches such as post-training quantization (PTQ). Previously, pruning techniques primarily relied on fine-tuning; however, some post-training pruning methods appeared recently as fine-tuning is not desirable for large language models <cit.>. Despite the importance of model efficiency and the plethora of approaches for pruning and quantization, the two fields are mostly disjoint. The literature presents little insight into which of the two techniques is more accurate. In practice, there is only limited time to compress a network and limited energy to spend on making deep learning inference hardware. For this reason, we ask the question: Should one focus on quantization or pruning for compression? We present an extensive study comparing pruning and quantization in equal settings. First, we consider different data distributions and analyze the conditions under which each method is preferable. We match our findings with real weight tensors from pre-trained models. Second, we consider a post-training scenario and evaluate single-layer output errors for both methods. Because the comparison might depend on the specific choice of optimization method, we compare the two with theoretical bounds that apply regardless of the optimization method. Finally, we provide a full-model comparison for the most common scenario of fine-tuning networks after either pruning or quantization. In our comparison, we intentionally avoid considering the hardware aspects of pruning and quantization. Instead, we focus solely on the accuracy of both methods, given similar theoretical compression ratios. A coarse discussion on the hardware necessary for both methods can be found in section <ref>. § ASSUMPTIONS In our work, we assume FP16 as the basic data type and measure any gains in compression with respect to it. Using FP16 for inference generally does not lead to a loss in accuracy. Neural networks are also very commonly trained with FP16, making it a common baseline. Thus, we compare 50% pruning sparsity to INT8 quantization, 75% sparsity to INT4 quantization and so forth. 
We also assume no overhead on storing the sparsity mask for pruning and relegate such hardware-specific implementations to section <ref>. For the pruning experiments, we consider magnitude pruning. It is common to do fine-tuning after or during pruning <cit.>. Several works have independently shown that despite its simplicity, it is tough to improve upon magnitude pruning and fine-tuning <cit.>. To our knowledge, no pruning algorithm exists that consistently outperforms this method. For the quantization experiments, we use symmetric uniform quantization, which is defined by just the quantization scale factor and the bit-width. The scale is represented as a floating-point number and is used to map floating-point values to the integer grid. Further details on symmetric uniform quantization can be found in <cit.>. Uniform quantization is the standard in the quantization literature, and symmetric quantization is mostly employed for the weights. In all our experiments, we use a quantization range estimator minimizing the mean-squared error on weights by grid search <cit.>. § COMPARISON ON STATISTICAL DISTRIBUTIONS Before diving into comparison results, we first describe theoretically what the quantization error and pruning error are. Looking at this with a theoretical lens helps with understanding the later experimental difference between the two methods. We start off by describing and analyzing both methods on simple data distributions. In order to compare the error of pruning and quantization, we will frequently use the signal-to-noise ratio measure defined in the log scale: SNR_dB=10log_10(𝔼[ W^2]/ 𝔼[ (W-F(W))^2 ]), where F(W) is the quantization or pruning function. This measure is the same as a scaled logarithm of an MSE measure. Both are often employed to analyze the sensitivity of neural network layers to quantization, and they are theoretically well-founded to correlate with network performance <cit.>. §.§ Quantization error For quantization, we consider symmetric uniform quantization, which is also called integer quantization. Given a bit-width b and the scale δ, the grid nodes are defined as q_i = δ i, i ∈{-2^b, …, 0, 2^b-1}. The quantization operation rounding-to-nearest Q(w) and the corresponding quantization error R(w) are defined as: Q(w) = q_i, i=_i|w-q_i|, R(w) = Q(w) - w. Following <cit.> we model neural network weights as a random variable W ∼ p(w). The expected value of the quantization MSE can be expressed as follows: 𝔼[(Q(W)-W)^2)]= ∫_q_min^q_maxR^2(w)p(w)dw + ∫_-∞^q_min(w-q_min)^2p(w)dw + ∫_q_max^∞(q_max-w)^2p(w)dw, where q_min=min_iq_i and q_max=max_iq_i are the quantization range limits. The left term corresponds to the rounding error, and the right two terms correspond to the clipping error. We use this analytic formulation for our distribution results below, the details are given in appendix <ref>. §.§ Pruning error We consider magnitude pruning T(x)= x ·1_-t ≤ x ≤ t. This simply sets the values closest to zero to actual zero. Given this, the expected error of pruning is expressed as follows: 𝔼[T(W)^2]= ∫_-t^tw^2p(w)dw, where t is the threshold value that controls how much is pruned. Given the compression ratio c∈(0,1), we find the threshold value which satisfies P(-t ≤ W ≤ t)=c. In case of a symmetric zero-mean distribution, the threshold can be expressed as t=F_W^-1(1/2+c/2), where F(w)=P(W≤ w) is the CDF function and F^-1(p) is its inverse. 
The expected pruning error in equation <ref> is similar to the clipping error for quantization (see the second and the third term in equation <ref>), and can also be computed analytically. We also use this formulation for our results below. §.§ Analytical comparison Standard normal distribution. Let us first look at a standard normal distribution. As many weights in neural networks are roughly Gaussian-shaped, this distribution is useful for our understanding of the comparison. As we can see from figure <ref> (middle), the errors for both methods have very different behavior. The quantization error oscillates between the quantization nodes and has a moderate range. The pruning error effectively corresponds to rounding many weights to zero and thus has a higher error. As we can see in figure <ref> (right), this results in a higher SNR for quantization, e.g. 19.1 dB for INT4 quantization versus only 5.6 dB for 75% pruning. We see similar results for different compression ratios. For this distribution, quantization achieves a much higher signal-to-noise ratio. Distributions with heavy tails. The trade-off is expected to change when more significant outliers are introduced. The quantization grid is expected to be effected strongly by outliers as it increases the quantization grid in size, whereas the pruning method is expected to be hardly effected with outliers as it only affects weights around zero. We thus analyze both quantization and pruning errors in the presence of many outliers. To simulate a distribution with outliers, we use a truncated Student's-t distribution with ν=2, and a symmetric range (-r,r) (the PDF is defined in appendix <ref>). This distribution is nice as it gives a non-trivial weight to the tail ends of the distribution close to r. The wider the range r is, the heavier are the tails of the distribution. In order to introduce a quantitative measure of the number of outliers, we will use the distribution's kurtosis given by Kurt[X]=𝔼[(X-μ)^4]/(𝔼[(X-μ)^2])^2, where μ is the mean. We will see later that this kurtosis measure is predictive of quantiation and pruning performance for real layers. To increase the number of outliers, we will increase the range r. The results are given in figure <ref>. The kurtosis range is chosen so that it includes most of the weights from the model zoo. We see that despite the significant outliers and high kurtosis, quantization still has higher SNR in most of the cases for moderate compression. Pruning is better however in the region of high clipping range and very high compression rate, e.g. 2-3 bits per value (see figure <ref> on the right). §.§ Experiments on real weight tensors The previous discussion was mostly theoretical. We set out to see happens when we do a similar analysis on real neural network weights. In order to investigate this, we compare the pruning and quantization SNR on the weight tensors for all the pre-trained models from the PyTorch model zoo[ <https://pytorch.org/serve/model_zoo.html>.] (46 models in total, the details are give in appendix <ref>). Each tensor is quantized using an integer grid of bit widths from 2 to 8. The results are shown in the figure <ref> (left). We see a similar trend to our previous discussion that pruning becomes more beneficial for lower bitwidth/higher sparsity ratios. In order to match the analytical results from figure <ref>, we consider the sample kurtosis of every weight tensor given by k=1/n∑_i=1^n(x_i-x)^4 / [1/n∑_i=1^n(x_i-x)^2]^2. See figure <ref> (right). 
We consider a range of kurtosis values for every quantization bit-width. Using a kernel density estimator, we compute the probability density of encountering a tensor for which pruning has higher SNR than quantization SNR. We compare the PDF to that for quantization and thus determine the region where each method is preferable. The results are given in figure <ref> on the right. We see that the results from the previous theoretical section (figure <ref> on the right) hold very nicely. We can also see that as predicted, the kurtosis is indeed a good metric for predicting if a tensor should be quantized or pruned for optimal accuracy. § PER-LAYER COMPARISON Most PTQ methods compress the model layer by layer. Given one layer, we use the mean-squared error of the output activations as an objective for optimization. As <cit.> shows, minimizing per layer MSE on the output activations of each layer is a computationally affordable second-order approximation of the loss function. The local MSE objective correlates well with the task loss and is often used in practice in DNN compression and quantization literature <cit.>. Our experiments in appendix <ref> confirm this. For the experiments in this section, we will use SNR as it represents a normalized version of MSE. As opposed to section <ref> where we used SNR on weights, in this section, we will use SNR on the output activations instead. The goal of a PTQ method is to minimize the error in the output activations of the compressed layer by optimizing over the quantized weights subject to integer range constraints. Similarly, for pruning, the weights are optimized subject to a sparsity constraint. As the underlying combinatorial optimization problem for both methods is NP-hard <cit.>, in practice, each method relies on some form of heuristic providing a reasonably good solution given a realistic compute budget. This means that any practical comparison between pruning and quantization would depend on the choice of the method for both and would be open to debate of the optimality of the algorithm. In order to eliminate this dependence, we provide a tight lower bound on the output errors for quantization. For pruning we provide a way to solve the problem exactly for moderate dimensionalities. This way, we can provide a comparison that holds regardless of the algorithm used for each method. §.§ Post-training quantization We set out to formulate a way by which we can get relatively tight bounds for comparison when quantizing a single layer with the MSE as the objective. The higher bound is simple to obtain by using a solution with a heuristic quantization algorithm, but for the lower bound, we have to reformulate the problem. The mean-squared error of the output activations of a quantized layer can be expressed as: min_wE(w) = Xδw-Xw_orig_2^2 s.t. w∈ℤ^n, w_min≤ w_i ≤ w_max, where X is the input data in an unfolded form, and w_orig are the floating point weights. The quantized weights are computed as the product of the quantization scale δ, and the integer weights w. w_min and w_max are the integer limits. We ignore the averaging operation to simplify the notation, as it is not important for optimization. We also note that this problem can be solved independently for each output channel of a convolution or every row of a fully-connected layer weight. This problem is an instance of a mixed-integer quadratic program: Ẽ(w)= 1/2w^T Pw-q^T w, s.t. w∈ℤ^n, w_min≤ w_i ≤ w_max, where P=2δ^2 X^TX, q = 2(w_orig^TX^T)Xδ. 
In order to simplify the objective, we can omit the constant term that is irrelevant for the optimization c=Xw_orig_2^2, i.e. Ẽ(W)=E(W)-c. In order to find the lower bound of the objective, we follow <cit.> and relax the integer constraint to w_i (w_i - 1) ≥ 0, which allows the weight to take values within the interval from 0 to 1. In order to obtain the lower bound, we will consider the dual version of the relaxed problem: L(λ)= max -γ, s.t. [ P-𝐝𝐢𝐚𝐠(λ) q + 1/2λ; (q + 1/2λ)^T γ ]≽ 0, λ≥ 0, where λ∈ℝ^n, γ∈ℝ. The dual problem is convex, and its solution can be used as a lower bound on the solution of the original problem, i.e., Ẽ(w) ≥ L(λ). The dual has a semi-definite constraint which can be solved with a semi-definite programming (SDP) solver with 𝒪(n^3) complexity. In our work, we used CVX solver  <cit.>. As discussed in <cit.>, this bound is a computationally efficient alternative to branch-and-bound approaches, while tightness is better than that for the alternative methods introduced in <cit.>. We use this approach for estimating the lower bound for MSE on the output activations for PTQ below. §.§ Post-training pruning We also need a similar lower bound for pruning for comparison. To the best of our knowledge we are not aware of the ways to provide a tight lower bound for this problem, therefore we formulate a way to solve a problem for moderate dimensionalities exactly. Similar to quantization, post-training pruning of one layer of the network can mathematically be expressed as solving the following optimization problem: E = min_ŵXŵ-Xw_orig_2^2 s.t. ŵ_0 ≤ s, where the number of non-zero elements s in the solution is theoretically constrained by using the L_0 norm, which is non-convex and not smooth. In order to solve the problem, we introduce the sparsity mask m ∈ℝ^n: E(w) = min_w,mX(m⊙w) -Xw_orig_2^2, s.t. m_1 = s, -m⊙ l ≤ŵ≤m⊙ u l,u > 0, m_i ∈{0,1}, where ⊙ is an element-wise product operation, and l,u ∈ℝ are constants chosen such that any solution satisfies the constraint -m⊙ l ≤ŵ≤m⊙ u. We solve this problem using the branch-and-bound method implemented in the Gurobi solver <cit.> that gives the global solution. §.§ Experiments With our algorithms in the bag, we can now compare quantization versus pruning in the post-training settings with theoretical bounds. In each case, we analyze individual layers of several networks. Given a batch of input data, we optimize the pruned or quantized weights to minimize the error between the output activations and the output of the uncompressed layer. We provide a range between two SNR values for each method in each case. The performance of the heuristic method gives the first value, and the second value is given by the error lower bound or the global solution, which translates into SNR upper bound. As a heuristic method for pruning, we use magnitude pruning with a fixed sparsity mask m and data-optimized weights w given by w = wargminX(m⊙w) -Xw_orig_2^2. This is a convex problem and has a unique solution. As a heuristic method for quantization, we use the mixed-integer solver introduced in <cit.>. We clip every sample in order to satisfy the integer quantization range constraint. We chose a representative set of 10 layers, including 9 convolutional layers (one 3x3 convolutional layer and 8 point-wise convolutions) from MobileNet-V2, EfficientNet-lite, and Resnet-18, and one fully-connected layer from ViT. The full details for reproducing the experiments are given in appendix <ref>. 
Due to the high computational complexity of the global solution for pruning, the layers had to be split into chunks. The slice of 4 input channels over all output channels was used for 3x3 convolutions. In the case of linear layers and point-wise convolutions, slices 36 input features over all the output features were used. The results are shown in figure <ref> grouped by bit-width. The rectangles indicate the full range of the pruning and quantization methods between the heuristic solution and the error lower bound or the global solution. Whenever a rectangle for each chunk intersects the diagonal line, the ranking of the two methods could depend on the optimization method, while in cases below or above the diagonal, the ranking is guaranteed regardless of the optimizer. We see that quantization mostly outperforms pruning for moderate compression, while methods become more comparable for higher compression ratios. § FULL-MODEL COMPARISON Now that we have seen the comparison between the methods in the PTQ setting, we turn to fine-tuning quantized and pruned models. This is the setting where pruning is applied in most, and it is possible that fine-tuning can change the models significantly enough that the performance between the two methods changes. In order to provide a fair comparison of pruning and quantization, we chose the two most commonly used methods with performance competitive to state-of-the-art. For quantization-aware training, we used the widely adapted LSQ method suggested in <cit.>. Following this approach, we jointly learn the weights and quantization scales, keep the batch norm layers unfolded, and re-estimated the batch norm statistics after training to avoid wrong running estimates due to oscillations <cit.>. We use the method suggested in <cit.> for pruning, which gradually increases the sparsity during fine-tuning and re-estimates batch norm statistics after training. In our experiments we used a set of 8 models trained for 3 tasks including Resnet18, Resnet50 <cit.>, MobileNet-V2 <cit.>, MobileNet-V3-small <cit.>, EfficientNet-lite <cit.>, and ViT <cit.> trained on ImageNet classification <cit.>; DeepLab-V3 <cit.> with MobileNet-V2 backbone trained for semantic segmentation on Pascal VOC <cit.>; EfficientDet <cit.> trained for object detection on MS COCO <cit.>. For a fair comparison, we used the same amount of epochs of fine-tuning for each method (full details on hyperparameters are given in appendix <ref>). The results given in table<ref> suggest that pruning almost never leads to higher accuracy than quantization if an equal compression rate is considered. The differences are sufficiently large enough that the small purported improvements by some methods <cit.> will likely not close the gap. § DISCUSSION Other types of pruning While we solely focused in our comparison on unstructured pruning in which individual weights are removed, our results translate to semi-structured and structured pruning. Unstructured pruning has more degrees of freedom and is a strict superset of what can be represented by (semi-)structured pruning. Therefore, unstructured pruning gives an upper bound of the accuracy for all pruning methods. This means that for the cases in which quantization is better than unstructured pruning, quantization will also be better than (semi-)structured pruning. However, we can not make any claims for (semi-)structured pruning for the few scenarios in which pruning is better than quantization. 
Natural sparsity in quantized tensors In our comparison, we used a theoretical compression ratio for quantization, which depends on the bitwidth. However, we also observe that quantized tensors naturally contain many zeros; for example, 8-bit tensors from PyTorch model zoo have an average sparsity of 13% while 4-bit tensors are 35% sparse. We give more details on this in appendix <ref>. Representations learned in the compressed models To provide insights into representations learned during pruning or QAT, we studied the evolution of models during fine-tuning. We found that fine-tuning after pruning tends to recover the original representation, while quantization-aware training leads to learning completely new representations. We provide further details on these experiments in appendix <ref>. Hardware implications So far, we have deliberately avoided discussing the hardware implementations of pruning and quantization and focused solely on the accuracy of both methods at the same ideal compression rates. However, in practice, the hardware considerations do matter for the usability of the methods. The analysis above assumed an idealistic case for pruning in terms of memory size and data transfer. Since the pruning is unstructured, in order to achieve memory savings in practice, one would need at least 1 bit of information for each weight indicating whether a weight is pruned or not. On top of 16-bit weights, this gives a 6.25% storage overhead at a minimum. Quantization does not have this overhead, as INT8 is just 8 bits smaller than 16 bits, and the only storage overhead is a single scaling factor per tensor (or channel). Also, in terms of the cost of computations done by the hardware, there is a difference between the two methods. For pruning, any hardware would have to take the densely stored weights and mask and either decompress them to the dense format with all weights and many 0s or take the pruning into account in the compute itself. No compute benefits are gained in the former, as the dense calculations are done in the uncompressed number format. In the latter, dedicated hardware to take into account the 0s is necessary. The overhead for this is generally non-trivial, leading vendors to implement more semi-structured pruning schemes <cit.>. Similarly, it is rare to see unstructured activation compression for the same reason that this needs to happen algorithmically on-the-fly. In contrast, quantization gives quadratic improvements in the compute. Going from INT8 to INT4 theoretically improves the compute performance by a factor 4, although practical gains depend on the memory overhead (which improves by only a factor 2x) and the existence of other formats in the same hardware compute unit. Impact Using pruning or quantization leads to power reduction on many architectures and enables new applications on mobile platforms. We see only a positive impact from this on the whole. Limitations First, our work has not extensively considered the hardware implications of pruning or quantization. Second, we do not study combinations of pruning and quantization apart from analyzing the inherent sparsity due to pruning. We leave this for future work. Finally, we consider only uniform quantization and ignore the other formats, such as low-precision floating or logarithmic quantization, although these are not likely to change the results presented in this paper. 
§ RELATED WORK Quantization Integer quantization, or fixed-point quantization, is one of the most widely used techniques for inference, allowing to reduce the latency and improved energy efficiency. There are two main families of methods for model quantization. The first family includes post-training quantization (PTQ) methods <cit.>, which improve the model accuracy based on per-layer optimization of the quantized weights in a data-optimized fashion. The second family includes quantization-aware training methods <cit.> which usually fine-tune the model with quantization in the loop using straight-through estimator (STE) for computing the gradient of rounding operations. A more comprehensive overview of quantization methods can be found in <cit.>. Pruning Neural network pruning is one of the oldest methods to compress neural networks <cit.>. A central problem in pruning is how to choose which weights to prune. Approaches published in the literature include: binary gating, in which a binary gate is learned on each individual weight <cit.>; sensitivity-based methods <cit.> in which sensitivity, based on a weights' gradient or hessian diagonal value, is used, and magnitude pruning <cit.>. While conceptually simple, magnitude-based methods have been shown to consistently outperform more intricate methods at scale <cit.>. Weight re-initialization schemes <cit.> or mask-reinitialization <cit.> yield additional minor improvements. While most pruning approaches require fine-tuning and yield unsatisfactory results in post-training scenarios, recent adaptations of Hessian-based sensitivity approaches <cit.>, in which the Hessian of a layerwise reconstruction loss is used instead of the task loss Hessian, show good pruning results in post-training pruning of large language models <cit.>. Combining pruning and quantization A number of works study combinations of pruning and quantization with different levels of granularity <cit.>. Comparing pruning and quantization Despite the large amount of work on pruning, quantization, and combining them, there is little literature comparing the two methods. To the best of our knowledge, there is only one work that performs a comparison of pruning versus non-uniform quantization  <cit.>. The work considers only small-scale models and provides only an empirical comparison with no further analysis. § CONCLUSION We have seen in this paper that in several settings, unstructured pruning only performs better than quantization in rare cases. In our theoretical analysis of distributions and on the real-layer-data, pruning is only better than quantization, compressing the network to an equivalent of 2 or 3 bits. This amount of compression comes with such a degree of a drop in performance it is rarely used in practice. The post-training quantization results are also informative. In the setting without fine-tuning, we have shown with theoretical bounds on many layers in neural networks that quantization is almost always provably better than pruning. Our hypothesis is that quantized layers are more accurate than pruned ones, as shown in the theoretical and PTQ setting, and fine-tuning a network is still highly dependent on that. This is in line with fine-tuning results, in which for many networks trained under the same conditions, quantization always has higher performance than pruning. The conclusion is clear: Quantization generally outperforms pruning for neural networks. 
Taking into account the unfavorable hardware implications for pruning described, it could be argued that the conclusion holds even stronger. Based on this research, we recommend quantizing neural networks when efficiency is required before pruning is explored. plain § EXPECTED QUANTIZATION ERROR COMPUTATION The expected quantization error is a sum of two terms, the rounding error E_r and the clipping error E_c: 𝔼(W-Q(W))^2=E_r+E_c, E_r=∫_q_min^q_maxR_q^2(w)p(w)dw, E_c=∫_-∞^q_min(w-q_min)^2p(w)dw + ∫_q_max^∞(q_max-w)^2p(w)dw. The rounding error E_r can be split into two sub-intervals for each interval (q_i,q_i+1) where the first sub-interval corresponds to rounding up and the second sub-interval corresponds to rounding down: E_r= ∑_i=1^|q|∫_q_i^q_i+1R^2(w)dw= ∑_i=1^|q|∫_q_i^(q_i+q_i+1)/2 (w-q_i)^2 p(w)dw + ∑_i=1^|q|∫_(q_i+q_i+1)/2^q_i+1 (q_i+1-w)^2 p(w)dw. In order to simplify the computation, we introduce the following function: I(a,b,w_0) ∫_a^b (w-w_0)^2 p(w)dw. Thus we can redefine the rounding error as: E_r = ∑_i=1^|q|[ I(q_i,(q_i+q_i+1)/2,q_i) + I((q_i+q_i+1)/2,q_i+1,q_i+1) ]. We note that the clipping error E_cw can also be expressed using I_w(a,b,w_0): E_c= I(w_min,q_min,q_min) + I(q_max,w_max,q_max). where w_min and w_max are the limits of a truncated distribution. The analytical expressions for I(w_min,q_min,q_min) for different distributions are given in the Appendix of <cit.>. Thus, given the explicit definition of the quantization grid and the probability density function, we can analytically compute the rounding error for different distributions, for example, the Gaussian, Uniform, or Student's t-distribution. § TRUNCATED STUDENT'S-T DISTRIBUTION The PDF of a truncated t-distribution with zero mean and unit variance is given by: f(x,ν,l)=p(x,ν)/Φ(l)-Φ(-l)1_-l ≤ x ≤ l, where -l and l are the truncation limits. p(x,ν) are the PDF and the CDF of the non-truncated t-distribution given by: p(x,ν)=1/√(ν)B( 1/2, ν/2)(1+x^2/ν)^-ν+1/2, Φ(x)= 1/2+xΓ(ν+1/2)×2F_1(1/2, ν+1/2; 3/2, -x^2/ν)/√(πν)Γ(ν/2), where F_1 is the hypergeometric function. Kurtosis value of this distribution depending on the symmetric truncation range l is plotted in figure <ref>. § SPARSITY IN QUANTIZED TENSORS As we noted in section <ref>, quantized tensors naturally have some sparsity. The sparsity ratio tends to become higher if lower quantization bit-widths are used. Below we give a table with the average sparsity for all PyTorch model zoo tensors depending on the bit-width: As we can see, the sparsity values become very significant, especially for low bit-width values. § CORRELATION BETWEEN MODEL ACCURACY AND PER LAYER SNR In this section, we measure the correlation between SNR at individual layers of a network and the final model accuracy. It is important to study to which degree the two measures are correlated as we used SNR for our experiments in section <ref>. We note that in this section we use linear scale SNR which is a normalized MSE in contrast to log-scale SNR used in section  <ref>. The results are given in figure <ref>. We pruned and quantized single layers of Resnet-18 and plotted activations SNR versus the full model accuracy drop. We observe a strong correlation between SNR and accuracy which confirms our assumption made in section <ref>. § DETAILS OF PYTORCH MODEL ZOO TENSORS EXPERIMENTS We quantized and pruned all the PyTorch model zoo weights tensors. All the convolutional and fully-connected layers were considered, the list is given below (45 models in total). 
Classification models: alexnet, resnet18, resnet34, resnet50, resnet101, resnet152, resnext50-32x4d, resnext101-32x8d, wide-resnet50-2, wide-resnet101-2, vgg11, vgg11-bn, vgg13, vgg13-bn, vgg16, vgg16-bn, vgg19-bn, vgg19, squeezenet1-0, squeezenet1-1, inception-v3, densenet121, densenet169, densenet201, densenet161, googlenet, mobilenet-v2, mobilenet-v3-large, mobilenet-v3-small, mnasnet0-5, mnasnet1-0, shufflenet-v2-x0-5, shufflenet-v2-x1-0. Object detection models: fasterrcnn-resnet50-fpn, fasterrcnn-mobilenet-v3-large-320-fpn, fasterrcnn-mobilenet-v3-large-fpn, maskrcnn-resnet50-fpn, keypointrcnn-resnet50-fpn, retinanet-resnet50-fpn, ssd300-vgg16, ssdlite320-mobilenet-v3-large. Semantic segmentation models: lraspp-mobilenet-v3-large. Video classification models: r3d-18, mc3-18, r2plus1d-18. § DETAILS OF PER-LAYER EXPERIMENTS In section, we provide details on per-layer experiments we performed in section <ref>. In table <ref> we give the names of the models and the layers we used along with the sub-problem dimensionality we considered for each chunk. Depending on the layer, the experiment took from approximately an hour up to six CPU weeks. § DETAILS OF THE FULL-MODEL EXPERIMENTS In table <ref> we provide details of the full-model experiments. As optimal learning for quantization and pruning depends on the compression ratio, we performed a grid search with the step size corresponding to multiplying the basic learning rate above by negative and positive powers of 0.33. For pruning of all the models except for DeepLab-V3 we gradually increase sparsity during the first 15 epochs of fine-tuning and we use the remaining 5 epochs to recover the accuracy with fixed sparsity. A similar scheme is used for DeepLab-V3 with 150 epochs of gradual sparsity increase and 50 remaining epochs of fine-tuning. For quantization experiments, we use per-tensor quantization and Adam optimizer with a learning rate of 1.0e-5 for quantization scales optimization. We compress weights only and do not prune or quantize activations. § ANALYSIS OF REPRESENTATIONS LEARNED DURING FINE-TUNING IN QAT AND PRUNING As fine-tuning significantly improves the accuracy after pruning or quantization, it is interesting to investigate whether fine-tuning recovers the original model. To answer this question, we study how representations at each layer change during the course of fine-tuning by comparing them to the original model representations. For this purpose, we sample activations from two models, Resnet18 and ViT, after each epoch of fine-tuning, and also directly after quantization/pruning. We measure distances between the original activations and the corresponding activations of the quantized and pruned models. To measure the distance between two feature maps we consider two distance metrics, log-scale SNR and the central kernel alignment (CKA) distance (Kornblith, et al. PMLR, 2019). We show the results in figure <ref>. We observe qualitative agreement between both metrics. Curiously, the results show different trends for pruning and quantization. For pruning, the representations tend to become closer to the original representation during fine-tuning. However, for quantization the fine-tuning rather learns a representation that is different from the original one. In order to provide more convenient visualizations, we show one-dimensional plots of both distance metrics at the last non-classifier layer in figure <ref> (a-b). 
We can see that, even in the cases of larger distances, fine-tuning after pruning tends to recover the representations, while this is not the case for QAT. For ViT we observe qualitatively the same behavior, see figure <ref> (c). We omit the SNR plots, as this measure rapidly becomes negative both for pruning and quantization in ViT. However, the CKA evolution follows the same pattern as in the case of Resnet-18 and confirms similar observations. As we see, fine-tuning during QAT effectively tends to train different representations, while fine-tuning after pruning has a tendency towards recovering the original model.
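As a concrete illustration of the two metrics used above, the following minimal sketch (our own illustration, not the authors' code) computes the log-scale SNR and the linear CKA of Kornblith et al. (PMLR, 2019) for activations flattened to a (samples, features) matrix; the array names and the toy perturbation are assumptions made only for this example.

import numpy as np

def log_snr_db(a_ref, a_cmp):
    # Log-scale SNR (dB) of the compared activations relative to the reference ones.
    noise = a_ref - a_cmp
    return 10.0 * np.log10(np.sum(a_ref ** 2) / np.sum(noise ** 2))

def linear_cka(x, y):
    # Linear CKA between two activation matrices of shape (n_samples, n_features).
    x = x - x.mean(axis=0, keepdims=True)  # center every feature
    y = y - y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(y.T @ x, ord="fro") ** 2
    return hsic / (np.linalg.norm(x.T @ x, ord="fro") * np.linalg.norm(y.T @ y, ord="fro"))

# Toy usage: original activations versus a perturbed (e.g. quantized) copy.
rng = np.random.default_rng(0)
acts_fp = rng.normal(size=(512, 128))
acts_q = acts_fp + 0.05 * rng.normal(size=acts_fp.shape)
print(f"SNR = {log_snr_db(acts_fp, acts_q):.1f} dB, CKA = {linear_cka(acts_fp, acts_q):.3f}")

With these conventions, identical representations give CKA = 1 and a large positive SNR, while linear CKA is insensitive to orthogonal transformations and isotropic scaling of the features, which makes it a convenient complement to the SNR measure.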
http://arxiv.org/abs/2307.00328v1
20230701124751
Solar Radio Imaging at Arecibo: The Brightness Temperature and Magnetic Field of Active Regions
[ "P. K. Manoharan", "C. J. Salter", "S. M. White", "P. Perillat", "F. Fernandez", "B. Perera", "A. Venkataraman", "C. Brum" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.IM", "physics.space-ph" ]
P. K. (Periasamy K.) Manoharan^1 (corresponding author, mano.rac@gmail.com, ORCID 0000-0003-4274-211X), C. J. (Christopher J.) Salter^1 (csalter@naic.edu), S. M. (Stephen M.) White^2 (stephen.white.24@spaceforce.mil), P. (Phil) Perillat^3 (phil@naic.edu), F. (Felix) Fernandez^3 (felix.omar.fernandez@gmail.com), B. (Ben) Perera^1 (bhakthiperera@gmail.com), A. (Arun) Venkataraman^3 (arun@naic.edu), C. (Christiano) Brum^1 (cbrum@naic.edu)
^1 Arecibo Observatory, University of Central Florida, Puerto Rico 00612, USA
^2 Space Vehicles Directorate, Air Force Research Laboratory, Albuquerque, NM, USA
^3 Arecibo Observatory, Yang Enterprises Inc., Arecibo, Puerto Rico 00612, USA
Running head: Manoharan et al., Solar Mapping at X-band and Properties of Active Regions
Strong solar magnetic fields are the energy source of intense flares and energetic coronal mass ejections of space weather importance. The key issue is the difficulty in predicting the occurrence time and location of strong solar eruptions, i.e., those leading to high-impact space weather disturbances in the near-Earth environment. Here, we report regular solar mapping made at X-band (8.1 – 9.2 GHz) with the Arecibo 12-m radio telescope. This has demonstrated its potential for identifying active regions about half a day to a day in advance of their rotation onto the central meridian of the Sun, and for predicting the strongest flares and coronal mass ejections directed towards the Earth. Results show (i) a good correlation between the temporal evolution of the brightness temperature of active regions and their magnetic configurations; (ii) the ability of the mapping data to provide a better picture of the formation sites of active regions and to accurately track their evolution across the solar disk, giving forewarning of intense solar eruptions leading to severe space weather consequences; (iii) the importance of long-term monitoring of the Sun at X-band for understanding the complex three-dimensional evolution of solar features as a function of solar activity. The key point in this study is the identification of the magnetic properties of active regions on the solar disk to aid in improving forecast strategies for extreme space-weather events.
§ INTRODUCTION
Solar magnetic fields play a critical role in a wide variety of phenomena occurring on the Sun, ranging from slowly evolving structures such as coronal holes, sunspots, and coronal loops, to highly dynamical phenomena such as the acceleration of the solar wind, acceleration of charged particles, solar flares, and coronal mass ejections (CMEs). The magnetic field in the solar atmosphere essentially controls the plasma structure, the storage of free magnetic energy, and its release at times of flares and/or mass ejections (e.g., ). The areas of strong magnetic field concentration on the surface of the Sun form active regions, which are embedded in groups of sunspots of the same magnetic polarity, followed by groups of sunspots of opposite polarity. Specifically, active regions coupled to sunspot groups of complex polarity, of γ or γ-δ configuration as per the Hale or Mount Wilson scheme (e.g., ; ), are prone to produce significantly intense flares and CMEs. However, such spots are limited in number to <1% of the total number of spots, compared to the numerous β spots of bipolar character, which produce flares of lower intensity (e.g., ).
Therefore, the key issue is to predict the occurrence time and location of strong solar eruptions, i.e., those leading to high-impact space weather disturbances in the near-Earth environment. The solar magnetic fields are observed directly at the photospheric level, whereas direct field measurements are very difficult in the dynamic corona due to its low density. The inference of the coronal field is largely via data-driven models, which are limited by the basic assumption that the coronal magnetic field remains in static equilibrium. However, the X-ray and extreme ultraviolet (EUV) emissions from the optically-thin corona above an active region, originating at the top of the complex magnetic field network, relate to the inhomogeneous, hot, and dense plasma, and they provide a remarkable view of the magnetic activity above the active region (). In close resemblance, but in the extended solar atmosphere, the radio opacity decreases with increasing observing frequency, and the effective radio emission height moves from the corona at meter wavelengths to the chromospheric region at centimeter wavelengths. The radio signatures of an active region in the frequency range of 5 – 10 GHz provide a powerful diagnostic of the gyro-synchrotron radiation from high-energy electrons trapped in small-scale magnetic field loops, and the observed bright features are gyro-resonance emitting regions where the field strength exceeds 600 G (e.g., ; ; ; ). Typically, the gyro-resonance spectrum peaks in the frequency range of 5 – 10 GHz and provides an indirect measure of magnetic fields above the photosphere (). Moreover, observations in this frequency range do not wholly resemble those at the soft X-ray and/or EUV bands, but they do largely imitate the photospheric magnetograms (; ; ). Since the magnetic complexity of an active region crucially determines the occurrence of intense flares and energetic CMEs (; ), the present study emphasizes the importance of regular mapping of the Sun at ∼8.1 – 9.2 GHz in revealing the magnetic characteristics of eruptive regions, analogous to the magnetogram data. In Section 2 of this paper, we provide a brief description of the Arecibo 12-m radio telescope. In the following sections, we discuss our X-band (8.1 – 9.2 GHz) solar mapping observations and report significant results on: (i) tracking the formation and evolution of active regions leading to strong solar eruptions of space weather importance, (ii) understanding the relationship between the radio emission brightness and the magnetic field properties of the quiet Sun and flaring regions, (iii) the global view of the solar features over several solar rotations, and (iv) the "solar flux density – brightness temperature" relationship. Finally, we give the concluding remarks in Section 8.
§ THE ARECIBO 12-M RADIO TELESCOPE
The Arecibo Observatory operates a 12-m diameter parabolic reflector radio telescope. It was manufactured by Patriot Antennas Inc., Albion, Michigan, USA. The 12-m telescope was installed in 2011 on a hill-top within the observatory campus. It is a fully steerable alt-azimuth-mount telescope with a primary focal length to diameter ratio of 0.375. The telescope can cover an elevation range of ∼5^∘ – 88^∘ and provides coverage in declination between ∼-65^∘ and +90^∘. The hill-top view of the 12-m telescope is shown in Figure <ref>. The geographic coordinates of the telescope site and the mechanical specifications of the antenna are given in Table 1.
§.§ The Room-temperature Receiver Systems
Until 10 April 2023, the 12-m radio telescope operated with room-temperature receiver systems covering the frequency ranges of 2.21 – 2.34 GHz (S-band) and 8.1 – 9.2 GHz (X-band), recording dual-polarization signals with the FPGA-based Mock Spectrometer, which contains seven boxes, each handling a bandwidth of up to 172 MHz (<http://www.naic.edu/~astro/mock.shtml>). The 12-m telescope takes advantage of RFI protection from the Puerto Rico Coordination Zone (PRCZ) at frequencies below 15 GHz, which covers Puerto Rico and nearby Puerto Rican islands (<https://www.naic.edu/ao/scientist-user-portal/science-tools/pr-coordination-zone>). As a stand-alone telescope, the 12-m antenna has its observing time mostly shared between the mapping of the Sun at X-band (e.g., ; ) and the monitoring of pulsars and FRBs at S-band (e.g., ; ). The observing programs of the telescope also include mapping of large angular-size continuum radio sources, monitoring of selected AGNs and blazars, spectral line studies, etc. Additionally, the 12-m telescope provides strong support for student programs, such as the NSF-funded Research Experiences for Undergraduates (REU) and Partnerships in Astronomy and Astrophysics Research and Education (PAARE) programs. The properties of the X-band room-temperature receiver are given in Table 2. The table lists the frequency range, the half-power beam width (HPBW), the typical system temperature (T_sys), and the system equivalent flux density (SEFD). It is to be noted that the system temperature (T_sys) varies as a function of elevation angle, as shown in Figure <ref>, where each band's T_sys is normalized to its average value of 123 K at elevations of ∼80–85 degrees. Its average functional form at X-band (in these normalized units) is given by T_sys = 0.9523 + 0.0477 · sin(E)^{-0.85}, where E is the elevation angle. Correspondingly, the SEFD also varies as a function of elevation angle. The X-band, 8.1 – 9.2 GHz, is divided into seven sub-bands of width 172 MHz, each covered by one box of the Mock spectrometer. In the case of solar mapping, the system is calibrated before and after each map by taking an `on – off' cal measurement (i.e., a value of ∼30 K) at a position away from the Sun. Since even the quiet Sun is strong at X-band (refer to Figures <ref> and <ref>), the solar attenuator is needed, and the uncertainty in the measured brightness temperature is rather small. However, the important point is that the performance of the X-band system can be considerably affected (i.e., attenuated) by rain (T_sys increases in the presence of rain, or even thick cloud cover, along the line of sight to the radio source), and such observations are not considered in the analysis. More extensive details of the performance of the system can be seen at <http://www.naic.edu/~phil/hardware/12meter/sysperf>.
§ SOLAR RADIO MAPPING
Solar mapping with the Arecibo 12-m Radio Telescope was initiated in mid-December 2021. `East-west' raster scans of the Sun were taken, covering a range of ±1^∘ in right ascension and declination with respect to the center of the Sun. Each set of scans provides a calibrated map of the two-dimensional distribution of brightness temperature over the Sun.
Figure <ref> shows examples of full images of the Sun at 8647 MHz, a frequency close to the center of the recorded band of 8.1 – 9.2 GHz, observed on 16 May 2022 and 03 January 2023, along with the near-simultaneous EUV images of the Sun observed by the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO) in the wavelength band of 19.3 nm, and the photospheric magnetograms recorded by the Helioseismic and Magnetic Imager (HMI) on board SDO (; ). The white circle plotted on the radio images indicates the size of the optical disk of the Sun. In a day, typically 5 to 10 maps are made to monitor the evolution of the brightness temperature distribution of the Sun. Since the 12-m radio telescope covers the frequency band of 8.1 – 9.2 GHz, at a given time seven simultaneous maps are made at frequency intervals of ∼172 MHz. Inter-comparison between these maps provides a useful handle on the identification and elimination of radio frequency interference, should this be present. Each radio map represents the brightness distribution of the Sun, in a sense the `average' characteristics of active and quiet regions on the Sun over the scan duration of ∼30 min. However, the other space-based images compared are snapshots of the Sun with shorter exposure times. The spatial resolution of the 12-m telescope at 8.1 – 9.2 GHz is limited to ∼10 arcmin. Nevertheless, our maps provide a clear view of the emission brightness temperatures of active and quiet regions on the Sun. For example, in the radio map observed on 16 May 2022 (top row, left image; Figure <ref>), the presence of an active region is identified with an enhanced brightness temperature, ∼10,500 K, whereas quiet regions are at an average temperature of about 8000 K. The mid-latitude coronal hole (i.e., an open magnetic field region of low density), as indicated by the low-emitting region of the EUV image, is also associated with a low brightness temperature of ≲6000 K. Likewise, the map on 03 January 2023 (bottom row, left image; Figure <ref>) shows the presence of three active sunspot groups, one located in the southern hemisphere and the others in the northern, in correspondence with the EUV image and the magnetogram. The brightness temperatures of these active regions lie between ∼11,000 and 13,000 K.
§.§ Peak Radio Brightness Temperature
In Figure <ref>, the peak brightness of the Sun (in kilokelvin), observed with the 10-arcmin beam of the 8.6-GHz images, is displayed in the top panel. This covers the period between 13 December 2021 and 31 July 2022, which is at the onset of the ascending phase of the current solar cycle 25. The gradual build-up of solar activity is evident in the brightness, which includes the signatures of active regions responsible for X- and M-class flares, as well as quiet days of lesser activity. In the central and bottom panels of the figure respectively, the hourly-averaged EUV (0.1 – 7 nm) irradiance of the Sun observed by the Extreme Ultraviolet Variability Experiment (EVE) on board SDO () and the X-ray (0.1 – 0.8 nm) flux by the NOAA Geostationary Operational Environmental Satellite (GOES-16) (<http://www.swpc.noaa.gov/Data/goes.html>) are plotted for comparison. In the radio brightness plot, each point indicates the typical average peak brightness on the disk of the Sun, and each intense peak is associated with an isolated active region on the Sun.
In addition, when the mapping time coincided (or partly overlapped) with a flare, such a data point recorded the brightening corresponding to the particular phase of the flare. For example, one of the maps made on day number 89 (30 March 2022) included the rising phase of the X-1.3 class flare, the first intense flare of the current solar cycle, and recorded a brightness temperature of ∼42,000 K (see top panel of Figure <ref>). Besides several systematic peaks, the radio brightness profile reveals the interesting result that the 8.6-GHz brightness temperature of the quiet Sun, in the absence of activity, is ∼8000 K, as indicated by the dotted horizontal line in the top panel. This is likely the representative temperature of the upper chromosphere of the quiet Sun at ∼3000 km above the photosphere (e.g., ; ). In the strong radio emission from the Sun between day numbers 90 and 140, three peaks (indicated by the letters `a', `b', and `c' in the top panel of Figure <ref>) show systematic increases and decreases that are much more prominent than those seen in the EUV and X-ray fluxes. These correspond to emission from multi-polar, magnetically active `β-γ-δ' regions developed on the Sun, from which intense flares and energetic CME eruptions were observed (e.g., ; ), and correspond respectively to active regions AR#2975, ARs#2993/2994, and AR#3014. In fact, the `b'-peak's ARs#2993/2994 were the return of AR#2975, which appeared at the east limb of the Sun on 15 April 2022, and during the subsequent rotation decayed to a less active state. For each peak, when the magnetic configuration of the active region attained the β-γ configuration, the brightness temperature increased to a level of ∼13,000 K. As the peak was approached, it reached ∼19,000 – 21,000 K and the magnetic configuration developed to β-γ-δ. The notable point is that all the M-class flares were produced when the brightness temperature was ≥13,000 K, whereas X-class flares occurred close to the peak of ∼20,000 K. The 12-m radio mapping is thus valuable for detecting the rarer intense flares and CMEs when an active region attains the β-δ configuration. Figure <ref> shows typical examples of Arecibo radio images made at 8.6 GHz on 29 March, 24 April, and 17 May 2022, after the development of the β-γ magnetic configuration, with peak brightness temperatures in the range of 13,000 – 16,000 K. Alongside each radio image is shown the same day's complex magnetogram of the bright emitting region from the HMI on board the SDO space mission (). Such an active region, while crossing the central meridian of the Sun, would release Earth-directed CMEs/flares, which are liable to cause severe space-weather impacts in near-Earth space. However, the occurrence of this type of complex active region is infrequent, represented by only ∼1% of the sunspot population (e.g., ). Thus the great advantage of these measurements is that they can identify an eruptive region when it rotates close to the central meridian of the Sun, about half a day to a day in advance, and predict the strongest flares and CMEs.
§ EVOLUTION OF X-BAND BRIGHTNESS TEMPERATURE
The 12-m radio telescope observations presented in this study cover the period between 13 December 2021 and 09 April 2023, which is in the ascending phase of the current solar cycle #25.
The solar mapping observations were terminated at ∼18 UT on 09 April 2023, and the 12-m telescope was released to the telescope engineering team on 10 April 2023 for the installation of the wideband cooled receiver system (refer to Section 8). The regular radio mapping observations so far accomplished have been useful to (i) locate and track several active regions between their appearance at the east limb of the Sun and disappearance (i.e., rotation to the backside of the Sun) at the west limb, (ii) compare the evolution of active regions with their daily magnetic properties, such as estimated area, number of sunspots within the active group, and magnetic configurations, and (iii) study the brightness temperature evolution of the Sun as a function of the phase of the current solar cycle #25. Several active regions crossed the solar disk without significant change, whereas a small fraction evolved from a simple magnetic configuration (e.g., α or β configuration) to complex β-δ and/or β-γ-δ configurations, and their brightness temperature profiles showed a remarkable increase to a maximum value, followed by a gradual decrease, or stayed at the increased level until they rotated off the visible disk of the Sun. In Figure 6(a) to (d), daily X-band peak-brightness temperature measurements are plotted with red dots for the active regions #2993, #3089, #3153, and #3190 during their crossing of the solar disk. The cartoons included in Figure 6(c) depict the typical locations of the active region #3153 on the solar disk at three different epochs. In the cases of active regions #2993, #3089, and #3153, although each of them showed an increase in brightness temperature to a peak value during its passage across the solar disk, followed by a gradual decrease, their temperature profiles vary in shape, width, and intensity during the rotation from the east to the west limb of the Sun. Thus, the differences in the formation, development, and decay of active regions indicate the challenges involved in predicting intense flare/CME space-weather events and emphasize the importance of regular monitoring of active regions, as demonstrated in the current study.
§.§ The Magnetic Configuration of Active Regions
A daily solar active region summary report is compiled, based on the analysis of individual observatory reports from the Solar Optical Observing Network (SOON), by the NOAA Space Weather Prediction Center (SWPC) and the US Air Force (USAF) (<https://www.swpc.noaa.gov/products/solar-region-summary>). The report released at the start of each day typically includes the location of an active region on the face of the Sun, its area, the number of visible sunspots within the region group, and the type of its magnetic configuration. The results of Figure 6(a) to (d) are compared with the daily magnetic properties of the active regions. For instance, close to the maximum of these brightness temperature profiles, the magnetic configurations of active regions #2993, #3089, #3153, and #3190 developed to β-γ, β-γ-δ, β-γ, and β-γ-δ, respectively. The maximum brightness temperatures ranged from ∼16,200 to ∼27,000 K, and each active region went through a different type of evolution.
Additionally, in Figure 6(a) to (d), for each active region the daily estimates of the number of visible sunspots within the active region group are plotted with blue star symbols alongside the brightness temperature measurements, with the corresponding scale shown on the right side of the vertical axis of each plot. The correlation between the brightness temperature of an active region and the development of sunspots (i.e., magnetic activity) is obvious from these plots. However, a comparison between the number of sunspots and the brightness temperature near the maximum suggests a likely range of temperatures for a given number of sunspots, or vice versa. The results are further discussed in Section 5, which examines the observations of flare activity in the period between 13 December 2021 and 09 April 2023 (also refer to Figure <ref>). Figure 7(a) shows the X-band temperature profile of active region #3055, which attained a maximum of ∼12,800 K, less than the maximum values measured for the other active regions shown in Figure 6. Moreover, the magnetic configuration of this active region also evolved from the α type at the east limb of the Sun to the β type around the peak of the temperature profile, but not to a complex state. It returned to the initial state close to the west limb, where the measured temperature was around 8000 K, in agreement with the long-term quiet Sun (or background) emission (see Figure <ref>). Similar to the temperature profile, the sunspot count in the region also evolves from 4 at the east limb to a maximum of 27, and then decays to 3 at the west limb. In Figure 7(b), temperature measurements of the active region are compared with the area estimates of the active region, given in millionths of the solar hemisphere area (MH), available from the daily solar region summary reports (<https://www.swpc.noaa.gov/products/solar-region-summary>). The good agreement observed between "brightness temperature and active group area" has also been confirmed for other active regions. In the next section, we examine the relationship between the number of sunspots in an active group and its area for a large number of flaring sites located close to the central meridian of the Sun.
§ THE MAGNETIC PROPERTIES OF FLARE-PRODUCING ACTIVE REGIONS
In continuation of the above results on the relationship between the brightness temperature and the magnetic properties of active regions, we also carefully examined the magnetic characteristics of regions producing moderate to intense solar flares. Flares are classified according to their X-ray intensity in the wavelength range 0.1 to 0.8 nm. The weakest flares are A- and B-class, followed by C-, M- and X-class, the increase in X-ray intensity from one class to the next being by a factor of 10, where the intensity of a C1 flare is 10^-6 W m^-2 and an X1 flare is at 10^-4 W m^-2 (a simple mapping from peak flux to flare class is sketched at the end of this section). In the current period of study, 13 December 2021 to 09 April 2023, a total of 3295 X-ray flares of intensity C1 and above were observed, their distribution being shown in Figure <ref>. These flares were associated with active regions of different magnetic configurations, ranging from less evolved, simple β types to evolved, complex β-γ-δ stages. A large fraction of them, i.e., 2967 events (∼90%), are in the weak-to-moderate intensity C-class category, 313 events (∼9.5%) are in the M-class, and only the 15 remaining events (≤0.5%) are in the intense X1 and X2 classes.
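For reference, the flux-to-class mapping described above (C1 = 10^-6 W m^-2, a factor of 10 between successive classes) can be written in a few lines. This is an illustrative sketch based only on the thresholds quoted in the text, not part of the observational analysis:

# Map a GOES 0.1-0.8 nm peak flux (W m^-2) to a flare class string.
CLASS_FLOORS = [("X", 1e-4), ("M", 1e-5), ("C", 1e-6), ("B", 1e-7), ("A", 1e-8)]

def goes_class(peak_flux):
    for letter, floor in CLASS_FLOORS:
        if peak_flux >= floor:
            return f"{letter}{peak_flux / floor:.1f}"
    return "below A-class"

print(goes_class(1.0e-6))  # 'C1.0'
print(goes_class(1.3e-4))  # 'X1.3', the class of the 30 March 2022 flare mentioned above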
Thus, in the ascending phase of solar cycle #25, intense flares were limited to a small fraction. From the above set of 3295 flare events, 1028 events that originated close to the central meridian of the Sun, i.e., within ±30^∘ in longitude and latitude of the Sun's center, have been selected. For each of these, the magnetic properties of the flaring site are compared. Figure <ref> displays the scatter plot between the number of visible sunspots in the flare region and its area, which is given in millionths of the solar hemisphere area (MH). In spite of the large scatter in the plot, there is an overall increasing trend between the number of sunspots and the area, with a correlation coefficient of +0.72. The active regions which were responsible for intense flares of M and X classes are marked with circles. The diameter of the circle indicates the flare intensity, as given in the legend at the right-hand side of the vertical axis. Depending on whether or not it evolves, an observed active region may be represented in the plot by a single point or by more than one point. Moreover, on some occasions, an active region might have been the source of more than one flare of the same or varying strength, which are indicated by concentric circles on a point. The above results on the magnetic properties of flaring sites indicate that, for an active region at the time of its evolution or in a developed complex state (e.g., β-δ or β-γ-δ), it is a great challenge to accurately pinpoint its relationship to either the sunspot count in the group or the area. In contrast, as demonstrated in Figures <ref> and <ref>, the X-band solar mapping likely provides a better picture of active regions as they evolve and helps to track their evolution across the solar disk, forewarning of intense solar eruptions.
§ SOLAR SYNOPTIC MAPS
§.§ Brightness Temperature Distribution
Apart from the daily radio maps of the Sun, displaying its brightness temperature distribution over a solar rotation is an efficient way to represent the full surface features of the Sun during a rotation, yielding a global view of emission structures. Moreover, it is a valuable tool for assessing the emission characteristics in relation to the solar magnetic field evolution. Figure <ref> shows an example of a solar synoptic map, obtained using the continuous data from the 12-m telescope at 8647 MHz for the month of April 2022. This includes the full Carrington rotation #2256, spanning from 03 to 30 April 2022, and the end part of the previous rotation. To construct the synoptic map, a central meridian cut of width ∼5 arcmin is taken from the daily radio images, covering from the south to the north pole. Since the spatial resolution of the 12-m telescope at X-band is ∼10 arcmin, the strip of ∼5 arcmin at the center of the Sun corresponds to the peak of the beam. The vertical central meridian strips are assembled horizontally in chronological order and deprojected to get a uniform latitudinal view of the solar features. The synoptic map representation allows us to examine the temporal evolution of surface characteristics of the Sun. For example, in Figure <ref>, two bright regions, one in the southern hemisphere, centered around 03 April (due to the combination of active regions ARs#2978 and #2981, both of which developed to the complex β-γ configuration), and the other in the northern hemisphere, centered around 21 April 2022 (due to the active regions ARs#2993 and #2994 of β-γ configuration), are evident.
Although both of them have a similar sort of magnetic configuration, the later active region's brightness temperature was much higher than that of the earlier one. This indicates that small-scale magnetic structures favorable for accelerating particles to high energies are likely present in the bright emitting region (e.g., ). Other interesting features are the low-emitting strips (i.e., <7000 K) extending from the north and south poles to mid-latitudes, which are due to small- to large-size transient coronal holes (represented by the red color code). Examples are seen around 05 April and 22 April 2022. Consistently, the stability of the long-lived, low-emitting coronal holes in the high-latitude regions is brought out on the map. However, at the south and north edges of the map, the effect of sky dilution tends to reduce the temperature by a roughly constant amount, and a portion of about 7 degrees at the polar edges is avoided while preparing the map. The bright emitting belt of about ±35^∘ caused by the presence of sunspots (represented by the green color code) is apparent in the map. There are also a number of dark green features seen on the equatorial belt, indicating the brightening of low-latitude sunspots. Due to the limited resolution of the 12-m telescope, the fine structures in the map have been smoothed to the beam size. Nevertheless, most of the large-scale features and low- and high-emitting regions are revealed.
§.§ The Photospheric Magnetic Field Distribution
The X-band brightness temperature distribution of the Sun has been compared with the magnetogram map of the photospheric magnetic field observed by HMI on board SDO. Figure <ref> shows the synoptic map prepared with the near-central-meridian data from magnetograms acquired every day at ∼12 UT in the month of April 2022, which corresponds to the ascending phase of the current cycle #25 just after its minimum phase. The positive and negative polarities are respectively shown in white and black shades. In the initial phase of a solar cycle, sunspots appear at high latitudes in the southern and northern hemispheres, as is clearly revealed by two striking horizontal strips on the synoptic map. Moreover, the concentrated magnetic fields above two prominent active regions, corresponding to the bright emitting regions on the radio map (see Figure <ref>), can also be easily identified. Such a map is extremely useful to study the global distribution of solar features as a function of solar rotation.
§ ASCENDING PHASE OF SOLAR CYCLE #25
§.§ Latitudinal Distribution of the Sun's Brightness Temperature
The 12-m telescope observations covered a good portion of the ascending phase of the current solar cycle #25, between 13 December 2021 and 09 April 2023. More than 2000 solar images have been obtained. Since the data covered a large number of solar rotations, Carrington rotations from #2252 to #2268 (plus the beginning portion of rotation #2269; i.e., more than 17 rotations), the daily maps include the characteristics of repeating active regions, which lived for more than one solar rotation. In order to visualize the radio features of the Sun over these rotations, the same procedure used in preparing the radio synoptic map (see Figure <ref>) is employed: a central meridian strip of width ∼5 arcmin is taken from each day's image, and the "Latitude – Time" plot of the brightness temperature is constructed as shown in Figure <ref>.
This plot includes observations taken during the entire period of 13 December 2021 – 09 April 2023. In a broad sense, the "Latitude – Time" plot resembles the "sunspot butterfly" diagram and reveals the changes of the major large-scale magnetic and brightness structures and the development of activity occurring in the ascending phase of solar cycle #25. Despite several observational gaps, the brightness distribution shows a number of interesting features and evidence of a steady increase in solar activity. For reference, the sunspot number and the strength of the smoothed polar magnetic fields are also displayed in Figure <ref>, with the help of data sets from the Wilcox Solar Observatory (<http://wso.stanford.edu/>) and the NASA/GSFC's OMNI database (<http://omniweb.gsfc.nasa.gov/>). In Figure <ref>, the presence of an active region is identified with an enhanced brightness temperature, >8000 K. Some of the active regions of large area (i.e., extended in longitude) have taken many days to cross the central meridian line, and their signatures are wider along the time axis. The quiet Sun regions are at an average temperature of about 8000 K. In the case of a well-developed active region of complex magnetic configuration, a much higher brightness temperature is observed. For instance, temperatures ≥13,000 K serve to identify active regions susceptible to intense flares and energetic Earth-directed CMEs leading to severe space-weather impacts. The notable point is that all the M-class flares were produced when the brightness temperature was ≥13,000 K, whereas X-class flares occurred close to the peak at ∼20,000 K (see Figures <ref> and <ref>). In Figure <ref>, the increase in the number of active regions in association with the sunspot count is shown by the number of bright emitting regions. The development of solar activity is also clearly indicated by the gradual increase in the latitudinal width of the brightness distribution as the sunspot number increases from mid-December 2021 to April 2023. In general, as the solar activity increases, a progressive shrinking of the low-density, low-emitting coronal holes towards the poles is observed (e.g., ; , ). In the corresponding phase of solar cycle #25, the brightness temperature distribution is consistent with the structural change of the coronal holes in the polar regions, as respectively shown by the gradual decrease in polar field strength at latitudes greater than 60 degrees (Figure <ref>) and by the density distribution inferred from the white-light coronagraph image (Figure <ref>). But, as discussed in Section 6.1, the sky dilution can add a constant reduction to the brightness temperature at the south and north edges of the map. The presence of mid- and low-latitude coronal holes at the central meridian of the Sun, shown by X-ray and EUV images of the Sun, is also observed as vertical low-emitting strips on the latitudinal distribution of Figure <ref>. The "Latitude – Time" plot not only presents a clear picture of the development of solar activity, but also provides a three-dimensional view of radio emission at chromospheric heights in association with the solar surface features.
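For concreteness, the strip-assembly step used for the synoptic and "Latitude – Time" maps (Sections 6.1 and 7.1) can be outlined as in the following schematic sketch. This is an illustration under stated assumptions only: the function name, the simple arcsine deprojection, and the uniform latitude grid are ours and do not describe the actual reduction pipeline.

import numpy as np

def latitude_time_map(daily_maps, r_sun_pix, strip_half_width=1, n_lat=181):
    # Assemble a "Latitude - Time" array from disk-centred daily brightness maps.
    # daily_maps: list of 2-D arrays (K), axis 0 = projected N-S, axis 1 = E-W direction.
    # r_sun_pix:  solar radius in pixels, used for the latitude deprojection.
    n_y = daily_maps[0].shape[0]
    y = np.arange(n_y) - n_y // 2                   # pixel offset from disk centre
    # deproject: y = R_sun * sin(latitude)  =>  latitude = arcsin(y / R_sun)
    lat = np.degrees(np.arcsin(np.clip(y / r_sun_pix, -1.0, 1.0)))
    lat_grid = np.linspace(-90.0, 90.0, n_lat)      # uniform latitude grid (degrees)

    columns = []
    for img in daily_maps:                          # one image per day, in chronological order
        c = img.shape[1] // 2
        # narrow central-meridian strip (~5 arcmin), averaged over its width
        strip = img[:, c - strip_half_width:c + strip_half_width + 1].mean(axis=1)
        columns.append(np.interp(lat_grid, lat, strip))
    return np.column_stack(columns)                 # shape (n_lat, n_days)

Stacking the columns day by day then yields maps of the kind shown in Figures <ref> and <ref>; in a real reduction the off-disk rows would be masked rather than clipped to ±90 degrees as is done here for brevity.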
§.§ Density Evolution in the Near-Sun Region
The above results for the brightness temperature distribution observed with the 12-m telescope are also consistent with the latitudinal distribution of Thomson-scattered brightness observed by the LASCO-C2 coronagraph on board the Solar and Heliospheric Observatory spacecraft (e.g., ; <https://lasco-www.nrl.navy.mil/>). Figure <ref> shows the "Latitude – Time" image of the white-light brightness, which is associated with the density of free electrons, measured at 4 solar radii above the east limb of the Sun by the LASCO-C2 coronagraph for the period of the X-band plot shown in Figure <ref> (e.g., ; ). The gradual increase in the latitudinal width of high-density features, associated with closed-field active regions, is consistent with the distribution of X-band brightness temperature. Moreover, the decrease in the width of the low-density coronal hole regions at the poles is consistent with the brightness temperature distribution. Another remarkable feature seen in the LASCO plot is the low-density vertical patch near the south pole centered around 11 – 15 April 2022. This was associated with a large coronal hole located at the south pole of the Sun, which extended from the pole to a latitude of about 50 degrees. LASCO-C2 observed the low-density feature at the east limb of the Sun. As this coronal hole rotated to the center of the Sun in about 7 days, it was recorded by the 12-m telescope around 21 – 22 April 2022 (see Figure <ref>).
§.§ Radio Flux Density as a Function of Observing Frequency
The latitudinal manifestation of the brightness temperature over about 17 solar rotations is apparent in Figure <ref>. The bright patches are regions of gyro-resonance (or gyro-synchrotron) emission, high magnetic field, and significant plasma heating (i.e., due to high number density). This emission has a spectral peak in the range of 5 – 10 GHz (e.g., ; ) and dominates the emission at frequencies below 3 GHz (the optically thick part of the solar atmosphere). It will be useful to compare the X-band temperature distribution with the time series analysis of radio flux measurements of the Sun at different wavelengths (e.g., in the meter to centimeter wavelength range). Such measurements include (i) background emission from the quiet Sun, which does not vary with time (e.g., ), (ii) the slowly varying component, caused by free-free thermal emission (e.g., ), and (iii) transient or sporadic emission caused by non-thermal electrons accelerated at flare and/or CME sites (e.g., ). Figure <ref> shows the daily average solar flux densities observed in the frequency range of 245 – 8800 MHz by the Learmonth Solar Observatory, which is one of the stations of the Radio Solar Telescope Network (RSTN). The solar flux density is normally expressed in solar flux units (sfu) (1 sfu = 10^4 Jansky (Jy), where 1 Jy = 10^-26 W m^-2 Hz^-1). The solar radio flux data, recorded by four sites around the globe, are made available by the NOAA SWPC (<https://www.ngdc.noaa.gov/stp/space-weather/solar-data/>). In Figure <ref>, a gradual increase in the solar activity can be seen in most of the flux density profiles between 13 December 2021 and 10 April 2023.
§.§ Radio Flux Density and Brightness Temperature
It is instructive to compare the solar flux density observed at 8.8 GHz at the Learmonth Solar Observatory (Figure <ref>) with the Arecibo daily average radio brightness temperature at the same frequency.
This provides a relationship between the brightness temperature and the solar flux density measured by two independent telescopes. Figure <ref> (top panel) shows the daily average flux density measured at 8.8 GHz by the 2.4-m telescope at the Learmonth Solar Observatory for the period between 13 December 2021 and 09 April 2023 (the flux-density profile at 8.8 GHz is reproduced from Figure <ref>). In the bottom panel of the figure, the daily average brightness temperature at 8.8 GHz from the Arecibo 12-m telescope is plotted. The detailed profiles from the two telescopes show a one-to-one correlation. In particular, the overall trend of increasing intensity of the emission from the start of the profiles to the end, in association with the increasing solar activity, also shows the agreement between the two sets of observations. Figure <ref> shows the correlation plot between the daily average flux density at Learmonth and the daily average Arecibo brightness temperature at 8.8 GHz. (In this plot, at brightness temperatures >10,000 K, the flux densities tend to deviate moderately towards lower values, which may require a thorough investigation of the linearity of the flux density measurements at high levels of activity.) For a given brightness temperature, mean-to-peak fluctuations of ≤10% in the radio flux densities are observed. An overall linear correlation coefficient of 81% between the two measured parameters indicates the close agreement between them. This correlation was further checked and confirmed, in the Rayleigh-Jeans approximation to Planck's radiation law, by converting the observed X-band brightness temperature to flux density over the disk of the Sun. In this phase of solar cycle #25, the average flux density at 8.8 GHz was ∼245 sfu. The daily ratios between the brightness temperature and the flux density at 8.8 GHz, T_B/S_8.8 GHz, vary between ∼28 and ∼50 K/sfu and have an average value of 38.2 K/sfu.
§ CONCLUDING REMARKS
In this paper, we present X-band solar mapping observations made with the Arecibo 12-m radio telescope in the frequency range of 8.1 – 9.2 GHz. These observations have revealed the highly complex and variable brightness distribution of solar features over several solar rotations of the current solar cycle #25, covering a fair portion of the ascending phase between 13 December 2021 and 09 April 2023. The solar maps have been used to locate and track active regions of space-weather importance. The radio signatures of an active region in the frequency range of 8 – 9 GHz provide a powerful diagnostic of the gyro-synchrotron radiation from high-energy electrons trapped in small-scale magnetic field loops (e.g., ), and the temporal brightness temperature changes of active regions have been compared with their area and magnetic configuration. Although a correlation has been observed between the brightness temperature and the area of the active region (and also with the number of sunspots within it), understanding the differences in the formation, development, and decay of an active region in terms of its area and/or magnetic configuration alone poses a great challenge. Thus, a given area of an active group (or the number of spots within it) may not be a precise parameter when assessing the actual eruptive capacity of a site.
The results of the present study demonstrate that the X-band solar mapping provides an improved picture of the formation location of an active region and helps to accurately track its evolution across the solar disk, forewarning of intense solar eruptions leading to severe space-weather consequences. Additionally, the "Latitude – Time" plot over several solar rotations traces the evolution of quiet regions on the Sun, coronal holes, and eruptive sites. This also provides a three-dimensional picture of the radio emission (i) above sunspots, where the magnetic field is stronger but less divergent, (ii) above high- and mid-latitude coronal holes of largely unipolar and open field regions, and (iii) from large-scale emitting structures, which can be linked to the flux emergence rate from regions below the photosphere. The present analysis emphasizes the importance of the long-term monitoring of the Sun at X-band for understanding the complex three-dimensional evolution of solar features as a function of solar activity. The agreement obtained between the daily radio flux density and brightness temperature data provides a typical scaling factor between the two quantities. However, this has to be checked further, with improved spatial resolution, for the slowly varying emission and for sporadic bursts caused by flares and CMEs. During the X-band solar mapping observations, several CME events were also detected. These observations are useful towards understanding hot magnetized plasma conditions at the initial stages of CMEs at chromospheric heights. For academic interest, we also tracked a few active regions and recorded the brightness temperatures at high temporal resolution. Results of the CME-event study and emission profile analysis will be presented elsewhere. The 12-m radio telescope is currently being upgraded with a wideband, 2.3 – 14 GHz, cooled front-end system, which will considerably enhance its sensitivity, as well as its frequency coverage. Since the new receiver also allows measurement of full-Stokes parameters, its extended bandwidth will provide highly accurate temporal measurements of polarization and dynamic spectra of the solar emission. This will be valuable for studying the evolution of the magnetic-field configurations and plasma conditions in the magnetic current sheets of solar eruptions, which are essential for understanding the origin of space-weather events. Moreover, the mapping of the Sun over a wide frequency band will also provide the temporal and spatial evolution of the eruptive active regions at different layers between the photosphere and the lower corona. In addition, the upgraded 12-m system will allow interplanetary scintillation (IPS) observations of compact background radio sources that can probe the ambient solar wind and structures within propagating CMEs in the three-dimensional inner heliosphere, for regions inaccessible to spacecraft, between the solar wind acceleration region and about the middle of the Sun–Earth distance (∼10 – 100 solar radii) (e.g., ). Indeed, the wide bandwidth made available will allow us to probe solar-wind density structures of different scale sizes, i.e., Fresnel radii, which are useful for understanding the plasma properties associated with propagating space-weather events (e.g., ; ).
A set of solar observations at high and low radio frequencies, with the Arecibo 12-m radio telescope and the Arecibo Callisto Radio Spectrometer (<https://www.e-callisto.org/>) respectively, combined with IPS measurements, will be an asset for a detailed understanding of space-weather events in Sun–Earth space. The Arecibo Observatory is operated by the University of Central Florida under a cooperative agreement with the National Science Foundation (AST-1822073), and in alliance with Universidad Ana G. Méndez and Yang Enterprises, Inc. PKM wishes to thank Tapasi Ghosh for the numerous useful discussions and suggestions during the stages of analysis and the preparation of the manuscript. We acknowledge the EUV data from EVE and the images from AIA and HMI on board the Solar Dynamics Observatory. The X-ray data sets have been obtained from the Geostationary Operational Environmental Satellite (GOES-16). We also acknowledge the OMNI data of NASA/GSFC's Space Physics Data Facility. The Wilcox Solar Observatory provided the source surface magnetic field data. SOHO/LASCO is a project of international cooperation between ESA and NASA. We are grateful to the Learmonth Solar Observatory, one of the stations of the Radio Solar Telescope Network (RSTN), for the provision of the solar flux density data.
http://arxiv.org/abs/2307.01659v1
20230704113740
Pre-main sequence stars
[ "Evgeni Semkov" ]
astro-ph.SR
[ "astro-ph.SR" ]
Pre-main sequence stars
Evgeni Semkov
Institute of Astronomy and National Astronomical Observatory, Bulgarian Academy of Sciences, Sofia, Bulgaria (esemkov@astro.bas.bg)
Dissertation summary. Accepted on 31.05.2023
§ INTRODUCTION
The dissertation presents results from a study of pre-main sequence (PMS) stars, which are in the earliest stages of stellar evolution. These young stellar objects are still in the process of formation, and the energy they emit is produced only by gravitational contraction. The main results were obtained with the telescopes at the National Astronomical Observatory Rozhen, as well as with the use of archival photographic observations and spectral observations obtained in collaboration with colleagues from abroad. Our results are obtained by studying the photometric and spectral variability of PMS stars. Our main goal is to study the processes of star formation, the formation of circumstellar disks, the structure of the circumstellar environment, and the interaction of the star-disk system. The results of our research have been published in 65 scientific papers that have been cited over 300 times.
§ PHOTOMETRIC AND SPECTRAL VARIABILITY AS A METHOD TO STUDY PHYSICAL PROCESSES IN PMS STARS
The photometric and spectral variability of PMS stars is of great importance for modeling star formation processes. On the one hand, photometric variability allows us to easily detect young objects, since they are characterized by rapid variability, in many cases with large amplitudes. On the other hand, stars form in groups and are physically located in the same geometric space, and several variable objects that are at approximately the same stage of evolution can be observed simultaneously. Comparing star systems of different ages can be used to trace the stages of stellar evolution. The physical processes taking place during the formation of stars are extremely important for the next stages of their evolution. The final mass of stars accumulates gradually, and accretion from the circumstellar disk can take several million years. Meanwhile, the disk is replenished by circumstellar matter left over from the formation of the protostar's core. About half the mass of newly formed stars is accreted by the time they become observable in the optical region as PMS stars. This process takes place during episodes of enhanced accretion, when the accretion rate from the circumstellar disk increases by 2-3 orders of magnitude over hundreds of years (Hartmann & Kenyon 1996). Circumstellar disc masses are typically less than 1% of the stellar mass, and the disc cannot be replenished rapidly with new portions of circumstellar matter. Therefore, outbursts resulting from increased accretion probably recur every few thousand years. In the era of all-sky photometric monitoring conducted at multiple wavelengths, the literature is replete with reports of objects with large-amplitude photometric variability. Automated techniques have been developed to identify bursts of young stellar objects in real time, using machine learning to distinguish them from other similar events. But due to the wide variety of variability patterns in PMS stars, these objects do not have distinct light curve patterns that are similar from object to object.
Therefore, many variable phenomena may turn out to be unclassified cases. The reason is that different physical mechanisms can cause variability with similar amplitudes and time scales. High-resolution spectra are needed to probe the physical state of the circumstellar disc and accretion processes. They allow us to distinguish, for example, between FUor outbursts and other types of variability of similar amplitudes (Semkov & Peneva 2011). In the optical and near-infrared ranges, the spectra of young stars provide the main information about the temperature of the gas envelope. For PMS stars that have low accretion rates, the optical and infrared spectra typically show a stellar photosphere in absorption. As the accretion rate increases, the star's spectrum is dominated by emission components, so it becomes impossible to separate the photospheric absorption component. In extremely strong outbursts, with very high accretion rates, implying a very high luminosity of the star, the observed spectrum becomes that of the hot inner disk, which is in absorption, i.e. the spectrum of a hot supergiant. Another main research method that complements spectral observations is interferometry in the near- and mid-infrared regions, which allows the parameters of circumstellar disks to be determined very precisely. This technique can help to reveal the relationship between the structure of the inner part of the disc and the causes of the outbursts. Such photometric and spectral monitoring of the youngest protostars, which show signs of enhanced accretion, is not yet possible, and this complicates the study of the physics of accretion. The possibility of spectral observations is improving with the development of observational methods, but the necessary information about the most obscured or deeply embedded young stellar objects is still very limited. A third very important method of studying PMS stars is the study of archival spectral and photometric data. Studying the history of variable objects can contribute significantly to explaining the causes of various forms of variability, such as outbursts, eclipses, or periodic phenomena. It is also important to study the variability over the entire available range of the electromagnetic spectrum and to look for correlations between the processes in the individual spectral regions.
§ FU ORI AND EX LUPI TYPE OF OUTBURSTS
The registered objects that have shown eruptions of the FUor or EXor type are relatively small in number. In many cases, it is still disputed exactly what type of outburst was observed. And there are also cases where an object is reported as a FUor or EXor, based on just a few observations, and then turns out to actually be another type of object altogether, such as a long-period variable star or an eclipsing system. The total number of known PMS stars in which large-amplitude outbursts have been observed is several dozen. There are several publications in which an attempt has been made to present a list of objects classified as FUor stars. For example, the paper by Reipurth & Aspin (2010) presents a list of 10 FUor objects and 10 FUor-like objects, which have some FUor spectral and photometric characteristics but whose outbursts have not been observed. In the paper of Audard et al. (2014), a list containing 10 FUor objects and 16 FUor-like objects is presented. And in the paper by Connelley & Reipurth (2018), a list of 14 FUors and 10 FUor-like objects is presented.
A comparison of these and similar papers shows that there is no consensus on the classification of FUor type of objects. Some objects in one publication are classified as FUors and in another as FUor-like objects. Any case where a large-amplitude outburst of a PMS star is observed raises the question: FUor or EXor? (Ibryamov & Semkov 2021). The main differences between these two types are the spectral (the presence or absence of certain spectral lines, their profiles and intensity) and photometric properties (the duration and amplitude of the burst and the shape of the light curve). The spectral variability is explained by the different sizes of the star and circumstellar envelope: the absorption regions are significantly larger than the star itself, but the expanding circumstellar envelope is also significantly larger than the absorption regions around the disc. Strong photometric variability at maximum brightness is typical of EXors but not for FUors. However, similar short-term dips in brightness have been recorded for several FUor objects. One of the most famous such events was the minimum in the light curve of V1515 Cyg in 1980, a sharp decrease in brightness of about 1.5 mag. (B band) for several months. This minimum in the brightness of V1515 Cyg is explained by obscuration from dust material ejected from the star (Kenyon et al. 1991). A short dip in brightness (0.4 mag. in I band) in the light curve of the FUor object V733 Cep was observed in 2009 (Peneva et al. 2010). Evidence for a strong brightness variability (ΔV=1.2 mag.) during the time of maximum light during the period from 1986 to 1992 is documented in our photometric study of another FUor object V1735 Cyg (Peneva et al. 2009). The large-amplitude variability of FUors may result from the superposition of the two processes, variable accretion rate and time variable extinction (Semkov & Peneva 2012). In recent papers, such a scenario has been used to explain the brightness variability of two objects with characteristics intermediate between FUors and EXors V1647 Ori (Aspin et al. 2009, Aspin 2011) and V2492 Cyg (Hillenbrand et al. 2013, Kóspál et al. 2013). The time variable extinction appears to be characteristic not only of some Herbig Ae/Be stars (UXor type variables), but also a common phenomenon during the evolution of all types of PMS stars. In the case of FUor object V582 Aur (Semkov et al. 2013), we have direct evidence from multicolor photometry indicating the presence of dust around the star. One possible cause of the variable accretion could be fragmentation of the circumstellar disk. Since FUor phenomenon is likely to be repeatable, it is assumed that almost every protostar goes through several episodes of enhanced accretion, in which the initial mass of the star increases. Stamatellos et al. (2012) suggest that periods of episodic increased accretion may have triggered the initial fragmentation of the circumstellar disc. In the early stages of stellar evolution, disk fragmentation is not possible and accretion onto the stellar surface is assumed to proceed at a constant rate. After several episodic increases in the accretion rate, the circumstellar disk gradually fragments and thus prevents new outbursts of FUor type, or at least changes their parameters. Therefore, it can be assumed that the outbursts of FUors, during different periods of the stellar evolution, can vary in amplitude, duration and shape of the light curve, caused by the different fragmentation state of the disk. 
Strong accretion bursts can also trigger a mechanism to form planets of various masses inside the circumstellar disk. The photometric data we collected confirm the diversity in the shape and type of light curves of FUor objects. Our knowledge of the processes occurring during FUor type of outbursts is still incomplete, and more data need to be collected from regular photometric and spectral monitoring. Attempts to classify FUor objects based on their photometric properties have so far been unsuccessful due to the small number of objects of this type. A comparison of the light curves of known FUors show that they are very different from each other and very rarely repeated. Even the first three 'classical' FUors (FU Ori, V1057 Cyg and V1515 Cyg) show very different rates of brightness increase and decrease (Clarke et al. 2005). The variety of light curves increases even more with the number of well-studied FUors, to which our work also contributes. As a rule, the light curves of FUors are usually asymmetric, with a rapid increase and gradual decrease in brightness. Some objects show a very rapid increase in brightness over several months or a year, such as FU Ori, V1057 Cyg and V2493 Cyg (Semkov et al. 2010, 2012, 2021b, Clarke et al. 2005, Kopatskaya et al. 2013). But in other cases, such as V1515 Cyg, V1735 Cyg, V733 Cep and V900 Mon, the brightness increase can last for several years and even reach 20-30 years (Clarke et al. 2005, Peneva et al. 2009, 2010, Semkov et al. 2021a). Usually, the decline in brightness takes several decades and is very likely to reach a century. But there are objects where a relatively rapid decrease in brightness is observed. For example, V960 Mon, where the brightness decreases by 2 mag. in V band over a period of about five years (Takagi et al. 2020). In our study of the FUor object V582 Aur, we have observed three deep minimums of the star's brightness of about 3 mag. (R band), separated by periods of about five years (Semkov et al. 2013, Ábrahám et al. 2018). But there are also objects that for long periods of time, practically do not change their brightness, as in the cases of V1735 Cyg (Peneva et al. 2009) and Parsamian 21 (Semkov & Peneva 2010). In this respect, the light curve of the FUor object V733 Cep is unique, with its roughly symmetrical shape (Peneva et al. 2010). This variety of photometric properties strongly supports the assumption that FUor objects are not a homogeneous group and that the causes of this phenomenon may be several mechanisms of a different nature (Vorobyov et al. 2021). § ECLIPSES OF UX ORI TYPE The results of our observations of PMS stars strongly suggest that UXor type of variability is a widespread phenomenon (Semkov et al. 2015, 2019). It is typical not only of Herbig Ae/Be stars and T Tauri stars from early spectral types, but also of T Tauri stars of late spectral types and stars with relatively lower masses (Semkov et al. 2017). This result can be explained by the low efficiency of star formation where most of the mass of molecular clouds does not participate in star formation. This matter remains in the vicinity of the protostars, in a number of cases forming inhomogeneous dust clouds moving in orbit around them. Light curves over long periods of observations provide strong evidence that the deep minimums in the brightness of GM Cep is caused by obscuration of the star by circumstellar dust structures (Semkov & Peneva 2012). 
The inhomogeneity of the dust clouds may indicate an advanced evolution of the protoplanetary disk in the transition from micron-sized dust particles to the formation of kilometer-sized planetesimals (Chen et al. 2012). Accretion, combined with viscous light scattering, heats the circumstellar disk of the young stellar object. As accretion slows and the size of the dust particles grows, the disk becomes passive, in the sense that the dust absorbs starlight, heats up and re-radiates in the infrared region. It can be argued that both the light transit time and the observed obscurations by dust particles provide a reasonable model of the absorbing medium. Around the stars there is a region with dimensions of several tens of astronomical units, forming a circumstellar disk, which is in the form of a ring or a spiral structure. This region is tens of AU from the star and consists mostly of particles around 10 μm or larger. § WEAK LINE T TAURI STARS, CLASSICAL T TAURI STARS AND HERBIG AE/BE STARS The most common variability in T Tauri and Herbig Ae/Be stars is periodic or aperiodic variability with small amplitudes. In many cases, more than one type of variability can be observed on the same star. Periodic variability in most cases is explained by the presence of spots of reduced temperature, which are located on the star surface. The presence of such spots, by analogy with the Sun, is the result of magnetic activity. By examining the amplitude of the observed variability, we can determine the intensity of the magnetic field and the location of the spots. The results obtained for some of the objects in this dissertation show that such cool spots can persist for several years, as we do not observe a change in ephemeris, or a large change in amplitude, between individual rotation periods (Poljančić Beljan et al. 2014, Ibryamov et al. 2015). Large-amplitude periodicity in PMS stars is usually observed in very few cases and is usually associated with the presence of hot spots due to accretion from the circumstellar disk. Usually, instabilities in the disc lead to the formation of a flow directed towards the surface of the star and oriented along the magnetic field lines. Spots on the stellar surface that have a decreased or increased temperature can migrate along the stellar coordinates and change their area and temperature. This process is demonstrated by examples of several T Tauri stars with weak lines in the IC 348 region, for which periodicity is available, but the light curves for different rotation periods have a different shape (Nordhagen et al. 2016). Based on our observations, we would not be able to register such a change in the shape of the light curve, and the programs we used would show a lack of periodicity. The reason is that our data do not have a dense coverage of several hours of observations on consecutive nights, but are scattered over long periods of time. Acknowledgments: This work was partly supported by the Bulgarian Scientific Research Fund of the Ministry of Education and Science under the grants DN 08-1/2016 and DN 18-13/2017. The authors thank the Director of Skinakas Observatory Prof. I. Papamastorakis and Prof. I. Papadakis for the award of telescope time. This research has made use of the NASA's Astrophysics Data System Abstract Service, the SIMBAD database and the VizieR catalogue access tool, operated at CDS, Strasbourg, France. Ábrahám, P., Kóspál, Á., Kun, M., et al. 2018, ApJ, 853, 28 Aspin, C., Reipurth, B., Beck, T. L. et al. 2009, ApJ Let., 692, L67 Aspin, C. 
2011, AJ, 142, 135 Audard, M., Ábrahám, P., Dunham, M. M., et al. 2014, in Protostars and Planets VI, ed. H. Beuther et al. Tucson, AZ: Univ. Arizona Press), p. 387 Chen, W. P., Hu, S. C. -L., Errmann, R. et al. 2012, ApJ, 751, 118 Connelley, M. S., Reipurth, B. 2018, ApJ, 861, 145 Clarke, C., Lodato, G., Melnikov, S. Y., Ibrahimov, M. A. 2005, MNRAS, 361, 942 Hartmann, L., Kenyon, S. J. 1996, ARA&A, 34, 207 Hillenbrand, L. A., Miller, A. A., Covey, K. R., et al. 2013, AJ, 145, 59 Ibryamov, S. I., Semkov, E. H., Peneva, S. P. 2015, PASA, 32, e021 Ibryamov, S. I., Semkov, E. H. 2021, BlgAJ, 35, 54 Kenyon, S. J., Hartmann, L. W., Kolotilov, E. A. 1991, PASP, 103, 1069 Kopatskaya, E. N., Kolotilov, E. A., Arkharov, A. A. 2014, MNRAS, 434, 38 Kóspál, Á., Ábrahám, P., Acosta-Pulido, J. A., et al. 2013, A&A, 551, A62 Nordhagen, S., Herbst, W., Rhode, K. L., Williams, E. C. 2006, AJ, 132, 1555 Peneva, S. P., Semkov, E. H., Stavrev, K. Y. 2009, Ap&SS, 323, 329 Peneva, S. P., Semkov, E. H., Munari, U., Birkle, K. 2010, A&A, 515, A24 Poljančić Beljan, I., Jurdana-Sepic, R., Semkov, E. H., Ibryamov, S., Peneva, S. P., Tsvetkov, M. K. 2014, A&A, 568, A49 Reipurth, B., Aspin, C. 2010, in Evolution of Cosmic Objects through their Physical Activity, eds. H. A. Harutyunian, A. M. Mickaelian, Y. Terzian (Yerevan: Gitutyun), 19 Semkov, E., Peneva, S. 2010, IBVS, 5939, 1 Semkov, E. H., Peneva, S. P., Munari, U., Milani, A., Valisa, P. 2010, A&A, 523, L3 Semkov, E., Peneva, S. 2011, BlgAJ, 17, 88 Semkov, E. H., Peneva, S. P. 2012, Ap&SS, 338, 95 Semkov, E. H., Peneva, S. P., Munari, U. et al. 2012, A&A, 542, A43 Semkov, E. H., Peneva, S. P., Munari, U. et al. 2013, A&A, 556, A60 Semkov, E. H., Peneva, S. P., Ibryamov, S. I. 2015, A&A, 582, A113 Semkov, E., Ibryamov, S., Peneva, S. 2017, BlgAJ, 27, 75 Semkov, E. H., Ibryamov, S. I., Peneva, S. P. 2019, SAJ, 199, 39 Semkov, E. H., Peneva, S. P., Ibryamov, S. I. 2021a, SAJ, 202, 31 Semkov, E., Ibryamov, S., Peneva, S. 2021b, Symmetry, 13, 2433 Stamatellos, D., Whitworth, A. P., Hubber, D. A. 2012, MNRAS, 427, 1182 Takagi, Y., Honda, S., Arai, A., Takahashi, J., Oasa, Y., Itoh, Y. 2020, ApJ, 904, 53 Vorobyov, E. I., Elbakyan, V. G., Liu, H. B., Takami, M. 2021, A&A, 647, A44
http://arxiv.org/abs/2307.00226v1
20230701050246
S-Omninet: Structured Data Enhanced Universal Multimodal Learning Architecture
[ "Ye Xue", "Diego Klabjan", "Jean Utke" ]
cs.CV
[ "cs.CV", "cs.LG" ]
Multimodal multitask learning has attracted an increasing interest in recent years. Single-modal models have been advancing rapidly and have achieved astonishing results on various tasks across multiple domains. Multimodal learning offers opportunities for further improvements by integrating data from multiple modalities. Many methods have been proposed to learn from a specific type of multimodal data, such as vision and language data. A few of them are designed to handle several modalities and tasks at a time. In this work, we extend and improve Omninet, an architecture that is capable of handling multiple modalities and tasks at a time, by introducing cross-cache attention, integrating patch embeddings for vision inputs, and supporting structured data. The proposed Structured-data-enhanced Omninet (S-Omninet) is a universal model that is capable of learning from structured data of various dimensions effectively with unstructured data through cross-cache attention, which enables interactions among spatial, temporal, and structured features. We also enhance spatial representations in a spatial cache with patch embeddings. We evaluate the proposed model on several multimodal datasets and demonstrate a significant improvement over the baseline, Omninet. § INTRODUCTION Deep learning has yielded great success in the last decade on tasks across many domains with various types of unstructured data such as images and text. Specific models are carefully designed and tailored for a particular type of data. For example, image recognition models <cit.> are built with convolutional neural networks (CNNs), while recurrent neural networks (RNNs) have shown success for sequential data. Although some recent works use CNNs in language tasks and transformers in image tasks <cit.>, these tasks are still single-modal tasks. In recent years, multimodal learning has attracted increasing interest, as single-modal models are advancing rapidly and achieving astonishing results on various tasks across multiple domains, such as vision <cit.> and language <cit.>. Multimodal inputs are common in complex tasks, and multimodal learning has been shown to be more effective than models that use only a single modality in several fields, such as healthcare, where models are trained to utilize medical images and electronic health records <cit.>, and autonomous driving, where intelligent systems are built to process various signals in different modalities <cit.>. However, existing multimodal learning models are still usually tailored to specific tasks or modalities and are not easily extended to accommodate new types of data. Different models for different tasks need to be separately designed and maintained. In business, new tasks may be added over time as a business grows, and new types of data may also need to be considered to best utilize the extra information. A single architecture that handles multiple modalities and multiple tasks therefore becomes increasingly appealing. 
MultiModel <cit.> is probably the first attempt at building a single model that can solve tasks across multiple domains. It consists of several modality nets, an encoder, and a decoder. Each modality net handles one type of input data. A mixer module gathers encodings from modality nets and feeds them to the decoder. However, for a single task, MultiModel does not have support for inputs having more than one modality, such as Visual Question Answering (VQA). In order to address this challenge, another multi-model multi-task architecture, Omninet, is proposed <cit.>. Similar to modality-nets, Omninet has a visual peripheral to encode images and videos and several language peripherals to encode text data in different languages. Inputs in different modalities are further encoded into temporal and spatial caches, which are fed into a transformer-based decoder. Omninet is shown to have competitive performance in several tasks compared with state-of-the-art models. However, there are still a few shortcomings of Omninet. First, each modality in Omninet is encoded in a separate stream. Existing works <cit.> have shown that it is beneficial for multimodal learning models to encode one modality with the information of other modalities. Second, it lacks support for structured data. In many real world applications structured data play an important role, even in a multimodal scenario. For example, similar medical image findings may suggest different diagnoses given different laboratory test results <cit.>. Unfortunately, many of the practical applications are built on proprietary data sets making this specific topic less accessible for academic research. However, there are countless cases where full context, in the form of structured and unstructured, is critical for making accurate decisions. A straightforward way of extending Omninet to utilize structured data is concatenating structured features with the final encoding vector of the other modalities. However, such a late fusion mechanism ignores the potential informative interactions between structured and unstructured data. Furthermore, it would not deal with a varying number of structured data sources, because naive concatenation fixes the number of structured data sources in the network setup. In this work, we extend Omninet with a design of a structured peripheral and structured cache. Instead of common encoding methods that encode categorical structured features into one vector, such as one-hot encoding, we encode structured data using entity embeddings <cit.>. We store encodings of structured data into the structured cache. It interacts with other caches through a cross cache attention mechanism, which we propose to enhance the encodings by considering those from other caches. We also modify the image peripheral to produce lower-level representations and divide the encoded images into patches before interacting with other caches. High-level representations used in Omninet might lose spatial signals which can be very useful to help encode other caches. Our main contributions are summarized as follows. * We extend Omninet to handle structured data effectively and to deal with a various number of structured data sources. * We enhance its encoding process with cross cache attention and incorporate the idea of patches to enable cross cache interactions on lower level image representations. 
* The proposed model is evaluated on several multimodal datasets, which cover a wide range of modalities, including images, textual inputs, structured data, and videos. It demonstrates a significant improvement against Omninet on all datasets. The source code is will be disclosed once the paper is accepted. In Section <ref>, we discuss related work. The proposed model is described in Section <ref>. The datasets and experimental setup are described in Section <ref>. Section <ref> discusses the computational results and the conclusions are drawn in Section <ref>. § RELATED WORK Most existing works in multimodal learning focus on specific tasks with a fixed set of modalities, such as images and text. Many works concatenate image and text inputs and encode them together with Transformer <cit.>. VideoBERT <cit.> and VisualBERT <cit.> extend such a transformer-based model to video data. ViLBERT <cit.> keeps a separate stream for each modality and enables cross-modality connections to encode one modality with the other. Another category of works extends a multimodal learning model in the context of multiple tasks. With the encoder-decoder architecture, MultiModel <cit.> and UniT <cit.> are able to handle multiple tasks, such as classification and sequence prediction, with just one model. For a single task, MultiModel does not have support for inputs having more than one modality. UniT addresses this issue by encoding images and text with transformers and concatenating encodings from both modalities. It is still limited to only image and language modalities. Perceiver IO <cit.> is able to handle tasks with different modalities. However, it does not support multiple tasks of various modalities at the same time. To the best of our knowledge, Omninet <cit.> is the most general architecture for multimodal and multi-task learning. Multiple tasks with inputs in different modalities can be trained together in one model. However, it lacks support for structured data. In addition, the lack of interactions between caches limits its ability in the encoding process. To accommodate structured data, Omninet can be easily extended using late fusion, which is widely used in combining structured data with other features <cit.>. For example, we can encode structured data in one-hot encoding with a few linear layers and concatenate the output with the unstructured feature vector. However, it has a few limitations. The dimension of concatenated structured data depends on the number of structured data sources, which means the model's dimension needs to be changed to adapt to new use cases when the number of structured data sources are different. In addition, the information in the structured data may help in encoding other modalities but late fusion mechanisms do not provide such interactions. Entity embeddings <cit.> have been used to encode structured data <cit.>. The encoded structured features are usually concatenated into one feature vector and then combined with other features using late fusion techniques <cit.>. Our proposed cross-cache attention mechanism falls into the category of attention-based fusion <cit.>. Previous works using attention-based fusion in multimodal learning focus on vision-language interactions <cit.>. This kind of attention-based fusion has not yet been studied on structured data and image patches in the multimodal multitask learning scenario. § MODEL Figure <ref> shows the architecture of the proposed model. 
A single sample input can be any combination of zero or more of the following modalities: an image X_image∈ℝ^H × W × C, video frames X_video∈ℝ^F × H × W × C, textual input (i.e., sentences of Q words) and structured data X_structured∈ℝ^M. For example, a sample can be one or more images, one or more sentences and a video. The number of video frames F, the length of the textual input Q and the number of structured features M can differ across samples. Omninet has a peripheral for each modality, which produces intermediate encodings of that modality. The encodings are organized into two cache lists: modalities represented as spatial matrices are added to the spatial cache list, and those corresponding to sequences are added to the temporal cache list. A sequence of spatial matrices (e.g., video frames) has both spatial and temporal aspects. Omninet encodes each spatial matrix through the vision peripheral and stores the encodings in the spatial cache. Each matrix encoding is also aggregated through a pooling layer and stored in the temporal cache. A self-attention-based temporal encoder is used to calculate embeddings of the temporal cache. Both caches are fed to the Decoder, which is based on the decoder architecture from Transformer <cit.> with an additional gated-attention layer to handle the spatial cache. There is also a domain/task encoding. The domain encoding is appended to intermediate encodings from a peripheral to distinguish among different modalities. The task encoding is used as the input of the Decoder to identify different tasks. There are three aspects that Omninet does not capture: (1) structured data are neither spatial nor temporal and thus cannot be directly modeled; (2) in the embedding phase the two caches are treated independently of each other, whereas in practice they often interact; and (3) the spatial cache is used at the pixel level with no notion of locality. In the following subsections, we introduce in detail the encoder of S-Omninet, including the new structured stream (structured peripheral and structured cache), the enhancement of the spatial cache through patching, the cross-cache attention modules, and additional self-attention modules on caches other than the temporal cache. §.§ Structured Peripheral and Cache We propose to use a structured peripheral to encode structured data and put the encodings into the structured cache. In order for the structured cache to effectively communicate with the other two caches, we use entity embeddings to encode categorical features in the structured peripheral. Let us assume there are C categorical features, denoted as s_1, s_2, ..., s_C. Each state of a categorical feature is mapped to a vector as s_i →𝐬_i ∈ℝ^D through a trainable embedding layer of dimension D. It functions as if we “tokenize” all possible states of each feature. For example, a color feature can have different embedding vectors for the values `red' and `blue.' Our model may learn similar embeddings for the structured value `red' and the textual input `red.' We argue that this helps the model match similar concepts between structured and unstructured data. As the entity embeddings encode each category separately, we may lose useful patterns that can be learned from all structured features as a whole. Therefore, besides the entity embeddings, we also keep an encoding of the whole structured sample, including continuous features. The structured peripheral transforms the whole structured sample X_structured using one-hot encodings and encodes it through linear layers. The encoded features are denoted as 𝐬∈ℝ^D. 
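To make the structured peripheral concrete, the following is a minimal sketch of the encoding step just described, assuming PyTorch; the feature cardinalities, layer sizes and class name are illustrative assumptions rather than the authors' implementation, and only the embedding dimension D = 512 follows the configuration stated later. The whole-vector encoding is placed first in the cache, followed by the per-feature entity embeddings, matching the ordering described in the Results section; the domain encoding and the re-projection applied afterwards are omitted for brevity.

```python
# Illustrative sketch of the structured peripheral (names and sizes are assumptions).
import torch
import torch.nn as nn

class StructuredPeripheral(nn.Module):
    def __init__(self, cardinalities, num_continuous, dim=512):
        super().__init__()
        # One trainable entity-embedding table per categorical feature s_1, ..., s_C.
        self.entity_embeddings = nn.ModuleList(
            [nn.Embedding(card, dim) for card in cardinalities]
        )
        # Encoder for the whole structured vector (one-hot categoricals + continuous values).
        one_hot_dim = sum(cardinalities) + num_continuous
        self.whole_encoder = nn.Sequential(
            nn.Linear(one_hot_dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )
        self.cardinalities = cardinalities

    def forward(self, categorical, continuous):
        # categorical: (B, C) integer category indices; continuous: (B, M_cont) floats.
        entity = torch.stack(
            [emb(categorical[:, i]) for i, emb in enumerate(self.entity_embeddings)], dim=1
        )                                                          # (B, C, D) entity embeddings
        one_hot = torch.cat(
            [nn.functional.one_hot(categorical[:, i], card).float()
             for i, card in enumerate(self.cardinalities)], dim=1
        )
        whole = self.whole_encoder(torch.cat([one_hot, continuous], dim=1))  # (B, D)
        # Structured cache entries: whole-vector encoding first, then per-feature embeddings.
        return torch.cat([whole.unsqueeze(1), entity], dim=1)               # (B, 1 + C, D)

# Toy usage: five categorical features with the given cardinalities and 8 continuous values.
peripheral = StructuredPeripheral(cardinalities=[10, 4, 7, 3, 12], num_continuous=8)
cats = torch.randint(0, 3, (2, 5))                       # valid for every cardinality above
cache_entries = peripheral(cats, torch.randn(2, 8))      # shape (2, 6, 512)
```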
We then append structured domain encoding to entity embedding features and the traditional whole structured feature and project them back to the dimension D before inserting them into the structured cache. Multiple structured data sources can be encoded and handled by the structured cache. We denote the structured cache containing N_s encodings as X_s∈ℝ^N_s × D. The structured cache is further encoded in the cross-cache attention modules and self-attention layers, which we introduce later. We use the first encoding of the output from self-attention layers as a representation of the whole structured data and concatenate it with the decoder output before the final prediction. §.§ Spatial Cache §.§.§ Patches Spatial components of a sample are encoded by a vision peripheral, which produces an H' × W' × d_m feature map for an image and F such feature maps for a video, for example. Instead of directly flattening the feature maps and storing them in the spatial cache as done by Omninet, we divide the feature maps into a sequence of 2D patches, similar to ViT <cit.>. Compared with Omninet, the patches in our model preserve more spatial information than the highly encoded feature maps. A patch of a matrix feature map at location (i,j) with height p_h and width p_w is the rectangle area of the matrix with two diagonal coordinates (i,j) and (i+p_h,j+p_w). We obtain patches at each valid location of a matrix feature map with a certain stride. A location is valid if we can obtain a patch without stepping out the boundary. We denote a sequence of patches of a matrix feature map as 𝐳_p∈ℝ^T_p × d_p, where T_p = (H'W')/(p_hp_w) is the number of patches and d_p = (p_h· p_w) · d_m. Each patch is mapped to the desired dimension D that matches the size of latent vectors in our attention layers in the spatial cache. All encoded patches of a matrix feature map are denoted as 𝐱^0_p = [𝐳^1_p𝐄,𝐳^2_p𝐄, ..., 𝐳^T_p_p𝐄], where 𝐄∈ℝ^d_p × D. For sequences of matrices (videos), we divide each matrix's feature map into patches and store patches of all matrices in the spatial cache. §.§.§ Positional Embeddings A position embedding is added to each patch to retain position information. A sequence of encoded patches is denoted as 𝐱_p = 𝐱^0_p + 𝐄_pos, where 𝐄_pos∈ℝ^T_p × D. We use learnable positional embeddings as it shows a better performance than the fixed positional embeddings in ViT <cit.>. In the model, the patches may also come from videos, which means temporal relations may exist among patches. In order to capture both the spatial and temporal aspects, two sets of embeddings are learned, each for one of the aspects and each with size T_p ×D/2. The spatial embedding 𝐄^p_pos∈ℝ^T_p ×D/2 retains a patch's position information within an image or a video frame. We encode 2D positional information in the spatial embedding and two sets of embeddings are learned, one for each axis. Specifically, we learn X-embedding 𝐄^p_posx∈ℝ^T_p ×D/4 and Y-embedding 𝐄^p_posy∈ℝ^T_p ×D/4. Based on the patch's coordinates, we concatenate the X-embedding and Y-embedding to obtain its spatial embedding. The temporal embedding 𝐄^f_pos∈ℝ^F ×D/2 captures the position of the frame. Then, based on a patch's index of the frame and its position inside the frame we concatenate 𝐄^f_pos and 𝐄^p_pos to get the final temporal-spatial embedding 𝐄_pos. Patches are further encoded with the corresponding domain encoding before added to the spatial cache. We denote the spatial cache with a total of N_p encoded patches as X_p∈ℝ^N_p × D. 
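A sketch of the patching and temporal-spatial positional embedding described above may help; PyTorch is assumed. The 2 × 2 patch size, the 14 × 14 feature-map grid, D = 512, the D/4 X- and Y-embeddings and the D/2 frame embedding follow the text, while the non-overlapping stride, the module layout and all names are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch of the spatial-cache patching with temporal-spatial positional embeddings.
import torch
import torch.nn as nn

class PatchSpatialCache(nn.Module):
    def __init__(self, d_m=128, p=2, grid=14, max_frames=64, dim=512):
        super().__init__()
        self.p, self.grid = p, grid
        self.proj = nn.Linear(p * p * d_m, dim)                  # each flattened patch -> D
        n_side = grid // p
        # Learnable positional pieces: X and Y each of size D/4, frame index of size D/2.
        self.pos_x = nn.Parameter(torch.randn(n_side, dim // 4))
        self.pos_y = nn.Parameter(torch.randn(n_side, dim // 4))
        self.pos_f = nn.Parameter(torch.randn(max_frames, dim // 2))

    def forward(self, feat):
        # feat: (B, F, d_m, H', W') feature maps from the vision peripheral.
        B, F, C, H, W = feat.shape
        x = feat.reshape(B * F, C, H, W)
        # Non-overlapping p x p patches, flattened to (B*F, T_p, p*p*C), then projected to D.
        patches = nn.functional.unfold(x, kernel_size=self.p, stride=self.p)
        z = self.proj(patches.transpose(1, 2))                   # (B*F, T_p, D)
        n = self.grid // self.p
        ys, xs = torch.meshgrid(torch.arange(n), torch.arange(n), indexing="ij")
        spatial = torch.cat([self.pos_x[xs.reshape(-1)],
                             self.pos_y[ys.reshape(-1)]], dim=-1)         # (T_p, D/2)
        z = z.reshape(B, F, -1, z.shape[-1])
        temporal = self.pos_f[:F].unsqueeze(1)                            # (F, 1, D/2)
        pos = torch.cat([temporal.expand(F, spatial.shape[0], -1),
                         spatial.unsqueeze(0).expand(F, -1, -1)], dim=-1)  # (F, T_p, D)
        z = z + pos.unsqueeze(0)
        return z.reshape(B, -1, z.shape[-1])                     # spatial cache X_p: (B, N_p, D)

# Toy usage: two samples, three frames of 14x14 feature maps with 128 channels.
cache = PatchSpatialCache()
X_p = cache(torch.randn(2, 3, 128, 14, 14))                      # shape (2, 3*49, 512)
```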
§.§ Temporal Cache The temporal cache consists of encodings from domains that have a temporal dimension, such as textual inputs and videos. Textual inputs are encoded first by the corresponding language peripheral and domain encoding. Then they are further encoded by a temporal encoder before put in the temporal cache. For videos, the temporal cache stores frame-level features, which are obtained through pooling from patches. We denote the temporal cache with N_t encodings as X_t∈ℝ^N_t × D. §.§ Cross-cache Attention We define the cross-cache attention of cache X_α∈ℝ^T_α× d_α and X_β∈ℝ^T_β× d_β as CCA (X_α, X_β) ∈ℝ^T_α× D. The destination cache X_α provides queries and the source cache X_β provides keys and values. Our model consists of 3 streams of cross-cache attention: Y_s := concat (CCA(X_s, X_p), CCA(X_s, X_t)), Y_t := concat (CCA(X_t, X_p), CCA(X_t, X_s)), Y_p := concat (CCA(X_p, X_s), CCA(X_p, X_t)) . Different from other cross-modality attention models <cit.>, our model consists of a stream for the structured modality. Unstructured inputs are broken down into spatial and temporal caches instead of encodings of each modality separately. Additionally, instead of performing cross-modality attention on the pixel-level embeddings, our model captures spatial information during cross-cache attention by taking advantage of the patch features. §.§ Late Self Attention As shown in Figure <ref>, we add self-attention layers after cross-cache attentions. In the original design of Omninet, modalities with the temporal dimension are encoded with self-attention based temporal encoder before being put into caches. We initially applied cross-cache attentions after the self-attention layers. However, we found that cross-cache attentions cannot capture interactions effectively. As shown in Figure <ref>, we see that all spatial cache encodings have almost the same attention pattern on temporal cache encodings. The attention scores are all very close, ranging from 0.058 to 0.067. The attention scores on each row sum up to 1. For each spatial cache encoding, its attention scores on all 16 temporal cache encodings are almost the same, close to the average 1/16 or 0.0625. The reason is that self-attention makes the encodings in temporal cache similar, i.e., having large cosine similarity. By moving the self-attention layers after the cross-cache attentions, we observe a different pattern, as shown in Figure <ref>. It shows that cross-cache attentions pay various attentions to different spatial-temporal encoding pairs. The most relevant pair gets an attention score around 0.7, while the scores of irrelevant pairs stay well below 0.3. One encoding getting an attention score of 0.7 means the other 15 encodings together get only 0.3. The attention scores vary much larger than the previous case, see more discussions in Section <ref>. Note that, MulT <cit.> also put self-attention layers after their cross-modality attentions. The authors empirically argue that this design benefits cross-modality attention without further explanations. In Omninet, the whole frame encodings are also stored in the temporal cache, so it can learn temporal correlations between frames. We preserve this design in our model. However, this prevents cross-cache attention from learning cross-cache correlations effectively. The reason is that the patch encodings are much closer, in terms of cosine similarity, to the frame encodings than other temporal cache encodings coming from other modalities. 
This results in high attention scores on the frame encodings, which overshadow the interactions between spatial cache and other temporal cache encodings. Therefore, we exclude the frame encodings in the cross-cache attention. §.§ Residual Connections Residual connections are commonly used in transformers <cit.>. We also add residual connections after self-attention blocks to mitigate the vanishing gradient problem as our network is even deeper than Omninet. Additionally, since we put self-attention blocks after cross-cache attentions, the model may lose some cross-cache signals learned previously, which are critical to the predictions in many cases. The residual connections keep the cross-cache signals and merge them with the outputs of self-attention blocks. We observe a significant improvement by adding residual connections. We also verify that the residual connections are active in inference by comparing the weights between the residual connections and the outputs of the self-attention blocks. §.§ Universal Architecture and Configurations S-Omninet is a universal architecture, i.e., we have one and only one model for all tasks. The configuration, such as the sizes of dense layers and the number of attention layers, is also the same across tasks. The dimension of embeddings D in both encoder and decoder is 512. The vision peripheral produces 14 × 14 feature maps and the patch size is 2 × 2. The self-attention blocks on the temporal cache have 6 layers and 8 heads, the same as Omninet. In other attention blocks, including self-attention blocks on other caches and all cross-cache attention blocks, we use 3 layers and 4 heads. Vision and language peripherals are pre-trained and fixed during training of S-Omninet. We train the structured peripheral along with the main architecture of the model. The reason is that different structured data can have very different patterns and it does not make much sense to use a structured peripheral that is pre-trained on totally different features. In addition, the structured peripheral is light and relatively easy to train along with the main model. For classification tasks, we add dense layers after the decoder as the prediction head. For frame generation tasks, we use the generator model in DCGAN <cit.> and modify the configurations to fit our feature and frame dimensions. Excluding the pre-trained peripherals and prediction heads, S-Omninet and Omninet has 125.4 million and 96.6 million trainable parameters, respectively. § DATASETS Social-IQ Social-IQ <cit.> is a video question answering dataset that contains 1,250 annotated videos, 7,500 questions and 52,500 answers. Each question is provided with 4 correct answers and 3 incorrect answers. All answers are sentences. The task is to predict whether an answer is correct given a video with a question. We extract video frames at 1fps <cit.> and each video sample consists of 55 frames on average. CMU-MOSI CMU-MOSI <cit.> is a multimodal human sentiment dataset. It consists of 2,199 video clips of faces during conversations. Each video clip is labeled with a sentiment score between -3 and 3. We evaluate the model performance on two tasks. One is MOSI-Sen, a binary classification task to predict whether the sentiment is positive or negative using video frames and transcripts. The transcripts, text translated from video's audio, are provided in the dataset. The other task is MOSI-Gen where we generate the next frame of a video clip, given previous frames and the transcript of this clip. 
We use accuracy for the classification task and Mean Absolute Error (MAE) for the frame generation task. Visual Question Answering The VQA v2.0 dataset <cit.> is a large visual question-answering dataset consisting of approximately 1.1 million (image, question) pairs with 13 million answers on MSCOCO images. Every question is associated with two similar images that result in two different answers. We evaluate the model performance on the provided test-dev set same as Omninet <cit.>. Structured & Visual Question Answering Due to the lack of a public multimodal dataset that contains structured data and multiple unstructured data, we create the S-VQA dataset which contains images, text and structured data. In practical business processes, this type of data is very often encountered which motivates this study. A single model for a given process serves different specific use cases. Unfortunately, the lack of established multi-modal approaches, combined with such data sets being proprietary means no well-curated data sets are available. Therefore, we create the S-VQA that contains a large number of multimodal samples. The key part is to create structured data that have a meaningful interaction with other modalities to simulate the real-world cases. The structured data are composed of both numerical and categorical features. We create it in such a way that the numerical features are correlated with the labels and categorical features are correlated with both the labels and the unstructured data. For numerical features, we randomly sample n_c vectors with the desired dimension and treat them as the centroids. The value of n_c is the same as the number of classes in the VQA v2.0 dataset. Given a covariance matrix, we sample random vectors around each centroid from Gaussian distributions. Vectors sampled from the same centroid are assigned with the same class label. A total of 3,500 clusters are generated to be matched with all classes. For categorical features, we first identify important elements (words in a sentence or regions of an image) in existing modalities. The importance of each element is determined by the attention score produced by the decoder of the vanilla Omninet during training. Then, we create categorical features by clustering the important elements. Each cluster is considered a categorical feature. In addition, we perturb the generated structured features to introduce correlations to the labels. More details are in Appendix <ref>. § RESULTS We build S-Omninet on top of Omninet by adding implementations of the cross-cache attention module, the patching module and the structured peripheral. For the vision peripheral, we use a pre-trained ResNet-152 model <cit.> with the last pooling layer and a few convolution layers removed to get the desired 14 × 14 feature maps. We use the same language peripheral as Omninet. We run experiments on NVIDIA GeForce RTX 2080 Ti GPUs. Table <ref> shows the performance comparison between our model and the baseline. On the VQA and S-VQA datasets, we observe an improvement of 1.83% and 1.01% on the accuracy, respectively. On the social-IQ dataset, our model achieves an accuracy of 66.9, which is 3.31% better than Omninet. The accuracy scores of both our model and Omninet are higher than 63.91, which is a baseline performance as reported in the original paper <cit.>. On MOSI-Sen and MOSI-Gen, S-Omninet is 4.22% better in accuracy on the sentiment prediction task and 2.73% better in MAE on the frame prediction task. 
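To relate these gains back to the mechanism, the following is a minimal sketch of one cross-cache attention stream from the Model section, Y_s = concat(CCA(X_s, X_p), CCA(X_s, X_t)), followed by late self-attention with a residual connection. PyTorch is assumed; the 4 heads and 3 self-attention layers follow the stated configuration, while the feature-wise concatenation, the linear merge back to dimension D and all names are simplifying assumptions rather than the authors' implementation.

```python
# Illustrative sketch of the structured-cache cross-cache attention stream.
import torch
import torch.nn as nn

class CrossCacheAttention(nn.Module):
    """Queries come from the destination cache; keys and values come from the source cache."""
    def __init__(self, dim=512, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(dim, dim)

    def forward(self, dest, src):
        attended, _ = self.attn(query=dest, key=src, value=src)
        return self.out(attended)                                # (B, T_dest, D)

class StructuredStream(nn.Module):
    def __init__(self, dim=512, heads=4):
        super().__init__()
        self.cca_sp = CrossCacheAttention(dim, heads)            # CCA(X_s, X_p)
        self.cca_st = CrossCacheAttention(dim, heads)            # CCA(X_s, X_t)
        self.merge = nn.Linear(2 * dim, dim)                     # assumed merge back to D
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.self_attn = nn.TransformerEncoder(layer, num_layers=3)

    def forward(self, X_s, X_p, X_t):
        y = torch.cat([self.cca_sp(X_s, X_p), self.cca_st(X_s, X_t)], dim=-1)
        y = self.merge(y)
        # Late self-attention; the residual connection keeps the cross-cache signal.
        return y + self.self_attn(y)

# Toy usage with a structured cache of 6 entries, 147 patch encodings and 20 temporal encodings.
stream = StructuredStream()
Y_s = stream(torch.randn(2, 6, 512), torch.randn(2, 147, 512), torch.randn(2, 20, 512))
```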
Figure <ref> shows the validation accuracy curves on VQA and validation loss curves on MOSI-Gen. S-Omninet has a similar converging behavior as Omninet. The curves on the other datasets demonstrate a similar pattern. However, since our model is slightly larger than Omninet, the actual training time is 23% longer per epoch than Omninet. §.§ VQA As shown in Table <ref>, our model shows significant improvements over Omninet. Omninet's performance on the VQA dataset was first reported as 55.3 <cit.>. With more tuning, we achieve an accuracy of 56.3 and use this model as the baseline. Our model improves the baseline by 1.83%, which demonstrates the benefits of adding cross-cache attention. We also test our model in a more challenging task, “two-question VQA,” which demonstrates the effectiveness of cross-cache attention more clearly. In this task, we randomly add an irrelevant question before or after the original question. For example, the textual input of the example in Figure <ref> is “What is this a collection of? Is the girl potty-trained?” The second sentence is the original question for this image and the first sentence is an irrelevant question. On this “two-question” VQA dataset, we argue that Omninet cannot tell which question is the relevant one due to the lack of interactions between caches. Since each cache is encoded separately, Omninet cannot use spatial information to identify the relevant question. Therefore, the performance of Omninet is substantially impacted and the accuracy drops from 51% to 41%. In contrast, our model with cross-cache attention shows a good ability to identify the relevant question. Figure <ref> shows an example of a VQA input and Figure <ref> shows the attention maps in cross-cache attention on this example. In the cross-cache attention CCA(X_p, X_t), image patches are encoded with the textual features. As shown in Figure <ref>, words in the irrelevant question, indexed from 0 to 8 on the x-axis, get low attention scores. As a result, our model shows an 8% improvement in accuracy over Omninet on this challenging task. Besides paying more attention to the correct sentence, cross-cache attention also stresses more relevance to more relevant words. We can see that in all 4 heads, the words “girl” and “potty” have a high attention score in many cases. Especially in the second head, the word “girl” and “potty” are linked with the regions that show the girl and potty. Note that, we mark grids in the original image to ease the visualization, but the grid indices are not strictly mapped to patch indices on the y-axis. The reason is that the patch embeddings used in cross-cache attention come from CNN-encoded feature maps. Each patch also contains information in its surrounding patches due to the convolution operations. §.§ Cross-cache Attention on Structured Data Omninet does not handle structured data. In order to train Omninet on S-VQA, we encode the structured data with one-hot encodings followed by a linear layer. Then the encoded structured data is concatenated with the output of the decoder. On the S-VQA dataset, Omninet achieves an accuracy of 61.1%, higher than the accuracy on the VQA dataset, as the structured data provides extra information about the labels. Our model further improves Omninet by 1.01% relatively. In the structured cache, the first encoding is the embedding of the whole structured feature vector including numerical and categorical features. 
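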
The second encoding is the embedding of categorical features generated from important spatial input signals. The rest are encodings of categorical features of important temporal inputs. Figure <ref> shows the attention maps on cross-cache attention CCA(X_p, X_s) and CCA(X_t, X_s), where the model encodes each cache with more attention on corresponding structured encodings. For example, the spatial cache is encoded with more attention on the second structured encoding and the temporal cache is encoded with more attention on the third and fifth structured encodings. It demonstrates the effectiveness of cross-cache attention in integrating correlated structured and unstructured data. §.§ Video Tasks Social-IQ and CMU-MOSI are both video datasets with different tasks. On classification tasks, we observe significant improvements against Omninet. Our model is 3.3% and 4.2% better than Omninet on Social-IQ and MOSI-Sen, respectively. The cross-cache attention modules enable caches to interact with each other, which helps the model locate related encodings in different caches as we see in the “two-question VQA” example. This module can be more useful on videos than images because there are multiple frames in a video clip and the model needs to figure out in which frame encodings are more relevant. Although Omninet has the Gated Multihead Attention, which makes the model focus more on important frames by increasing the attention scores on them, there are several limits to this mechanism. First, the importance of a frame is calculated from an attention module on the temporal cache, which includes the frame and word embeddings. The frame-level embedding is a highly encoded image feature, where detailed spatial information, which can be critical to deciding whether a frame is important, is missing. Second, once a frame is identified as an important frame, all encodings in this frame get a higher weight in the next attention. Because of that, a less important encoding in an important frame can have a higher weight than an important encoding in a less important frame. The cross-cache attention overcomes these problems by performing attentions in a finer granularity, i.e., directly on the encoding level. On the frame generation task, our model shows a 2% improvement on MAE and we observe that our model generates more eye appealing images, as shown in Figure <ref>. The image generated by our model is darker on the eyes, nose and mouth, which shows clearer boundaries of facial features. Both frames that are generated by Omninet and our model are not very crisp. This is often seen on transformer-based models with a simple image generator <cit.> (this work's focus is not on generating high resolution and sharp images). In spite of that, we still see that the image from our model has additional facial details than Omninet. § CONCLUSION In this work, we extend and improve Omninet by introducing cross-cache attention, integrating patch embeddings for vision inputs, and supporting structured data. We discuss the design choice of putting cross-cache attention before self-attentions. In addition, we study the impact of this design and demonstrate reasons it works. The proposed S-Omninet is shown capable of learning structured data of various lengths effectively with unstructured data. It demonstrates the effectiveness of cross-cache attention by showing a significant improvement over Omninet on several multimodal datasets. 
§ SYNTHETIC STRUCTURED DATA §.§ Categorical Features We start from an existing multimodal dataset, e.g., VQA 2.0 <cit.>, and create structured samples with categorical features so that they are correlated with samples of the existing modalities. We impose correlations between structured and unstructured samples as follows. First, we identify important elements of each existing modality. The importance of each element is determined by the attention score produced by the decoder of the vanilla Omninet during training. For each text input, we identify the P most important words. For each image, we find the Q most important regions (i.e., low-resolution pixels of the feature map produced by the peripheral). Then, we create categorical features given the important elements. For text data, we cluster important words and consider each cluster as a categorical feature. Each word is mapped to a category of a feature. Similar to text data, we cluster important regions and each cluster is considered a categorical feature. For each cluster, we further find sub-clusters and then map each sub-cluster to a category of a feature. For each VQA sample, we create structured features by assigning a category to each feature according to the important regions/words of the text data and image. If multiple words or regions belong to the same feature, we use the one with the higher importance score. A special value is assigned to represent an empty category. We generate 5 categorical features: one feature is correlated with the spatial inputs, and the remaining 4 features are correlated with the text inputs. We limit the number of categorical features to 5 in order to keep the rate of empty categories in a low range (less than 20%). §.§ Perturbing Structured Features We perturb structured features to introduce correlations with labels. We consider samples D of class k and feature X with N categories. We have the categorical distribution of X: x = (p(X=1), p(X=2), ..., p(X=N)), where p(X=j)=Count(j)/|D|. First, we generate a Dirichlet prior Dir(f(k,N)) depending on classes, where f(k,N) is a function that produces a unique category distribution over N categories for class k. For example, if we have 2 classes and a feature with 4 categories, we can have f(0,4)=(0.7,0.1,0.1,0.1) and f(1,4)=(0.1,0.7,0.1,0.1). If we have more classes than categories, we use one or two major categories to differentiate each class. For example, if we have 4 classes and a feature with 3 categories, we can have f(0,3)=(0.8,0.1,0.1), f(1,3)=(0.1,0.8,0.1), f(2,3)=(0.1,0.1,0.8) and f(3,3)=(0.45,0.45,0.1). We create a pool of combinations from the N categories, which contains 1-combinations and 2-combinations. For each (k,N) pair, we randomly select one combination of categories that has not yet been selected and mark its members as the major categories for this pair. We first select single-category combinations and then select combinations of two categories. Then we set f(k,N) based on the selected categories: we set 0.8 for the major categories and 0.1 for the others, and then normalize them so they sum up to one. Then we draw a sample y=(y_1,y_2,...,y_N) from Dir(f(k,N)) with ∑_i=1^N y_i = 1. Next, we perturb x with y as z = (x+y)/2. Finally, for each sample i in D, we draw a category from the distribution z and assign it to feature X.
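A minimal sketch of this generation procedure, assuming NumPy: the Gaussian clusters for numerical features, the 0.8/0.1 prior construction and the mixing z = (x + y)/2 follow the text above, while the cluster spread, sizes and function names are illustrative assumptions.

```python
# Illustrative sketch of the S-VQA synthetic structured-feature generation.
import numpy as np

rng = np.random.default_rng(0)

def numerical_features(n_classes, dim, samples_per_class, spread=0.1):
    """Gaussian clusters around random centroids; one centroid per class label."""
    centroids = rng.normal(size=(n_classes, dim))
    X = np.vstack([rng.normal(c, spread, size=(samples_per_class, dim)) for c in centroids])
    y = np.repeat(np.arange(n_classes), samples_per_class)
    return X, y

def class_prior(major, n_categories):
    """f(k, N): 0.8 on the major categories, 0.1 elsewhere, normalised to sum to one."""
    p = np.full(n_categories, 0.1)
    p[list(major)] = 0.8
    return p / p.sum()

def perturb_feature(categories, labels, k, major, n_categories):
    """Re-draw the categorical feature for samples of class k using z = (x + y) / 2."""
    mask = labels == k
    counts = np.bincount(categories[mask], minlength=n_categories)
    x = counts / counts.sum()                            # empirical category distribution
    y = rng.dirichlet(class_prior(major, n_categories))  # sample from Dir(f(k, N))
    z = (x + y) / 2
    categories[mask] = rng.choice(n_categories, size=mask.sum(), p=z)
    return categories

# Toy usage: 4 classes, 16-dimensional numerical features, one categorical feature with 3 categories.
X_num, labels = numerical_features(n_classes=4, dim=16, samples_per_class=50)
cats = rng.integers(0, 3, size=labels.size)
cats = perturb_feature(cats, labels, k=0, major=(0,), n_categories=3)
```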
http://arxiv.org/abs/2307.02546v1
20230705180003
Isospin-breaking effects in the three-pion contribution to hadronic vacuum polarization
[ "Martin Hoferichter", "Bai-Long Hoid", "Bastian Kubis", "Dominic Schuh" ]
hep-ph
[ "hep-ph", "hep-ex", "hep-lat", "nucl-th" ]
http://arxiv.org/abs/2307.00771v1
20230703062105
Resistive memory-based zero-shot liquid state machine for multimodal event data learning
[ "Ning Lin", "Shaocong Wang", "Yi Li", "Bo Wang", "Shuhui Shi", "Yangu He", "Woyu Zhang", "Yifei Yu", "Yue Zhang", "Xiaojuan Qi", "Xiaoming Chen", "Hao Jiang", "Xumeng Zhang", "Peng Lin", "Xiaoxin Xu", "Qi Liu", "Zhongrui Wang", "Dashan Shang", "Ming Liu" ]
cs.ET
[ "cs.ET" ]
§ INTRODUCTION The human brain is a highly efficient and adaptable system that effectively integrates and learns from diverse sensory inputs, often through the generalization of existing knowledge to new tasks. This so-called zero-shot transfer learning is incredibly energy-efficient and parallel, due to the coexistence of memory and computing within the extensively interconnected synapses and neurons, and the propagation of event-type spikes throughout the neural network <cit.>. Inspired by the human brain, neuromorphic hardware, including spiking neural network (SNN) accelerators and event-based dynamic vision and audio sensors <cit.>, aims to emulate the functionality of the brain and sensory neural networks, such as the retina and cochlea <cit.>. Despite these efforts, achieving a level of energy efficiency and zero-shot crossmodal intelligence comparable to the human brain remains a considerable challenge. This is due to obstacles encountered in both the hardware and software domains. Hardware-wise, transistor scaling is close to its physical limit, which has slowed down the Moore's-law scaling that has driven the development of complementary metal-oxide-semiconductor (CMOS) chips for decades. In addition, digital neuromorphic hardware faces the von Neumann bottleneck, characterized by frequent and massive data transfers between off-chip memory and processing units, resulting in large energy and time overheads <cit.>. From a software standpoint, training SNNs has historically been a challenging issue. The asynchronous and complex dynamics of spiking neurons are known for their non-differentiability. While surrogate gradients can approximate this non-differentiability at considerable training cost <cit.>, the performance typically does not surpass that of training an artificial neural network (ANN) and mapping its weights to the corresponding SNN. Besides, the latter approach <cit.> struggles to match the performance of the original ANNs, due to the absence of neural dynamics in ANN training <cit.>. Moreover, unsupervised local learning rules, such as the spike-timing-dependent plasticity (STDP) practiced by biological synapses, have been found to be ineffective for deep SNNs <cit.>. Furthermore, existing SNNs for event data predominantly depend on supervised learning with a large number of sample-label pairs, rather than capitalizing on pre-training and generalization from prior experiences to accomplish zero-shot transfer learning, a technique popularized by recent large-scale language and vision ANN models <cit.>. As such, the challenges in both hardware and software necessitate a novel neuromorphic computing paradigm for learning crossmodal event-driven signals. To address these challenges, we propose a hardware-software co-design that employs a hybrid analogue-digital system for a combined SNN-ANN model. On the hardware side, we develop a hybrid system that integrates analogue random resistive memory with a digital computer. The inherent stochasticity of resistive switching is harnessed to physically generate fixed, random and nanoscale resistors (resistive switches or memristors) and compute with simple physical laws. This method naturally overcomes the von Neumann bottleneck and achieves improved efficiency  <cit.>. Concurrently, the digital hardware enables fast and precise real-time learning. 
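As background for the training difficulty noted above (which the co-design below deliberately sidesteps by keeping the SNN weights fixed and random), here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron whose spike function is non-differentiable and is made trainable only through a surrogate gradient. PyTorch is assumed; the decay constant, threshold and surrogate shape are illustrative choices, not those of this work.

```python
# Illustrative sketch: LIF dynamics with a surrogate gradient for the spike nonlinearity.
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()              # Heaviside step: 0/1 spike

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Replace the zero-almost-everywhere derivative with a smooth sigmoid-based surrogate.
        sg = torch.sigmoid(4.0 * v)
        return grad_output * 4.0 * sg * (1.0 - sg)

def lif_step(x_t, v, beta=0.9, v_th=1.0):
    """One LIF update: leak, integrate the input current, spike and soft-reset."""
    v = beta * v + x_t
    spike = SurrogateSpike.apply(v - v_th)
    v = v - spike * v_th                                 # soft reset after a spike
    return spike, v

# Toy usage: run a random 20-step input current through 8 LIF neurons.
T, N = 20, 8
v = torch.zeros(N)
inputs = torch.rand(T, N, requires_grad=True)
spikes = []
for t in range(T):
    s, v = lif_step(inputs[t], v)
    spikes.append(s)
torch.stack(spikes).sum().backward()                     # gradients flow via the surrogate
```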
From a software perspective, the LSM <cit.> encoder is an SNN variant of reservoir computing that comprises fixed, random and recurrent synaptic connections, which can be naturally implemented on the analogue random resistive memory to directly process multimodal (e.g., visual and audio) event data. These spiking features are then accumulated as real-valued vectors and aligned by trainable ANN projection layers in a shared latent space using contrastive learning. This effectively brings matched visual-audio embedding pairs closer together while pushing non-matched embedding pairs apart <cit.>, addressing the SNN training difficulty and achieving zero-shot transfer learning. The synergistic combination of analogue random resistive memory, multimodal LSM encoders, and contrastive learning not only improves energy-area efficiency through in-memory computing but also leverages the intrinsic stochasticity of dielectric breakdown in generating random weights. This leads to the development of low-cost, nanoscale neuromorphic hardware capable of zero-shot learning for crossmodal event-driven data at substantially reduced learning complexity. In this article, we physically implement our hardware-software co-design with a 40nm resistive computing-in-memory macro. We showcase the effectiveness of LSM encoders using linear probing on representative event datasets N-MNIST <cit.> and N-TIDIGITS <cit.>, followed by illustrating the zero-shot transfer capability of the co-design on the multimodal event dataset. We achieve classification performance on par with software models, while showing a 23.32-fold improvement in energy efficiency compared to state-of-the-art digital hardware. Moreover, thanks to the random projections in LSM, the backward pass complexity is reduced by about 152.83-fold compared to the state-of-the-art spiking recurrent neural network (SRNN)-based CLIP <cit.> and Prototypical networks <cit.>. Our hardware-software co-design not only introduces a high-efficiency, fast, and precise learning solution for future compact edge neuromorphic systems but also enables zero-shot learning of crossmodal events in a brain-inspired manner. § HARDWARE-SOFTWARE CO-DESIGN Fig.<ref> depicts the hardware-software co-design, which utilizes a hybrid analogue-digital system to physically implement the SNN-ANN model. Software-wise, the SNN-ANN model primarily comprises a fixed and random multimodal LSM encoder and trainable projections, as illustrated in Fig.<ref>(a-b). The LSM encoder is an SNN with an input layer and a recurrent layer, both featuring biologically plausible leaky integrate-and-fire (LIF) neurons <cit.> to directly handle event data. These spiking neurons are randomly interconnected with fixed synaptic weights, which map inputs to a high-dimensional state space trajectory. This process generates diverse input signal representations, typically achieving greater linear separability when the trajectory is at the edge of chaos, also known as the "separation property" of LSM. Two ANN projection layers are utilized to map the distinct modalities' features, extracted by the LSM encoders, to the same dimension and measure their cosine distance. The projection layers' weights are optimized according to the contrastive loss, which prompts the model to distinguish between positive pairs (i.e., matching image-audio pairs) and negative pairs (i.e., non-matching image-audio pairs). 
This is accomplished by maximizing the similarity between positive pairs and minimizing the similarity between negative pairs, drawing inspiration from the success of CLIP model<cit.>. Hardware-wise, the hybrid analogue-digital computing platform (see Fig.1 and Table 5 in Supplementary Information for the system design) has an analogue core, the resistive memory-based in-memory computing macro (Fig.<ref>c). The macro consists of emerging nanoscale TaN/TaO_x/Ta/TiN resistive switches (Fig.<ref>d-e) integrated with CMOS using the backend-of-line process on a 40nm technology node tape-out, forming a 512×512 crossbar array. This analogue core implements both the input and recurrent layers of the LSM (Fig.<ref>f), accounting for the majority of computational cost. The LSM is interfaced with ANN projection layers for contrastive learning that are implemented digitally. The inherent stochasticity in resistive memory programming was harnessed to create random conductance matrices for representing synaptic weights in the LSM. Specifically, all as-deposited cells in a resistive memory array receive a uniform programming voltage, subject to current compliance enforced by selecting transistors to prevent hard breakdown. The resulting differential conductance map of a 456×201 subarray is depicted in Fig.<ref>g. The random synaptic weights are partially shared by the event-driven audio and image LSM encoders (refer to Fig.2 in Supplementary Information for details). Fig.<ref>h shows the corresponding variance of 30,000-cycle array conductance reading. The reading variance for most devices remains below 0.14μS, indicating decent data retention. Fig.<ref>i illustrates that the conductance of resistive memory cells follows a quasi-normal distribution, which can be tailored by adjusting electrical operation parameters, thus enabling flexible hardware implementations of the LSM-based backbone (see Fig.3 in Supplementary Information for the distribution of weights). The stability of such random resistive memory is highlighted by the repeated reading of 40 randomly selected cells within the resistive memory, as shown in Fig.<ref>j. § SUPERVISED N-MNIST CLASSIFICATION First, we evaluate the performance of the LSM encoder using a supervised learning task on the N-MNIST dataset <cit.> (see Table 2 in Supplementary Information for details of the dataset), which consists of spiking versions of handwritten digits represented by positive and negative events. First, we center crop the 34×34 frames to 16×16 frames, as illustrated by the events for digit "7" in Fig.<ref>a. This represents the asynchronous and sparse spike stream fed to the LSM encoder at various time steps. The LSM encoder maps the input spike stream to the spike trains of a large population of recurrent neurons as shown in Fig.<ref>b. Due to the near-chaotic dynamics, the event stream becomes more linearly separable. Fig.<ref>c visualizes the membrane potentials of selected LSM neurons upon the input spike stream, resulting from the synergy of both excitatory and inhibitory synaptic connections. LSM neuron spikes are aggregated by digital counters and interfaced with a downstream ANN classification head implemented digitally. Fig.<ref>d displays the distribution of spiking number embedding of test samples using t-distributed stochastic neighbor embedding (t-SNE) for dimensionality reduction, demonstrating discriminability for most input classes due to LSM's separability. 
This is further evident in the confusion matrix shown in Fig.<ref>e, which is characterized by dominant diagonal elements and high class-wise classification accuracy. Fig.<ref>f presents the comparison study, where the hardware classification accuracy (89.66% with the LSM encoder and ANN classification head, or LSM-ANN) is close to the software simulation (89.11%). Additionally, Fig.<ref>f shows that the model's performance is comparable to the fully trainable counterparts in software, such as those with spiking/non-spiking recurrent neural network (SRNN/RNN) encoders and ANN/SNN classification heads (see Notes 3-1 and 3-2 in Supplementary Information for the details of SRNN and RNN). Moreover, LSM's performance scales well with the hidden dimension and the input size, and is robust to noise disturbance (refer to Fig.4 in Supplementary Information for the hyperparameter impact, Fig.6 in Supplementary Information for noise impact). To demonstrate the advantages of LSM over training complexity, we count the multiply-accumulate (MAC) operations of different layers of the model during training, as shown in Fig.<ref>g. The LSM's training complexity is considerably lower than that of SRNN-SNN (by a factor of 817.95) and SRNN-ANN (by a factor of 802.18) primarily due to the fixed and random weights of the LSM encoder. Additionally, the training cost of the ANN classification head is lower than that of the SNN one, mainly because a simple accumulator is used for the SNN-ANN interface. The corresponding energy estimations are depicted in the two panels of Fig.<ref>h. The overall energy consumption is estimated to be around 4.4μJ for a conventional digital system and approximately 0.14μJ for a projected hybrid analogue-digital system using 40nm technology node. The breakdown of energy consumption for different layers is depicted in the left panel. It is observed that the energy consumption of the conventional digital system is mainly attributed to the LSM, accounting for approximately 4.39μJ, which is 31.29-fold that of the hybrid analogue-digital system. Right panel shows the energy consumption of analogue and digital circuits. Overall, our hybrid analogue-digital system demonstrates a 29.91-fold improvement in energy consumption compared to the state-of-the-art digital hardware (see Table 6 in Supplementary Information for the detailed energy breakdown), thanks to the energy efficient resistive in-memory computing. § SUPERVISED N-TIDIGITS CLASSIFICATION In addition to event-based vision tasks, we also evaluate the performance of the LSM encoder on a representative audio classification problem using the N-TIDIGITS dataset <cit.> (see Table 2 in Supplementary Information for details). This dataset consists of audio recordings of spoken digits, which are represented by events across 64 frequency bands, as shown in Fig.<ref>a. For simplicity, each sample is divided into 129 time steps and input into the LSM encoder, as illustrated in Fig.<ref>b. The LSM encoder then outputs a high-dimensional event stream, which is more linearly separable, as depicted in Fig.<ref>c. The membrane potentials of selected LSM neurons are displayed in Fig.<ref>d, exhibiting different trajectories and thus spiking patterns between neurons. The 3D visualization of the distribution of spike number embeddings of test samples, using t-SNE for dimensionality reduction, is presented in Fig.<ref>e. Samples from the same category are clearly clustered, resulting from the nonlinear dynamics of LSM. 
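To make the input format concrete, the sketch below bins a raw event stream of (timestamp, channel) pairs into the 129×64 binary frames consumed by the LSM encoder; the synthetic event list and the fixed sample duration are assumptions for illustration only.

import numpy as np

N_STEPS, N_BANDS = 129, 64                    # time steps and cochlea frequency bands
DURATION = 1.0                                # assumed sample length in seconds

# hypothetical events: (time in seconds, frequency-band index)
events = [(0.01, 3), (0.02, 3), (0.10, 17), (0.42, 40), (0.95, 63)]

frames = np.zeros((N_STEPS, N_BANDS), dtype=np.uint8)
for t, ch in events:
    step = min(int(t / DURATION * N_STEPS), N_STEPS - 1)   # quantise time onto the grid
    frames[step, ch] = 1                                    # one binary spike per (step, band)

# each row of `frames` is one 64-dimensional spike vector fed to the LSM at that step
print(frames.sum(), "events mapped onto", frames.shape, "binary frames")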
Fig.<ref>f shows the experimental confusion matrix, which, similar to the previous case, is dominated by diagonal elements, confirming high classification accuracy. The comparison study in Fig.<ref>g shows that the hardware and software classification accuracies (70.11% and 70.79%) are close to each other and to the performance of SRNN/RNN encoders with SNN/ANN classifiers. Also, LSM's performance, like in the previous case, is robust to both input noise and read noise, while scaling with the hidden dimension (refer to Fig.6 in Supplementary Information for noise impact simulation and Fig.5 in Supplementary Information for hyperparameter analysis). Fig.<ref>h displays the training-related MAC operations for different layers of the model. Similar to the previous case, LSM's training complexity, with about 13K operations, is significantly lower than that of SRNN-SNN (by a factor of 1102.92) and SRNN-ANN (by a factor of 1061.60). This is due to the fixed random weights of the LSM encoder and the use of simple accumulators for the LSM-ANN interface. Fig.<ref>i compares the inference energy consumption across various hardware platforms, including a projected hybrid analogue-digital system on a 40nm technology node and the latest digital hardware (refer to Table 7 in Supplementary Information for the detailed energy breakdown). Notably, the estimated inference energy for the digit "7" on the hybrid analogue-digital system is 0.37μJ, about 22.07-fold lower than that of a fully digital implementation. This is because resistive in-memory computing significantly reduces the matrix multiplication cost of the LSM from 8.28μJ down to 0.36μJ, as demonstrated in the left panel. § ZERO-SHOT LEARNING MULTIMODAL EVENT DATA We then develop the zero-shot learning model for multimodal event data by combining resistive memory-based analogue LSM encoders with digital ANN projection layers. The model is trained using contrastive learning for event visual and audio signals. As shown in Fig.<ref>a, for simplicity, the same resistive memory-based LSM encoder receives two input streams: one for N-MNIST images and the other for the corresponding N-TIDIGITS audios. The encoded spiking features are accumulated by counters, producing latent vectors of the same dimension. The ANN projection layers are then optimized using the contrastive loss. We use images "1" to "7" from N-MNIST and audios "One" to "Seven" from N-TIDIGITS for training. After that, we test the zero-shot transfer capability by querying the model with unseen audios ("Eight" and "Nine") as well as images ("8" and "9") to reveal the generalization capability without finetuning the projection layers. The t-SNE distributions of query samples of both seen and unseen classes after LSM and projection are shown in Fig.<ref>b. The projected features of different classes are distinctively clustered for query samples from both seen and unseen classes. Additionally, the same class is effectively clustered across different modalities, as a result of multimodal contrastive learning. Fig.<ref>c presents the accuracy of audio-search-image for query samples of both unseen ("8" and "9") and seen classes ("1" through "7") (refer to Tables 3-4 in Supplementary Information for details of the dataset). The LSM-ANN model, which does not receive additional training, achieves performance similar to that of the SRNN-based fully trainable model.
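The audio-search-image protocol itself reduces to a nearest-neighbour comparison in the shared latent space. The sketch below shows one simple variant, classifying an audio query by the image class with the highest average cosine similarity; the embeddings and class names are synthetic placeholders, not outputs of the actual model.

import numpy as np

rng = np.random.default_rng(1)
D = 64                                         # assumed projection dimension
classes = ["eight", "nine"]                    # unseen classes queried at test time

# synthetic projected embeddings standing in for LSM + projection outputs
img_bank = {c: rng.normal(size=(20, D)) for c in classes}   # gallery images per class
audio_query = rng.normal(size=(D,))                         # one projected audio query

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# audio-search-image: pick the class whose image embeddings are most similar on average
scores = {c: np.mean([cosine(audio_query, v) for v in vecs])
          for c, vecs in img_bank.items()}
pred = max(scores, key=scores.get)
print("predicted class:", pred)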
Specifically, although the LSM-ANN model has lower accuracy than the fully trainable model on the training task ("1" through "7"), it achieves a zero-shot transfer classification accuracy of 88% (87.5%) in simulation (experiment), on par with that of the trainable SRNN-based CLIP and Prototypical networks (see Fig.10 in Supplementary Information for details on training the Prototypical network). The zero-shot transfer performance of the LSM-ANN is also corroborated by the dominant diagonal elements of the confusion matrix, as shown in Fig.<ref>d. Fig.<ref>e showcases the training costs for SRNN-based CLIP and Prototypical networks, which require approximately 22.7 million and 0.678 million MAC operations, respectively. In comparison, our co-design demonstrates a remarkable 152.83-fold reduction in training complexity, thanks to the fixed random weights of the LSM and the efficient SNN-ANN interface. Fig.<ref>f reports the inference energy of the LSM-ANN model when processing a single event image and audio sample. In contrast to digital computers, the inference energy on the projected hybrid analogue-digital platform is 0.545μJ (0.383 μJ and 0.162 μJ for image and audio inputs, respectively), resulting in a 23.34-fold enhancement in energy efficiency thanks to the resistive in-memory computing. § DISCUSSION In summary, we demonstrate a hardware-software co-design for sparse and asynchronous event data learning on supervised and zero-shot transfer tasks. Hardware-wise, the stochasticity of resistive switching is leveraged to produce low-cost and scalable random resistive memory that physically implements the weights of an LSM, featuring in-memory computing with large parallelism and high efficiency that overcomes the von Neumann bottleneck and the slowdown of Moore's law. This is augmented by digital hardware for precise and fast ANN weight tuning. Software-wise, contrastive learning with the LSM-ANN model not only takes advantage of the physical random projections enabled by random resistive memory arrays in performing multimodal event data embedding, but also substantially reduces the training difficulty, thanks to the simple ANN projections, in implementing zero-shot transfer learning by generalizing existing knowledge. § METHODS §.§ Fabrication of Resistive Memory Chips The fabricated resistive memory array has a 1T1R structure using the 40nm technology node. Each resistive memory cell is built between the metal 4 and metal 5 layers of the backend-of-line process, consisting of bottom and top electrodes (BE and TE) with a transition-metal oxide dielectric layer. The via of the BE is patterned by photolithography and etching, and filled with TaN by physical vapor deposition. Above the polished BE via is a 10nm TaN buffer layer. Afterward, a 5nm Ta layer is deposited and oxidized to form an 8nm TaOx dielectric layer. Finally, 3nm Ta and 40nm TiN layers are sequentially deposited by physical vapor deposition to form the TE. After fabrication, the remaining interconnection metals are deposited using the standard logic process. The cells in the same row share BE connections, while those in the same column share TE connections, comprising a 512×512 crossbar array. After 30 minutes of post-annealing at 400 °C in a vacuum environment, the 40nm resistive memory chip shows a high yield and strong endurance performance.
§.§ The Hybrid Analogue–Digital Computing System The hybrid analogue-digital computing system integrates a 40nm resistive memory computing-in-memory chip and a Xilinx ZYNQ system-on-chip (SoC) on a printed circuit board (PCB). For signal inputs, the system offers parallel 64-way analogue voltages generated via an 8-channel digital-to-analog converter (DAC80508, TEXAS INSTRUMENTS, 16-bit resolution), ranging from 0 V to 5 V. For signal collection, the summed currents are converted to voltages by trans-impedance amplifiers (OPA4322-Q1, TEXAS INSTRUMENTS) and read out by an analog-to-digital converter (ADS8324, TEXAS INSTRUMENTS, 14-bit resolution). Both the analogue and digital conversions are integrated onboard. When performing vector-matrix multiplications, a DC voltage is applied to bit lines of the resistive memory chip through a 4-channel analogue multiplexer (CD4051B, TEXAS INSTRUMENTS) with an 8-bit shift register (SN74HC595, TEXAS INSTRUMENTS). The multiplication result carried by the current from the source line is converted to voltages and passed to the Xilinx SoC for later processing. §.§ LSM-based Supervised Classification The crossbar array is logically partitioned into two groups of conductance matrices (see Fig.2 in Supplementary Information for details), which are employed for N-MNIST and N-TIDIGITS recognition and share the same resistive memory array area. Each group of conductances can be divided into two matrices, G_I ∈ℝ^h× U and G_R ∈ℝ^h × h, where U and h denote the dimension of the input feature vector and the number of recurrent neurons, respectively. Specifically, U=256 for N-MNIST, U=64 for N-TIDIGITS, and h=200 for both datasets. Conductances G_I and G_R are used to implement the weight values of the input weights W_I and the recurrent weights W_R of the LSM in Eq. (<ref>). As depicted in Fig.1, the model comprises an LSM backbone and an ANN readout head. The LSM is a spiking variant of recurrent neural networks with random weights, first proposed by Maass et al. <cit.>. LSMs combined with biologically plausible LIF neurons have demonstrated state-of-the-art performance in addressing vision, audio, and control problems <cit.>. §.§.§ LSM Backbone The LSM consists of an input layer and a recurrent layer to extract spiking features from raw event inputs using fixed and random synaptic weights. This is achieved experimentally by utilizing the random conductance values of the resistive memory. At time t, the incoming synaptic current I to the i-th recurrent neuron is the weighted summation of the U input spikes θ_α and the h recurrent input spikes θ_β, I_i(t) = ∑_α=1^U w_(α, i)∗θ_α(t) + ∑_β=1^h w_(β, i)∗θ_β(t) , where w_(α, i) and w_(β, i) are the randomly initialized synapses connecting the i-th recurrent neuron with the α-th input channel and the β-th recurrent neuron. These synapses remain fixed during training. According to the LIF neuron model, the dynamics of the membrane potential follow d u(t)/d t = (u_rest - u(t))/τ_mem + I(t)/c_mem, where τ_mem, c_mem, and u_rest are constants representing the membrane's leak time constant, capacitance, and resting potential, respectively. Once the membrane potential of the i-th recurrent neuron exceeds the firing threshold u_th, the neuron generates an action potential θ_i(t), with θ_i(t) = 1 if u(t) ≥ u_th and 0 otherwise. §.§.§ Counter and ANN Readout Map The counter accumulates asynchronous spiking features and produces a synchronous signal o_i for the i-th neuron over a time window T, o_i = ∑_t=1^T θ_i(t) .
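A minimal NumPy sketch of these dynamics is given below: random fixed matrices stand in for the conductance-derived weights G_I and G_R, an explicit-Euler step approximates the membrane equation, and a counter accumulates the emitted spikes. The numerical constants (time step, thresholds, time constants) and the hard reset after firing are illustrative assumptions, not reported parameters.

import numpy as np

rng = np.random.default_rng(42)
U, h, T = 64, 200, 129                  # input channels, recurrent neurons, time steps

# fixed random weights, standing in for the differential conductances of the macro
W_I = rng.normal(0.0, 0.1, size=(h, U))     # input synapses (analogue of G_I)
W_R = rng.normal(0.0, 0.1, size=(h, h))     # recurrent synapses (analogue of G_R)

# assumed LIF constants
dt, tau_mem, c_mem, u_rest, u_th = 1.0, 20.0, 1.0, 0.0, 0.5

u = np.full(h, u_rest)                      # membrane potentials
spikes = np.zeros(h)                        # recurrent spikes theta_beta from the last step
counts = np.zeros(h, dtype=int)             # counter outputs o_i

inputs = (rng.random((T, U)) < 0.05).astype(float)   # synthetic sparse input spike train
for t in range(T):
    I = W_I @ inputs[t] + W_R @ spikes               # synaptic current I_i(t)
    u += dt * ((u_rest - u) / tau_mem + I / c_mem)   # explicit-Euler membrane update
    spikes = (u >= u_th).astype(float)               # threshold crossing emits a spike
    u = np.where(spikes > 0, u_rest, u)              # reset fired neurons (assumption)
    counts += spikes.astype(int)

print("spike-count embedding shape:", counts.shape)   # the vector passed to the ANN head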
The ANN readout map receives the accumulated neural action potentials from counters and infers labels at a predefined time. Structurally, the readout map is a simple classifier, typically comprising a fully connected layer. In contrast to the SNN part, the weights and biases of the ANN layer are optimized using gradient descent. The functions of the LIF neuron and the ANN readout map are implemented on a computer. All hyperparameters (e.g., firing threshold, decay, time window) are optimized by grid searching the hyperparameter space to maximize hardware performance (see Fig.4 and Fig.5 in Supplementary Information for hyperparameter analysis). §.§ LSM-based Contrastive Learning for Zero-Shot Transfer Learning on Multimodal Event Data Contrastive learning utilizes a shared LSM structure for both the N-MNIST and N-TIDIGITS datasets, with input node sizes U set to 256 and 64, respectively, and a recurrent feature size h of 200. Features are processed through LIF neurons with various hyperparameters and subsequently mapped to the contrastive learning space using a single-layer ANN-type projection layer. The projection dimension is set to 64, as depicted in Fig.<ref>. The LSM structure implementation relies on a shared resistive memory array, while the projection layer function is executed on commercial digital hardware. The projection layer parameters are updated by the contrastive loss. Specifically, a minibatch of N pairs of vision and audio inputs is randomly sampled. This establishes the contrastive prediction task on LSM encoder pairs of vision features z_v and audio features z_a, resulting in pairwise similarities, s_v,a = z_v^T z_a/(‖ z_v‖‖ z_a‖), where v ∈{1, … , N} and a ∈{1, … , N} are indices of projected vision and audio features. The contrastive loss L_c can be defined as, L_c = -1/2(∑_v=1^N t_v log(p_v,a) + ∑_a=1^N t_a log(p_a,v) ) , where p_v,a (p_a,v) denotes the probability of s_v,a (s_a,v) after the softmax function, and t_v (t_a) represents the one-hot target for the vision (audio) embedding, whose positive index is its matched audio (vision) sample. In other words, for the v-th vision feature the positive target is simply its paired index, so over a minibatch the targets run through {1,2,…,N}; the targets t_a for the audio features are defined analogously. § ACKNOWLEDGEMENT This research is supported by the National Key R&D Program of China (Grant No. 2018YFA0701500), the National Natural Science Foundation of China (Grant Nos. 62122004, 61874138, 61888102, 61821091), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB44000000), Hong Kong Research Grant Council (Grant Nos. 27206321, 17205922), the Innovation and Technology Commission of Hong Kong (Grant No. MHP/066/20). This research is also partially supported by ACCESS – AI Chip Center for Emerging Smart Systems, sponsored by Innovation and Technology Fund (ITF), Hong Kong SAR. § COMPETING INTERESTS The authors declare no competing interests.
http://arxiv.org/abs/2307.02291v1
20230705134231
Focusing on what to decode and what to train: Efficient Training with HOI Split Decoders and Specific Target Guided DeNoising
[ "Junwen Chen", "Yingcheng Wang", "Keiji Yanai" ]
cs.CV
[ "cs.CV" ]
Focusing on what to decode and what to train: Efficient Training with HOI Split Decoders and Specific Target Guided DeNoising Junwen Chen Yingcheng Wang Keiji Yanai Department of Informatics, The University of Electro-Communications, Tokyo, Japan chen-j@mm.inf.uec.ac.jp, wang-y@mm.inf.uec.ac.jp, yanai@cs.uec.ac.jp August 1, 2023 =========================================================================================================================================================================================================== Recent one-stage transformer-based methods achieve notable gains in the Human-object Interaction Detection (HOI) task by leveraging the detection of DETR. However, the current methods redirect the detection target of the object decoder, and the box target is not explicitly separated from the query embeddings, which leads to long and hard training. Furthermore, matching the predicted HOI instances with the ground-truth is more challenging than object detection, simply adapting training strategies from the object detection makes the training more difficult. To clear the ambiguity between human and object detection and share the prediction burden, we propose a novel one-stage framework (SOV), which consists of a subject decoder, an object decoder, and a verb decoder. Moreover, we propose a novel Specific Target Guided (STG) DeNoising strategy, which leverages learnable object and verb label embeddings to guide the training and accelerates the training convergence. In addition, for the inference part, the label-specific information is directly fed into the decoders by initializing the query embeddings from the learnable label embeddings. Without additional features or prior language knowledge, our method (SOV-STG) achieves higher accuracy than the state-of-the-art method in one-third of training epochs. The code is available at <https://github.com/cjw2021/SOV-STG>. § INTRODUCTION Recent Human-Object Interaction (HOI) detection studies are mainly built on the object detection framework. The most widely used datasets, HICO-DET <cit.> and V-COCO <cit.>, share the same object categories as the MS-COCO dataset <cit.>. Following the definition of the HOI instance { B_s, (B_o, O), V }, which is a tuple of the subject (human) box B_s, the object box B_o with class O, and the verb class V, detecting methods are split into one-stage and two-stage methods. In the beginning, a multi-stream architecture built on top of a CNN-based object detector is commonly adopted in the two-stage methods <cit.>. Multi-stream methods resolve the HOI detection problem in split parts and have a great potential to improve. By introducing the human pose information <cit.>, the language priors <cit.>, or graph structure <cit.>, CNN-based two-stage methods achieve considerable accuracy. On the other hand, CNN-based one-stage methods <cit.> leverage interaction points to detect possible interaction between the subject and object and achieve promising performance. The attention mechanism of the transformer is more flexible than the CNN architecture in handling the relationships of features at different locations in the feature map and extracting global context information <cit.>. At first, the transformer-based methods <cit.> show the advantage of the attention mechanism by adopting DETR <cit.> in the HOI detection task. QPIC <cit.> and HOITrans <cit.> follow the same training pipeline as the DETR by viewing the HOI detection problem as a set prediction problem. 
Without the matching process in one-stage and two-stage CNN-based methods, QPIC and HOITrans adopt a compact encoder-decoder architecture to predict the HOI instances directly. However, the compact architecture with a single decoder binds the features for subject and object localization and interaction recognition together. As a result, even leveraging the DETR model pretrained on the COCO dataset, the finetuning of QPIC and HOITrans still needs 150 and 250 epochs, respectively. Subsequent one-stage methods <cit.> improve the single-decoder design by disentangling the object localization and the interaction recognition in a cascade manner. Specifically, GEN-VLKT <cit.> improves the cascade decoder design of CDN <cit.> by introducing two isolated queries of humans and objects in an instance decoder and fusing the human and object features layer by layer in an interaction decoder. Even though GEN-VLKT leverages the language model <cit.> to guide the training and achieves a notable improvement, the subject and object detection are still tangled in the instance decoder. Consequently, the training is still hard and slow, requiring 90 epochs. On the other hand, the two-stage transformer-based methods <cit.> stack additional interaction pair detection modules on top of the object decoder without modifying the subject and object detection part. Thus, two-stage methods can focus on filtering the interaction pairs and achieve higher accuracy than the one-stage transformer-based methods with fewer training epochs. To fill the gap between one-stage and two-stage methods, we focus on improving two aspects: 1) how to concentrate on decoding specific targets and 2) how to accelerate the training convergence. For the first aspect, we revisit the decoding pipeline of the transformer-based methods. One-stage methods always redirect the decoding target of the decoder pretrained on the object detection task, which leads to slow training convergence. To this end, according to the definition of the HOI instance, we propose a new framework (SOV), which fully splits the decoding process into three parts: Subject detection, Object detection, and Verb recognition. Specifically, the object decoder, subject decoder, and verb decoder are assigned to decode the object, subject, and verb class, respectively. By doing so, the object decoder maintains the object detection capability from the beginning of the training, which accelerates the training convergence. Furthermore, we introduce a Subject-Object (S-O) attention module for the verb decoder to fuse the subject and object information and improve the verb representation learning capabilities. In Figure <ref>, we compare the training convergence with recent SOTA methods. From the results, SOV takes advantage of the balanced decoding pipeline and achieves notably high accuracy at the early stage of the training. To accelerate the training convergence, we introduce a novel Specific Target Guided (STG) denoising strategy for HOI detection with our proposed framework, which explicitly reconstructs the ground-truth HOI instances and enables the model to learn what to train. Specifically, we explicitly define the subject anchor box and object anchor box as the anchor box priors. The learnable anchor boxes are used as the input of the decoder to guide the feature extraction; by adding noise to the anchor box priors, the subject and object decoders directly obtain clear positional denoising targets from the input.
Moreover, we introduce a new Adaptive Shifted Minimum Bounding Rectangle (ASMBR) to generate the verb anchor box for the verb decoder from the output boxes of the subject and object decoder. For the label denoising, we define two kinds of learnable label embeddings for the object label priors and the verb label priors. In this way, the model acquires label-specific information from the label priors both from the training and inference stage. In Figure <ref>, we illustrate the training convergence of SOV and QPIC with STG, and the results show that our STG strategy effectively accelerates the training convergence before the learning rate drops and finally improves the performance. In summary, our contributions are mainly in two aspects: (1) we propose a novel one-stage framework (SOV) to enable the model to concentrate on what to detect and what to recognize; (2) we propose a novel training strategy (STG) to allow the model to learn the positional and label-specific information from the ground-truth. With the SOV framework design and the STG denoising training strategy, we achieve a new state-of-the-art performance on the HOI detection benchmark with 3× fewer training epochs (30 epochs on HICO-DET) than the current state-of-the-art method. § RELATED WORK For one-stage methods, how to extract the interaction information under a predefined representation of the interaction region is a key issue. To improve the detection efficiency, one-stage methods adopt heuristic representations of the interaction region. Learning to localize the interaction region. Before the transformer-based methods represent the HOI detection as a set prediction problem, the difficulties for one-stage methods <cit.> lie in how to aggregate the interaction information from a proper region and allocate it to a pair of subject and object. PPDM <cit.> and IP-Net <cit.> use the interaction points and vectors from heatmaps to represent the interaction and require a post-process to match the interaction and the pair of subject and object. UnionDet <cit.> predicts the union box to represent the interaction and matches the union box with the subject and object pair. Predicting interactions by point priors. Transformer-based methods <cit.> also adopt different ways to represent the HOI instance according to the attention mechanism of the transformer decoders. A simple way is to use query embedding to represent all the elements of the HOI instance <cit.>. However, the query embedding is learned to represent the localization and recognition information simultaneously, leading to slow convergence and low accuracy. Subsequent studies <cit.> attempt to leverage the deformable attention mechanism <cit.> to guide the decoding by reference points. In Figure <ref>, QAHOI <cit.> and FGAHOI <cit.> views the deformable transformer decoder's reference point as the HOI instance's anchor and uses the anchor to guide the subject and object detection. Although QAHOI and FGAHOI split the embedding for reference points from the HOI query embeddings, the HOI query embeddings are still used to predict all the elements of the HOI instance. In Figure <ref>, MSTR <cit.> proposes to use the subject, object, and context reference points to represent the HOI instance and predict the subject, object, and verb based on the reference points. The context reference point is defined as the center of the subject and object reference point, which follows the idea of the interaction point <cit.>. 
Nevertheless, the query embedding in MSTR is used to predict the final boxes and labels of the HOI instance and still suffers from ambiguous representations. Besides, QAHOI and MSTR use x-y coordinates as the positional priors to guide the decoding, while the box size priors are not considered. Learning from the ground-truth. Both the studies of CNN-based one-stage <cit.> and two-stage <cit.> methods show that simply using the ground-truth object detection results promotes the final performance a lot. For the transformer-based method, DOQ <cit.> introduces the oracle queries to implicitly encode the ground-truth boxes of human-object pairs and the object labels, and guide the decoder to learn to reconstruct the ground-truth HOI instances. In this way, DOQ distills the knowledge of the ground-truth information into the decoder and improves the performance and training convergence of QPIC and CDN. However, the oracle queries implicitly bind the detection and recognition, which limits the acceleration effect of using the ground-truth information, thus, DOQ needs 80 epochs to converge. Recently, in the object detection task, DAB-Deformable-DETR <cit.> formulates explicit learnable anchor boxes as the box queries to improve the connection between queries and features and accelerate the training convergence. Based on DAB-Deformable-DETR, DN-DETR <cit.> introduces the denoising strategy to further improve training efficiency and detection performance. Motivated by DN-DETR, our SOV-STG extends the definition of learnable anchor boxes to represent the HOI instance, and we introduce a HOI specific denoising strategy. However, different from DN-DETR, the label denoising queries of our STG are generated from the label priors which can be used for both denoising and inference. § HOI EFFICIENT DECODING AND TRAINING Figure <ref> shows the overall architecture of our framework. In this section, we first introduce the HOI efficient decoding architecture, which includes the design of the verb box in Section <ref>, the split decoders in Section <ref> and the initialization of the label queries in Section <ref>. Then, the STG denoising training strategy built on the efficient decoding architecture is introduced in Section <ref>. Finally, the training and inference details are presented in Section <ref>. §.§ Predicting Interactions by Anchor Box Priors To clarify the query embeddings for specific usage, our SOV framework directly uses learnable subject and object anchor boxes to predict the subject and object boxes. The anchor boxes are updated layer by layer during the decoding process, and the subject and object boxes from the last layer are used to form the verb box. As shown in Figure <ref>, we introduce the adaptive shifted minimum bounding rectangle (ASMBR) to generate the verb box while considering the spatial relationship between the subject and object boxes. Unlike the UnionDet, which uses the union box to guide the verb recognition, the verb box of SOV is not learned from any additional module but directly from the subject and object boxes. With the intention of balancing the attention between the subject and object, we shift the center of the MBR to the center of the subject and object boxes. Considering the boxes will overlap with each other, we shrink the width and height of the MBR according to the spatial relationship between the two boxes. 
Finally, the verb box constrains the interaction region for the sampling points of the deformable attention and extracts interaction information from specific subject and object pairs. As shown in Figure <ref>, given the final subject box B_s=(x_s, y_s, w_s, h_s) and object box B_o=(x_o, y_o, w_o, h_o), where (x,y) indicates the box center, the ASMBR (verb box) is defined as: B_v = ((x_s + x_o)/2, (y_s + y_o)/2, w_v, h_v), where w_v = (w_s + w_o)/2 + |x_s-x_o| and h_v = (h_s + h_o)/2 + |y_s-y_o|. §.§ HOI split decoders To clarify the decoding target, the design of the split decoders is crucial for our framework. The split decoding design is similar to the multi-stream detection design of two-stage HOI detection methods <cit.>. Two-stage methods use several branches to aggregate different features for better interaction recognition; in our case, the verb decoder also benefits from the two streams of subject and object features. Furthermore, the object decoder maintains the same architecture as the model pretrained on the COCO dataset, which helps to extract fine object features from the beginning of the finetuning on HOI detection. Subject Decoder and Object Decoder. As in QAHOI <cit.> and MSTR <cit.>, we leverage a CNN backbone and a deformable transformer encoder <cit.> to extract the multi-scale global features f_g∈ℝ^N_g× D, where N_g is the total number of pixels of the multi-scale feature maps and D is the hidden dimension of the embeddings in the whole transformer architecture. As shown in Figure <ref>, the global features are fed into the subject and object decoder with the learnable anchor boxes. To maintain the detection capability of the object detector, the object decoder with the feed-forward heads is the same as the one trained in the detection task. Furthermore, we clone the object decoder to initialize the subject decoder and alleviate the learning burden of the subject decoder. The subject and object decoders update the subject anchor box B_s, the object anchor box B_o, and the query embeddings e layer by layer in a parallel manner. Then, the object embedding e_o from the object decoder is used to predict the object class, and the subject box and object box are used to generate the verb box B_v. Next, the object and subject embeddings are fed into the S-O attention module to produce the fused verb embeddings. Finally, the verb boxes generated from the subject and object boxes, together with the verb embeddings, are fed into the verb decoder to predict the verb class. Verb Decoder with S-O attention module. Since the verb boxes are directly generated from the subject and object boxes, the verb decoder can focus on the verb recognition without learning to predict the verb boxes. As shown in Figure <ref>, the verb recognition part mainly consists of two parts: the S-O attention module and the verb decoder. To integrate the knowledge of the verb label during the feature fusion, we further fuse the verb label priors in our designed S-O attention. Moreover, we introduce a bottom-up path in S-O attention to amplify the information from the bottom to the top layer. In Figure <ref>, we illustrate the calculation of the S-O attention module. Consider the subject embedding e_s_i∈ℝ^N_q× D and object embedding e_o_i∈ℝ^N_q× D from the i-th layer (i>1), where N_q is the number of queries. First, similar to GEN-VLKT <cit.>, our S-O attention module fuses the subject and object embeddings by a sum operation, and the weights of the cross-attention are shared across different layers.
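Before detailing the S-O attention computation, the ASMBR construction above can be summarized in a short sketch; the boxes are in center-size format and the example values are arbitrary.

def asmbr_verb_box(box_s, box_o):
    """Adaptive Shifted MBR: verb box from subject/object boxes in (cx, cy, w, h) format."""
    xs, ys, ws, hs = box_s
    xo, yo, wo, ho = box_o
    cx, cy = (xs + xo) / 2.0, (ys + yo) / 2.0          # shift the centre to the midpoint
    wv = (ws + wo) / 2.0 + abs(xs - xo)                # shrinks when the boxes overlap in x
    hv = (hs + ho) / 2.0 + abs(ys - yo)                # shrinks when the boxes overlap in y
    return (cx, cy, wv, hv)

# arbitrary example: an overlapping subject/object pair
print(asmbr_verb_box((0.40, 0.50, 0.30, 0.60), (0.55, 0.45, 0.20, 0.25)))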
Then, the fused embeddings e_so_i=(e_o_i + e_s_i)/2 are used to calculate the cross-attention with the verb label embeddings t_v. The learnable verb label embeddings t_v∈ℝ^N_q× D as the basic knowledge of the verb labels will be introduced next, in Section <ref>. Furthermore, a short-cut path is added to enhance the information in the current layer. Finally, the verb embedding e_v_i after the bottom-up path can be defined as: e_v_i = ( (CrossAttn(e_so_i-1, t_v) + e_so_i-1 ) + ( CrossAttn(e_so_i, t_v) + e_so_i ) ) /2 Then, the verb embeddings from the top layer are fed into the verb to further extract the global semantic information based on the global feature f_g and the verb boxes. §.§ Split Label Embeddings To explicitly equip the prior label knowledge into the decoders and disentangle the training and decoding target, as shown in Figure <ref>, two kinds of learnable label embeddings are used to initialize the query embeddings for SOV decoders. Different from the original denoising method <cit.>, we use the label embeddings both in the denoising and inference parts and enable the inference part to obtain the input query with label-specific information from the beginning. We define the object label embeddings t_o∈ℝ^C_o× D as the object label priors, which consist of C_o vectors with D dimensions, where C_o is the number of object classes and D is the hidden dimension of the transformer. Similarly, the verb label embeddings t_v∈ℝ^C_v× D are defined as the verb label priors. With the object label and verb label priors, we first initialize the query embeddings of object label q_o∈ℝ^N_q× D and verb label q_v∈ℝ^N_q× D by linear combining the object label and verb label embeddings with two learnable coefficient matrices A_o∈ℝ^N_q× C_o and A_v∈ℝ^N_q× C_v, respectively. Then, we add the object and verb label embeddings to obtain the inference query embeddings q_ov∈ℝ^N_q × D. The initialization of q_o, q_v, and q_ov is defined as follows: q_o = A_o t_o, q_v = A_v t_v q_ov = q_o + q_v §.§ Specific Target Guided Denoising As the object and verb labels are the targets of HOI detection, the two label embeddings can be viewed as the specific target priors. Since the denoising query embeddings are generated from the specific target priors and used to guide the denoising training, thus, we call our denoising strategy as Specific Target Guided (STG) denoising. In Figure <ref>, we show the initialization of the DN query embeddings and visualize the process of adding noise to one of the ground-truth HOI instances. Given the ground-truth object label set O_gt={o_i}_i=1^K and verb label set V_gt={v_i}_i=1^K of an image, where o_i and v_i are the labels of the object and verb classes, K is the number of ground-truth HOI instances, two kinds of label DN query embeddings are initialized. Following the DN-DETR <cit.>, for the k-th ground-truth HOI instance, the noised object label o'_k is obtained by randomly flipping the ground-truth index of the object label o_k to another object class index, and N_p groups of noised labels are generated. Next, the object DN query embeddings q_dn^(o)∈ℝ^N_p· K× D are gathered from the object label embeddings t_o according to the indexes of the noised object labels O'_gt. Because the verb label consists of co-occurrence ground-truth classes, to keep the co-occurrence ground-truth indexes appearing in the noised verb label, we randomly flip the other indexes of the ground-truth verb label to generate the noised verb label v'_k. 
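A simplified sketch of this label-noising step is given below; the class counts match HICO-DET, but the flip probabilities are placeholders, and the verb noising is reduced to switching on a random subset of non-ground-truth indices.

import numpy as np

rng = np.random.default_rng(0)
C_o, C_v = 80, 117                     # object and verb class counts (as in HICO-DET)
eta_o, flip_p = 0.3, 0.1               # assumed object flip rate / per-index verb flip rate

def noise_object_label(o_gt):
    # flip the ground-truth object index to a different class with probability eta_o
    if rng.random() < eta_o:
        return int(rng.choice([c for c in range(C_o) if c != o_gt]))
    return o_gt

def noise_verb_label(v_gt):
    # keep the co-occurring ground-truth verbs; randomly switch on some other indices
    noised = v_gt.copy()
    background = np.where(v_gt == 0)[0]
    noised[background[rng.random(background.size) < flip_p]] = 1
    return noised

v_gt = np.zeros(C_v, dtype=int)
v_gt[[4, 57]] = 1                      # a hypothetical multi-hot ground-truth verb label
print(noise_object_label(12), noise_verb_label(v_gt).sum())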
Then, the verb label DN query embeddings q_dn^(v)∈ℝ^N_p· K× D are the sum of the verb label DN embeddings selected from the verb label embeddings t_v according to the indexes of the noised verb labels V'_gt. Finally, we concatenate the object DN query embeddings and verb DN query embeddings to form the DN query embeddings q_dn∈ℝ^2N_p· K× D for the denoising training. Since the specific target priors can be learned by the denoising training separately and used to guide the inference, our STG can accelerate the training convergence and improves the inference performance at the same time. §.§ Training and Inference Our proposed framework SOV-STG is trained in an end-to-end manner. For inference query embeddings, the Hungarian algorithm <cit.> is used to matching the ground-truth HOI instances with the predicted HOI instances, and the matching cost and the training loss are the same as QAHOI <cit.>. Moreover, the denoising and inference parts are trained with the same loss function. With the basic concept that the same ground-truth label flip rate is difficult for the model to denoise at the beginning of the training but becomes acceptable during the training, we further improve the denoising strategy by introducing a dynamic DN scale factor γ∈ (0,1) to control the object label flip rate η_o∈ (0,1) and the verb label denoising rate η_v∈ (0,1) according to the training epochs. The noised verb labels need to contain ground-truth labels, thus, the verb label denoising rate η_v is used to control the percentage of the ground-truth verb labels for adding noising, and an additional verb label flip rate λ_v∈ (0, 1) is used to control the flipping rate of the elements inside the multi-hot verb lables which is selected by η_v. With the dynamic DN scale strategy, the label flip rate η will be set to γ·η at the beginning of the training and linearly increase to η during the training. As our STG moves the label encoding embeddings out of the denoising part as the specific target priors, SOV-STG uses all of the parameters in training and inference. § EXPERIMENTS We evaluate our proposed SOV-STG on the HICO-DET <cit.> and V-COCO <cit.> datasets to compare with current SOTA methods and conduct extensive ablation studies to analyze the contributions of each component and show the effectiveness of our proposed method. §.§ Experimental Settings Dataset and Metric. The HICO-DET <cit.> dataset contains 38,118 images for training and 9,658 images for the test. The 117 verb classes and 80 object classes in HICO-DET form 600 HOI classes. According to the number of HOI instances appearing in the dataset, the HOI classes are divided into three categories: Full, Rare, and Non-Rare. Moreover, considering HOI instances including or not including the unknown objects, the evaluation of HICO-DET is divided into two settings: Default and Known Object. The V-COCO <cit.> dataset contains 5,400 images for training and 4,946 images for the test. In V-COCO, 80 object classes and 29 verb classes are annotated, and two scenarios are considered: scenario 1 with 29 verb classes and scenario 2 with 25 verb classes. We follow the standard evaluation <cit.> and report the mAP scores. Implementation Details. We adopt the DAB-Deformable-DETR trained on the COCO <cit.> dataset to initialize the weight of the feature extractor, the subject decoder, and the object decoder. The feature extractor consists of a ResNet-50 <cit.> backbone and a 6-layer deformable transformer encoder. 
Similar to GEN-VLKT <cit.>, we implement three variants of SOV-STG by adjusting the backbone and the number of layers in all the decoders, which are denoted as SOV-STG-S with ResNet-50 and 3-layer decoders, SOV-STG-M with ResNet-101 and 3-layer decoders, and SOV-STG-L with ResNet-101 and 6-layer decoders. The hidden dimension of the transformer is D=256, and the number of the query is set to N_q=64. For the DN part, 2N_p=6 groups of noised labels are generated for each ground-truth HOI instance. The dynamic DN scale is set to γ=2/3, and we define the maximum denoising level by setting the noising rate of the box to δ_b = 0.4, the object label flip rate to η_o = 0.3, the verb denoising rate to η_v = 0.6, and the verb label flip rate to λ_v = 0.6. We train the model with the AdamW optimizer <cit.> with a learning rate of 2e-4 (except for the backbone, which is 1e-5 for HICO-DET, 2e-6 for V-COCO) and a weight decay of 1e-4. The batch size is set to 32 (4 images per GPU), and the training epochs are 30 (learning rate drops at the 20th epoch), which is one-third of the GEN-VLKT <cit.>, and one-fifth of the QPIC <cit.> and QAHOI <cit.>. All of the experiments are conducted on 8 NVIDIA A6000 GPUs. §.§ Comparison to State-of-the-Arts In Table <ref>, we compare our proposed SOV-STG with the recent SOTA methods on the HICO-DET dataset. Our SOV-STG-S with ResNet-50 backbone achieves 33.80 mAP on the Full category of the Default setting. Compared with the transformer-based one-stage methods, QAHOI and MSTR, which are based on the reference point, SOV-STG benefits from the anchor box priors and label priors and achieves 7.62 (29.11%) and 2.63 (8.44%) mAP improvements, respectively. Note that, without any extra language prior knowledge <cit.>, SOV-STG-M outperforms GEN-VLKT-M by 0.26% in one-third of the training epochs. Our proposed framework and learning strategy close the gap in training efficiency between transformer-based one-stage and two-stage methods. As a result, compared with UPT, SOV-STG-L achieves 3.42 (10.56%) mAP improvements with only 10 more training epochs. Since our SOV-STG explicitly makes full use of the ground-truth information, compared with DOQ, which also uses ground-truth to guide the training, SOV-STG-S achieves 1.05% mAP improvement with less than half of the training epochs of DOQ. Furthermore, our best model SOV-STG-Swin-L with Swin-Large <cit.> backbone achieves a new SOTA performance of 43.35 mAP, which outperforms FGAHOI-Swin-L by 14.23%. Similarly, in Table <ref>, SOV-STG-M achieves 63.7 mAP on AP_role^S1 and surpasses UPT and GEN-VLKT-L by 3.92% and 0.63%, respectively. §.§ Ablation Study We conduct all the ablation experiments on the HICO-DET dataset with the SOV-STG-S model, and if not explicitly noticed, the same training setting is used as the training of our SOTA model. Contributions of proposed modules. SOV-STG is composed of flexible decoding architecture and training strategies. To clarify the contributions of each proposed module, in Table <ref>, we remove the proposed modules one by one and conduct ablation studies on the HICO-DET dataset. The row of (5) indicates the experiment removing the STG strategy and the S-O attention module is degraded to a sum fusion module which is similar to GEN-VLKT <cit.>. From the result, the STG strategy and S-O attention improve the performance by 5.96% on the Full category. 
Moreover, without the STG strategy, our framework also achieves a significant improvement over QPIC (ResNet-50) by 9.74% with one-fifth of the training epochs. Next, in (4), we remove the verb decoder in (5). As the result, comparing (4) with (5), without the verb decoder, the performance drops by 4.01%. Then, in (3), we remove the subject decoder and the sum fusion module, and update both the subject and object boxes by the object decoder. Without balancing the decoding burden of the detection, compared with (4), the performance drops by 1.57%. Furthermore, in (1) and (2), we conduct drop-one-out experiments on the subject and verb decoder, respectively. Compared with (1) and (2), the model without the verb decoder is worse than the model without the subject decoder, which indicates that the verb decoder plays a more critical role. Formulations of the verb box. The proposed ASMBR is an adaptable verb box that dynamically considers the spatial relationship between the subject and object box and guides the verb decoder to extract semantic features from the corresponding region. To verify the effectiveness of ASMBR, we use the verb box degraded from the ASMBR to conduct ablation studies, and the results are shown in Table <ref>. From the results of (3) to (5), the adaptive and shift operations for the MBR promote the performance of the verb box, by 1.08% on the Full category and 5.17% on the Rare category. Furthermore, in (1) and (2), we directly use the object or subject box as the verb box, and the results show that the region of the object plays a more critical role in the verb prediction. S-O attention module. The S-O attention module is the core module of the SOV model, which is responsible for the fusion of the object and subject features. To explore the strength of the S-O attention mechanism, different variants of designs we have attempted are shown in Table <ref>. The result of (1) indicates the S-O attention module used in SOV-STG-S. In (2), we remove the bottom-up path in S-O attention. Since the bottom-up path strengthens the feature fusion, without the information flow from lower layers, the performance drops by 1.18%. In (3), similar to GEN-VLKT <cit.>, we attempt to feed all layers' fused embeddings into the verb decoder. However, compared with (1), the accuracy drops by 3.05%. The cross-attention enables the fused embeddings to be enriched by the verb label embeddings. In (4), we remove the cross-attention of S-O attention, and the attention module is degraded to a sum fusion module. From the result, the performance drops by 2.34% compared with (1). Similarly, in (5), we also attempt the multi-layer design of the (4), and the performance also drops. We consider that the multi-layer design is not suitable for the verb prediction as the deformable transformer attention mechanism is a local attention mechanism, which focuses on different parts of the source feature in different layers. Specifically, the sampling points in the verb decoder are not related to the sampling points in the object and subject decoders, which focuses on different positions of the global semantic feature. Consequently, the multi-layer design forces the verb decoder to match the attention of the object and subject decoders, which leads to the performance drop. Denoising Strategies. In Table <ref>, we investigate the denoising strategies of three parts of the targets, i.e., the box coordinates, the object labels, and the verb labels. The result of (6) indicates the result of SOV-STG-S. 
In (1), we set the noise rate of box coordinates to δ_b=0, the object label flip rate to η_o=0, and the verb label denoising rate to η_v=0; thus, the ground-truth box coordinates, object labels, and verb labels are directly fed into the model without any noise. From the result, the accuracy drops by 2.40% compared with the full denoising training in (6). In (3), (4), and (5), we conduct drop-one-out experiments, and the results show that each part of the denoising strategy is effective. Comparing (2) with (3) and (4) with (3), the verb denoising increases the performance when it is used together with the object denoising. Dynamic DN scales. The dynamic DN scale is used to adjust the denoising training difficulty during the whole training session. In Figure <ref>, we adjust the dynamic DN scale (γ) to reveal the effects of different dynamic DN scales. Compared with γ=1, the best performance is achieved with γ=2/3, and the dynamic DN strategy mainly improves the performance on the Full and Rare categories. § CONCLUSION AND FUTURE WORK In this paper, we propose a novel one-stage framework, SOV, with HOI split decoders for target-specific decoding, and a Specific Target Guided denoising strategy, STG, for efficient training. Our framework SOV-STG adopts a new box format to represent HOI instances and learns HOI-specific priors for decoding. With the well-designed architecture and efficient training strategy, our framework achieves state-of-the-art performance with less training cost. Since our architecture disentangles the HOI detection by specific priors and decoders, it is easy to improve any one of them. In the future, we are going to incorporate knowledge from language models to improve performance.
http://arxiv.org/abs/2307.00409v1
20230701184515
Maximum Overlap Area of Several Convex Polygons Under Translations
[ "Hyuk Jun Kweon", "Honglin Zhu" ]
cs.CG
[ "cs.CG", "68U05", "I.3.5" ]
Let k ≥ 2 be a constant. Given any k convex polygons in the plane with a total of n vertices, we present an O(nlog^2k-3n) time algorithm that finds a translation of each of the polygons such that the area of intersection of the k polygons is maximized. Given one such placement, we also give an O(n) time algorithm which computes the set of all translations of the polygons which achieve this maximum. § INTRODUCTION Shape matching is a critical area in computational geometry, with overlap area or volume often used to measure the similarity between shapes when translated. In this paper, we present a quasilinear time algorithm to solve the problem of maximizing the overlap area of several convex polygons, as stated in the following theorem. Let P_0,P_1,…,P_k-1 be convex polygons, with a total of n vertices, where k is constant. In O(nlog^2k-3n) time, we can find translations _0,_1,…,_k-1 maximizing the area of (P_0+_0) ∩…∩ (P_k-1+_k-1). Once we have found a placement _0,_1,…,_k-1 that maximizes the overlap area, we can compute the set of all such placements in linear time. With the notation in <Ref>, suppose that we have found a placement (_0,_1,…,_k-1) maximizing the overlap area. Then in O(n) time, we can compute the set of all placements that maximize the overlap area. This set is represented in terms of O(n) linear constraints without redundancy. Suppose that we have k polytopes in ℝ^d with n vertices in total. Clearly, the overlap volume function under translation is a piecewise polynomial function. To find the maximum overlap volume under translation, we can compute the maximum on each piece. For example, Fukuda and Uno presented an O(n^4) time algorithm for maximizing the overlap area of two polygons in ℝ^2 <cit.>. They also gave an O((kn^dk+1)^d) time algorithm for the problem with k polytopes in ℝ^d <cit.>. If the polytopes are convex, then the overlap volume function is log-concave. With this additional structure, one may apply a prune-and-search technique and make the algorithm much faster. For example, de Berg et al. gave a highly practical O(nlog n) time algorithm to find the maximum overlap of two convex polygons in ℝ^2 <cit.>. Ahn, Brass and Shin gave a randomized algorithm for finding the maximum overlap of two convex polyhedra in expected time O(n^3log^4 n) <cit.>. Ahn, Cheng and Reinbacher <cit.> gave an O(nlog^3.5n) time algorithm for the same problem after taking a generic infinitesimal perturbation. The last two results cited from <cit.> and <cit.> have also been generalized to higher-dimensional cases within the same papers. On the other hand, there are few known results for problems involving several convex shapes. In this regard, the authors proposed an O(nlog^3 n) time algorithm to find the maximum overlap area of three convex polygons <cit.>. This result is based on an O(nlog^2 n) time algorithm that finds the maximum overlap area of a convex polyhedron and a convex polygon in ℝ^3 <cit.>. The main algorithm of this paper is a strict generalization of both <cit.> and <cit.>. The model of computation is the real RAM model. In particular, we assume that in the field of real numbers ℝ, binary operations +, -, × and / as well as binary relations < and = can be exactly computed in constant time. We remark that the base field ℝ can be replaced by any ordered field such as ℚ and ℝ((ε)).
§ NOTATION AND TERMINOLOGY In this paper, we use the notation f to refer to the closed support of a function f, i.e., the closure of the set of points where f is nonzero. Given a set S of vectors over a field R, its spanning space is denoted as _R S. For two sets A,B∈ℝ^d, we define their Minkowski sum and difference as A+B={+|∈ A, ∈ B} and A-B = {|+B⊂ A}, respectively. We consider closed polytopes unless otherwise specified. When referring to a polytope P, its (geometric) interior consists of the set of points not on the facets, while its (geometric) boundary comprises the set of points on the facets. On the other hand, the topological interior of P⊂R^n is the set of points in P that have an open ball entirely contained in P. The topological boundary of P consists of points that are on the interior of P. Note that the (geometric) interior is an intrinsic property, while the topological interior is an extrinsic property. We employ the technique of symbolic infinitesimal translation, similar to <cit.>. However, unlike <cit.>, our problem requires multiple levels of infinitesimal numbers to handle multiple polygons. Given a field R, let R((ε)) be the field of Luarent series of R. We work over a very large ordered field ℝ⟨⟨ε_0,ε_1,…⟩⟩ = ⋃_n,s > 0ℝ((ε_0^1/n,ε_1^1/n,…,ε_s-1^1/n)), which is the field of Puiseux series with countably many variables.[If the base field R is not ℝ, we may need to take an algebraic closure of R.] Here, ε_s is a positive infinitesimal smaller than any positive expression involving only ε_0,…,ε_s-1. Then ℝ⟨⟨ε_0,…⟩⟩ is a real closed field <cit.>. Hence, assuming constant time computability for basic operations in ℝ⟨⟨ε_0,…⟩⟩, any algorithm in the real RAM model can be executed with the same time complexity using ℝ⟨⟨ε_0,…⟩⟩. Of course, ℝ⟨⟨ε_0,…⟩⟩ is far from computable, so we limit our usage of it in this paper. A geometric object in ℝ^m (such as flats, hyperplanes, polytopes, etc.) is called ε(s)-translated if it is defined by equations and inequalities that involve only linear polynomials of the form · + b where ∈ℝ^m is a vector of variables, ∈ℝ^m and b ∈_ℝ{1,ε_0,ε_1,…,ε_s-1} are constants. By restricting the inputs to ε(s)-translated objects, we can usually ensure that the whole computation is performed within a finite ℝ-vector subspace of ℝ⟨⟨ε_0,…⟩⟩ of dimension O(1). This enables us to apply many algorithms involving ε(s)-translated polytopes with the same time complexity, while guaranteeing the mathematical rigor. Specifically, the following algorithms that we use in our work are valid with ε(s)-translated objects: * Computing intersection of two convex polygons <cit.> * Computing intersection of two convex polyhedra <cit.> * Computing maximum sectional area of a convex polyhedron <cit.> * Computing (1/r)-cuttings <cit.> * Solving linear programming <cit.> Moreover, our algorithm performs computations within _ℝ{ε_0^e_0ε_1^e_1…ε_2k-4^e_2k-4 | 0≤ e_i≤ 2}. § CONFIGURATION SPACE The aim of this section is to define the configuration space, the domain of the overlap area function, and discuss its properties. Throughout the paper, we take k convex polygons P_0, P_1, …, P_k-1, where k is a constant. Let _0,…,_k-1∈ℝ^2 be vectors of indeterminates. The overlap area of I = (P_0+_0) ∩ (P_1+_1) ∩…∩ (P_k-1+_k-1) is invariant under the map (_0,…,_k-1)↦ (_0+,…,_k-1+). Therefore, we define the configuration space as a (2k-2)-dimensional quotient linear space {(_0,…,_k-1)_i∈ℝ^2}/{(,…,)∈ℝ^2}. Any element of will be called a placement. 
We denote (_0;…;_k-1)∈ as a placement that corresponds to (_0,…,_k-1)∈(ℝ^2)^k. We define the overlap area function →[0,∞) as (_0;…;_k-1) | (P_0+_0) ∩…∩ (P_k-1+_k-1) |. and then its support is compact. To compute (_0;…;_k-1) in linear time, we use the following theorem: Let P and Q be convex polygons of m vertices and n vertices, respectively. Then P∩ Q can be computed in O(m+n) time. This was first proved by Shamos <cit.>; see also <cit.>. The vertices (x_0,y_0),…,(x_r-1,y_r-1) of the overlap I can be expressed as linear polynomials in _0,…,_k-1 in a generic setting. Ordering them in counter-clockwise direction, the area of I can be computed using the shoelace formula: |I| = 1/2∑_i ∈ℤ/rℤ(x_iy_i+1-x_i+1y_i), where the indices are taken modulo r. Therefore, is a piecewise quadratic function of _0,…,_k-1. Note that may not be quadratic in two cases: * an edge of a polygon P_i+_i contains a vertex of another polygon P_j+_j and * edges of three distinct polygons P_i+_i, P_j+_j and P_k+_k intersect at one point. Each of these events defines a polytope in of codimension 1. Following <cit.>, we call such a polytope as an event polytope. An event polytope defined by <ref> (resp. <ref>) is called of type I (resp. of type II). A hyperplane containing a type I (resp. type II) event polytope is also called of type I (resp. of type II). There are O(n^2) type I hyperplanes and O(n^3) type II hyperplanes. § LINEAR PROGRAMMING Let L ⊂ be an ε(s)-translated m-flat. The goal of this section is to provide an O(n) time algorithm that finds a placement ∈ L such that ()≠ 0. If no such placement exists, the algorithm returns . When working with two polygons, is simply the Minkowski sum P_0 + (-P_1), where -P_1 is the polygon P_1 reflected about the origin. However, when working with more than two polygons, the problem becomes more complex. To tackle this problem, we use linear programming with Meggido's solver. A linear programming problem with a fixed number of variables and n constraints can be solved in O(n) time. Let n_i be the number of vertices of P_i. Then P_i is defined by n_i linear inequalities: f_i,a() ≥ 0 (for a<n_i). The codimension of the m-flat L⊂ is 2k-m-2. Thus, L is defined by ε(s)-translated 2k-m-2 linear equations: g_b() = 0 (for b<2k-m-2). Then a point ∈ℝ^2 and a placement = (_0;…;_k-1) ∈ satisfy the constraints { f_i,a(-_i) ≥ 0 (for i<k and a<n_i) and g_b() = 0 (for b<2k-m-2). . if and only if ∈ (P_0+_0) ∩…∩ (P_k-1+_k-1) and ∈ L. Therefore, we obtain the lemma below. We have ∈ L∩ if and only if (,) satisfies (<ref>) for some ε(s)-translated point in a plane. Hence, in O(n) time, we can get ∈ L∩, by solving any linear programming with the constraints (<ref>). One problem is that might be on the (topological) boundary of . Let M be the solution set of ε(s)-translated linear constraints { p_i() ≥ 0 (for i < n) and q_j() = 0 (for j < m). where ∈ℝ^d and d is constant. Then we can compute the maximal affinely independent set S in O(m+n) time. By <Ref>, we can assume that M≠∅. Moreover, by eliminating variables, we may also assume that m = 0. To compute the maximal affinely independent set, we start with an empty set S and gradually add points to it. At each step, we look for a new point that is not in the affine hull of the current set S. To do this, we first select a linear functional h that is non-zero but evaluates to zero on all points in S. We can find such a functional in constant time since d is a constant. 
We then find the minimum and maximum values of h subject to the constraints in M, denoted by _min and _max, respectively. If |S| ≤ M, then h(_min)<h(_max). Therefore, for some ∈{_min,_max}, the set S∪{} should be also affinely independent. In this case, we replace S by S∪{}. If not, we terminate the process. In O(n) time, we can either return ∈ L such that ()≠ 0, or return if none exists. Let M ⊂ℝ^2× L be the solution set of the constrains (<ref>). Then () ≠ 0, if and only if (,) is an topological interior point of M ⊂ℝ^2× L for some ∈ℝ^2. Applying <Ref>, we get the maximal affinely independent set S of M. If |S| ≤ m+2, then M < 2+ L, and M has no topological interior point, so we return . If |S| = m+3, then (_avg,_avg) = 1/|S|∑_(,)∈ S (,) is an topological interior points of M ⊂ℝ^2× L. Hence, we return _avg. § DECISION PROBLEM We aim to find the maximum of on an m-flat L⊂ using an induction on m. To do so, we apply a prune-and-search technique on the set of event polytopes. However, this technique requires solving a decision problem: given a hyperplane H⊂ L, we must determine on which side of H the maximum of |_L lies. In this section, we provide an algorithm for this decision problem under certain induction hypotheses. The square root of →[0,∞) is concave on its support. This follows immediately from the Brunn–Minkowski inequality <cit.><cit.>; see also <cit.>. Now, we assume the following hypothesis in the rest of this section. Let s be any constant and L⊂ be an ε(s)-translated (m-1)-flat. Then we can find ∈ L maximizing |_L in O(T(n)) time. We can partition L into ε(s)-translated open polytopes on which is quadratic. Therefore, the maximum ∈ L of |_L is an ε(s)-translated placement. Given an ε(s)-translated m-flat L and its ε(s)-translated hyperplane H⊂ L, let M⊂ L be the set of maximum points of |_L. We can determine which side of H contains M in O(T(n)) time. For any t∈ℝ⟨⟨ε_0,…⟩⟩, let h(t) = max_∈ t + H(). Let N ⊂ε(s) be the set of all maximum points of h(x). It suffices to decide on which side N lies with respect to 0. By <Ref>, the function h ε(s) → [0,∞)_ε(s) is unimodal. By <Ref> with s+1, we can compute the sequence S = (h(-ε_s+1), h(0), h(ε_s+1)) in O(T(n)) time. If h(0) = 0, then all interior points of h lie in the same side with respect to 0. In this case, apply <Ref> and attempt to get one point of h. If h(0) ≠ 0, there are three remaining cases. * If S is strictly increasing, then N ⊂ (0,∞). * If S is strictly decreasing, then N ⊂ (-∞,0). * If S is not strictly monotonic, then 0 ∈ N. This proof highlights the necessity of infinitesimal translations for our algorithm. Since s only increases in this step, it is bounded by = 2k-2 throughout the paper. § TWO POLYGONS The goal of this section is to present a linearithmic time algorithm for finding a translation that maximizes the overlap area of two convex polygons under translations. This problem was previously studied by de Berg et al. <cit.>, but our approach is different and allows for handling multiple polygons. In this section, we only have two convex polygons P = P_0 and Q = P_1 with n and m vertices, respectively. We consider only one translation vector = _1 - _0, and since is two-dimensional, we refer to event polytopes and hyperplanes as event line segments and lines, respectively. Since there are no type II line segments, all event line segments can be defined by one of the following two events: * an edge of a polygon P contains a vertex of polygon Q+ and * an edge of a polygon Q+ contains a vertex of polygon P. 
The first type of event lines segment will be called of type (0,1) and the second type of event lines will be called type (1,0) line segments. The same rules apply to event lines. Type (0,1) lines are organized into n groups, each with m parallel lines. Our goal is to efficiently prune this set, requiring an appropriate representation. We use 'arrays' to denote sequential data structures with constant time random access, and assume the size of each array is predetermined. The n groups of parallel lines are represented by sorted arrays A_0, A_1, …, A_n-1. Each array A_i holds the y-intercepts and a single slope value for the lines in the i-th group. For vertical lines in A_i, we store the x-intercepts instead. A slope-intercept array A consists of sorted arrays A_0, A_1, …, A_n-1, each with an associated potentially infinite number. Its number of groups is n, and its size |A| is the sum of the sizes of A_i. Another slope-intercept array A' is a pruned array of A if it consists of A with identical slopes. We can use <cit.> to prune a slope-intercept array A, but the description is complicated and the result is weaker. Instead, we rely on a stronger version, which we prove in the appendix. For a slope-intercept array A with n groups of lines, we can partition the plane ℝ^2 into four closed quadrants T_0,…,T_3 using one horizontal line ℓ_0 and one non-horizontal line ℓ_1. Additionally, for each i<4, we can compute pruned array P_i of A that include all lines intersecting the interior of P_i and have size at least (7/8)|A|, all in O(n) time. Now, we will represent the set of type (0,1) event lines using a slope-intercept array. We have n linear functions f_0,…,f_m-1 and m vertices v_0,…,v_n-1 of a convex polygon, both ordered counterclockwise by their gradient vectors and arrangement, respectively. In O(m+n) time, we can find indices a(0),…,a(n-1) such that vertex v_a(i) minimizes f_i(v_j) for all j<m. In O(m) time, we can find a(0) by computing all f_0(v_j). Now, suppose that a(i-1) is computed. Then compute the sequence f_i(v_a(i-1)),f_i(v_a(i-1)+1),f_i(v_a(i-1)+2),… until it increases after some index a'. Then f_i(v_a') maximizes f_i, so a(i) = a'. By repeating this process, we can find all a(0),a(1),…,a(m-1). Observe that v_a(0), v_a(1), …, v_a(n-1) are sorted counterclockwise. Since we only perform one rotation, this process requires O(m+n) time. In O(m+n) time, we can construct a slope-intercept array of 2n groups of size mn representing the set of all type (0,1) lines Let P be a polygon with n linear inequalities f_i() ≥ 0, sorted counterclockwise by the gradients of ∇ f_i. Let ℓ_i be the line defined by f_i=0, and let v_0,…,v_m-1 be the vertices of Q sorted counterclockwise and indexed modulo m. Then the set of all type (0,1) lines is S = {-v_j + ℓ_i | i<n and j< m }. By using <Ref>, we can determine the indices a(i) and b(i) for each i, such that v_a(i) (resp. v_b(i)) is the vertex of Q that minimizes (resp. maximizes) f_i(v_j) for all j<m. This computation can be done in O(m+n) time. We can then construct two arrays: A_2i (-v_a(i) + ℓ_i , -v_a(i)+1 + ℓ_i , …, -v_b(i)-1 + ℓ_i) and A_2i+1 (-v_b(i) + ℓ_i , -v_b(i)+1 + ℓ_i , …, -v_a(i)-1 + ℓ_i), whose intercepts are sorted. Note that we do not need to compute the entries of A_i explicitly; once we have computed a(i) and b(i), we can perform random access in O(1) time using the formulas above. The resulting arrays A_0,…,A_2n-1 provide a slope-intercept array representing the set of all type (0,1) lines. 
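A minimal sketch of the rotating scan used in the proof above (function and variable names are ours): assuming, as in the construction, that the linear functions are listed in counter-clockwise order of their gradient vectors (for instance the edge inequalities of a convex polygon, so that consecutive gradients turn by less than π) and that the vertices are listed counter-clockwise, the minimizing vertex index only ever advances around the polygon, giving O(m+n) total work, as in the lemma statement.

```python
def minimizing_vertices(gradients, offsets, vertices):
    # gradients[i], offsets[i] define f_i(p) = gradients[i] . p + offsets[i], listed in
    # counter-clockwise order of the gradient vectors; vertices is a convex polygon in
    # counter-clockwise order.  Returns a(i) = index of the vertex minimizing f_i.
    m = len(vertices)

    def f(i, j):
        (ax, ay), (x, y) = gradients[i], vertices[j]
        return ax * x + ay * y + offsets[i]

    a = [min(range(m), key=lambda j: f(0, j))]   # a(0) by brute force
    for i in range(1, len(gradients)):
        j = a[-1]
        # Walk counter-clockwise while the value of f_i still decreases.
        while f(i, (j + 1) % m) < f(i, j):
            j = (j + 1) % m
        a.append(j)
    return a

# Example: the four edge normals of an axis-aligned square against a triangle.
print(minimizing_vertices([(0, -1), (1, 0), (0, 1), (-1, 0)], [0, 0, 0, 0],
                          [(0.0, 0.0), (3.0, 1.0), (1.0, 2.0)]))  # [2, 0, 0, 1]
```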
Let P and Q be convex polygons, with m and n vertices, respectively. In O((m+n)log(m+n)) time, we can finds a translation ∈ℝ^2 maximizing the overlap area () = |P ∩ (Q+)|. For any line ℓ⊂ℝ^2, we can compute a point ∈ℓ maximizing |_ℓ in O(m+n) time by <cit.>. Using <Ref>, we can determine on which side of ℓ the set of maxima of lies in O(m+n) time. By constructing a slope-intercept array A of (m+n) groups with <Ref>, we can represent all event lines in O(m+n) time. Applying <Ref> to ℓ_0 and ℓ_1 obtained from <Ref>, we can prune A to about 1/8 of its size, and this step requires O(m+n) time. After O(log(m+n)) steps, only O(1) lines remain, and we can find a placement that maximizes the overlap area () directly. § SEVERAL POLYGONS The aim of the section is to give an O(nlog^2k-3 n) time algorithm to compute ∈ maximizing . We first restrict the domain of into an m-flat L⊂ and prove a slightly stronger statement below by induction on m. Let L⊂ be an ε(s)-translated m-flat. Then in O(nlog^m-1 n) time, we can find ∈ L maximizing |_L. The proof of the base case can be obtained by modifying the proof of <cit.>. Let ℓ⊂ be an ε(s)-translated line. Then in O(n) time, we can find ∈ℓ maximizing |_ℓ. We parameterize ℓ by f(t) = (f_0(t), f_1(t), …, f_k-1(t)), where f_iℝ→ℝ^2 are ε(s)-translated linear functions. We define cylinders C_i (x,y,z)∈ℝ^3,|, (x,y) ∈ f_i(z)+P_i. We can compute C = C_0∩ C_1∩…∩ C_k-1 in O(n) time using Chazelle's algorithm <cit.>. Let H_t⊂ℝ^3 be the hyperplane defined by z = t. Then we have |C∩ H_t| = |(P_0+f_0(t))∩…∩(P_k-1+f_k-1(t))|. We can find t maximizing |C∩ H_t| in O(n) time using <cit.>. For such a t, the maximum point of |_ℓ is f(t)∈ℓ. Therefore, we assume that m>1 and the following induction hypothesis is true. Let L⊂ be an ε(s)-translated (m-1)-flat. Then we can find ∈ L maximizing |_L in O(nlog^m-2 n) time. We will first find an m-simplex T_I⊂ L such that T_I has the maximum point of Π|L and no type I hyperplane intersects the interior of T_I. Recall that type I hyperplanes are defined by the following event. <ref> an edge of a polygon P_i+_i contains a vertex of another polygon P_j+_j and If i and j are specified, then it will be called a type (i,j) hyperplane. Then type I hyperplanes are grouped into k(k-1) groups, each of which is the set of type (i,j) hyperplanes. Any type (i,j) hyperplane H is defined by a linear equation of the form ·(_i-_j) = c for some ∈ℝ^2 and c∈ℝ. Consider the projection π_i,j →ℝ^2 ↦_i - _j. Then π_i,j(H)⊂ℝ^2 is a line. Such a line will also be called of type (i,j). Thus, we will find a triangle T_i,j⊂ L such that no type (i,j) lines intersect the interior of T_i,j. In O(nlog^m-1n) time, We can find a triangle T_i,j⊂ℝ^2 such that * a maximum point of |_L lies on π_i,j^-1(T_i,j) ∩ L, and * no type (i,j) lines intersects the interior of T_i,j. The proof is similar to that of <Ref>. Let M ⊂ L be the set of placements maximizing |_L. To determine on which side of a line ℓ the set π_i,j(M) lies, we apply <Ref>, which takes O(nlog^m-2n) time. We can represent all type-(i,j) lines by a slope-intercept array A in O(n) time, as shown in <Ref>. Applying <Ref> to obtain lines ℓ_0 and ℓ_1, we can prune A to about 1/8 of its size using <Ref>. This step requires O(nlog^m-2n) time. After O(log n) steps, only O(1) lines remain, and then we triangulate the remaining region. This gives a triangle T_i,j with the desired properties in O(nlog^m-1n) time. Now, define T_I ⋂_i,j < dπ_i,j^-1(T_i,j)⊂ L. 
Then T_I is defined by 3 k(k-1) ∈ O(1) linear polynomials, and by construction, no type I hyperplanes intersect the interior of T_I. Our goal now is to find an m-simplex T⊂ T_I such that T has the maximum point of Π|_L and no event polytopes intersect the interior of T. To achieve this, we first note that only O(n) type II hyperplanes intersect the interior of T_I. Thus, we can obtain T by repeatedly applying Chazelle's cutting algorithm. [Matoušek <cit.>] A cutting of ℝ^d is a collection C of possibly unbounded d-simplices with disjoint interiors, which together cover ℝ^d. Let S be a set of n hyperplanes in ℝ^d. Then a cutting C is a (1/2)-cutting for S if the interior of each simplex intersects at most n/2 hyperplanes. With the notation in <Ref>, a (1/2)-cutting of size O(2^d) can be computed in O(n2^d-1) time. In addition, the set of hyperplanes intersecting each simplex of the cutting is reported in the same time. In O(nlog^m-1n) time, we can find an ε(s)-translated m-simplex T⊂ L such that * the maximum point of |_L lies on T, and * no event polytope intersects the interior of T. Take T_I as defined in (<ref>). By construction, no type I hyperplane intersects the interior of T_I ⊂ L. Therefore, the set of pairs of intersecting edges of P_i and P_j does not depend on the placement ∈ T_I. Moreover, every edge of P_i intersects at most two edges of P_j. Therefore, there are at most d3 4n ∈ O(n) type II polytopes intersecting the interior of T_I. In O(n) time, we can compute the set S containing all such type II hyperplanes by sampling a placement in the interior of T_I. To find a simplex T satisfying the conditions of <Ref>, we first set T = T_I. Then we define S as the set of hyperplanes in L containing a facet of T or a type II polytope that intersects the interior of T. We can compute a (1/2)-cutting C of size O(1) for S in O(n) time using <Ref>. Using <Ref>, we can then find a simplex T' ∈ C containing the maximum point of |_L in O(nlog^m-2n) time. We set T = T' and repeat this process O(log n) times until no type II polytopes intersect the interior of T. We can find T as in <Ref> and compute Π|_T, which is a quadratic polynomial. Then we can directly compute the maximum point of Π|_T. * This is a corollary of <Ref> with R = ℝ and m = 2k-2. § SET OF MAXIMA Our next step is to determine the set M⊂ of placements ∈ that maximize the overlap area . Once we identify at least one such placement, the problem becomes easy, as every maximal overlap is the same up to translation. To accomplish this, we rely on the equality condition of the Brunn-Minkowski inequality. Let A and B be compact subsets of ℝ^2 with nonzero area. Then |1/2A+1/2B|^1/2≥1/2|A|^1/2+1/2|B|^1/2, and the equality holds if and only if A and B are homothetic. We define I() for any placement ∈, as follows: I() (P_0+_0) ∩…∩ (P_k-1+_k-1). Let ,∈ be two placements that both maximize . Then I() and I() are equivalent up to translation. Since P_0,…,P_k-1 are convex, 1/2I() + 1/2I() ⊂ I(+/2). Therefore, |1/2I() + 1/2I()| ≤|I(+/2)| ≤ |I()|. As a result, I() and I() are homothetic by <Ref>. Since |I()| = |I()|, this implies that I() and I() are equivalent up to translation. We then fix a maximal overlap I_max⊂ℝ^2. The set of all _i such that I_max⊂_i + P_i is given by the Minkowski difference (-P_i) - (-I_max) = {∈ℝ^2 | + (-I_max) ⊂ - P_i} = {∈ℝ^2 | I_max⊂ + P_i}. We define N ∏_i<m (P_i - I_max) and let π(ℝ^2)^k → be the natural quotient. The restricted map π|_N N → M is an affine isomorphism. By construction M = π(N). 
Suppose there exist two distinct , ∈ N such that = + (, , …, ) for some ∈ℝ^2. This implies that I_max = I() and I_max = I() = I() +. As a result, we must have =. Since each P_i and I_max contain at most n vertices, we can represent (-P_i) - (-I_max) using O(n) linear constraints without redundancy. This computation can be completed in O(n) time. Consequently, by employing standard linear algebra techniques, we can describe M ⊂ using O(n) linear constraints without redundancy in O(n) time. In O(n) time, we can represent M ⊂ using O(n) linear constraints without redundancy. Let _i = (x_i, y_i) for each i < m. A linear polynomial f(_0, …, _k-1) can be written as an affine combination of _1 - _0, …, _k-1 - _0 if and only if ∑_i<m∂/∂ x_i f = 0 and ∑_i<m∂/∂ y_i f = 0. Every edge of I_max should be part of an edge of P_i for some i < m. Consider two nonparallel edges. They yield two linear equations: ·_i = c and ·_j - d. Here, _i and _j are column vectors, and and are row vectors. Let ' = [ x'; y' ][ ; ]^-1[ ·_i - c; ·_j - d ]. Then ∑_i<m∂/∂ x_i' = [ 1; 0 ] and ∑_i<m∂/∂ y_i' = [ 0; 1 ], so we replace every _i by _i - ' in the linear constraints. As a result, each constraint is expressed in terms of _1 - _0, …, _k-1 - _0. <Ref> is an immediate corollary of <Ref>. § PARTITIONING WITH TWO LINES In this section, we prove <Ref>. While the main theorems can be derived solely from <cit.>, this approach is somewhat unsatisfactory. Specifically, it requires three queries at every step and prunes only 1/18 of the lines, leading to a slowdown factor of 27/8. Moreover, the statement of <cit.> is much more difficult to describe. To provide a more convenient (at least in the authors' taste) proof, we instead prove the dual statement. This is the problem of partitioning a set of points in the plane with two lines such that each quadrant contains at least 1/8 of the points. We begin by presenting Megiddo's linear time algorithm for a special case of the ham sandwich problem <cit.>. Given two finite sets of points in the plane with a total of n points, and with disjoint convex hulls, we can compute a line that bisects both sets in O(n) time. The following corollary is a slightly stronger result than Megiddo's original main theorem <cit.>. Given a set of n points in a projective plane ℙ^2, we can compute a horizontal line ℓ_0 and a non-horizontal line ℓ_1 in O(n) time, such that each closed quadrant defined by the two lines contains at least ⌊ n/4 ⌋ points in O(n) time. First, we can assume that there are no points on the line at infinity by applying the perturbation (a; b; c) ↦ (a; b; c + ε b). An appropriate value for ε can be computed in O(n) time. Additionally, we can disregard a single point at (1; 0; 0), as it is contained in all closed quadrants. Next, we identify the horizontal line that passes through the median y-coordinate of the points, denoted as ℓ_0. If ℓ_0 contains at least half of the points, we can select any non-horizontal line ℓ_1 that passes through the median point m of ℓ_0. As a result, we assume that ℓ_0 contains fewer than half of the points. We put the points above the line ℓ_0 in a set A. Moreover, we also put points on ℓ_0 from left until A has at least half of the points. Then B is the set of remaining points. Since the convex hulls of A and B are disjoint, we can apply <Ref> to compute the line ℓ_1 that simultaneously bisects both sets. Since ℓ_0 contains less than half of the points, ℓ_1 should not be horizontal. 
This divides the plane into four closed quadrants, each containing at least ⌊ n/4 ⌋ points. An intersecting aspect is that <Ref> offers a linear-time algorithm for its own weighted version. It is important to note that this approach heavily relies on the following well-established result. Given n distinct real numbers with positive weights, we can determine the weighted median of these numbers in O(n) time. Given n weighted points in a projective plane ℙ^2 with positive weights λ_0,…,λ_n-1, we can compute a horizontal line ℓ_0 and a non-horizontal line ℓ_1 in O(n) time such that each closed quadrant defined by the two lines contains at least 1/4 of the total weight. Once again, we can assume that there are no points on the line at infinity by applying perturbation (a;b;c) ↦ (a; b; c+ε b) and ignoring a single point at (1, 0, 0). Let ℓ_0 be the weighted median horizontal line. If ℓ_0 contains at least half of the total weight, then we can choose any non-horizontal line ℓ_1 passing through the weighted median point m of ℓ_0. Therefore, we assume that ℓ_0 contains less than half of the total weight. We start by putting all points above the line ℓ_0 into a set A, and adding points on ℓ_0 from left to right until A has at least half of the total weight. We modify the weight of the last point p so that the total weight of A is exactly half of the total weight, and set B as the remaining points and p with the remaining weight. Since ℓ_0 contains less than half of the total weight, any ham sandwich cut of A and B must not be horizontal. We can then find two lines ℓ_0' and ℓ_1' as in <Ref>. Let v_0 be their intersection, and let v_1 be the intersection of ℓ_1' and the line at infinity. Without loss of generality, we may assume that the y-coordinate of ℓ_0 is at most that of ℓ'_0. We then take a line ℓ_1 passing through v_0 and bisecting the weight of B. If ℓ_1 also bisects the weight of A, then this is the desired line. Otherwise, we may assume without loss of generality that the left side of ℓ_1 contains more weight. Then any ham sandwich cut of A and B must pass through the left side of ℓ_0' with respect to v_0. We can repeat this process with v_1. Then we determine which side of the line at infinity a ham sandwich cut of A and B must pass through with respect to v_1. After this, we identify one quadrant that does not intersect any ham sandwich cut of A and B. Thus, we can merge the points in that quadrant into two points, one for A and one for B, and repeat the entire process. Every step, the number of points become 3/4 and we get at most 3 new points. Thus, in O(n) time, at most 12 points remains. Then we can get a ham sandwich cut of A and B by brute force. The ham sandwich theorem implies that such a cut exists. Let A be an array of arrays A_0,…,A_n-1 of points. Suppose that for each i<m, points on A_i lie on the same horizontal line and are sorted from left to right. Then, in O(n) time, we can find ℓ_0 and ℓ_1 such that for each i<4, we can obtain a pruned array P_i of A with |P_i| ≥ |A|/8 and P_i contained in the ith quadrant. We can simply choose median points of each of A_i, and let the weight be the size of A_i. Then we can apply <Ref> and get the answer. Let (ℙ^2)^ be the dual projective space, the space parametrizing lines on ℙ^2. Consider the map (ℙ^2)^ →ℙ^2 ax+by+cz=0 ↦ (c;b;a). Then <Ref> is exactly the dual theorem of <Ref> under this map. In fact, we can do a little better if extra time is allowed. Let S be a collection of m sorted arrays. 
Given x, we can compute the rank of x in O(nlog|S|) time using binary search on each array. We can apply binary search on each array and get the answer. Let S be a collection of m sorted arrays. Then we can find the ith element of S in O(nlog^2|S|) time. Let μ be the weighted median of the medians of the arrays, where the weight is given by the size of each array. By applying binary search on each array, we can compute the rank of μ in O(nlog|S|) time. From the medians, we can then discard 1/4 of the elements of S and recursively repeat the process. Since there are O(log|S|) levels of recursion, the overall time complexity is O(nlog^2|S|). Let A be an array of arrays A_0,…,A_n-1 of points. Suppose that for each i<n, the points on A_i lie on the same horizontal line and are sorted from left to right. Then, in O(nlog^2|S|) time, we can find ℓ_0 and ℓ_1 such that for each i<4, we can obtain a pruned array P_i of A with |P_i| ≥ |A|/4 and P_i contained in the ith quadrant. The proof is almost the same as that of <Ref>. However, we need to use <Ref> for pruning points, <Ref> for bisecting B, and <Ref> for counting points of A.
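A compact sketch of the selection procedure behind this lemma (ours; the names and interface are illustrative): the pivot is the weighted median of the medians of the still-active slices, its exact rank is obtained by binary search in every array, and each round that does not terminate discards at least a quarter of the active elements.

```python
from bisect import bisect_left, bisect_right

def kth_of_sorted_arrays(arrays, k):
    """Return the k-th smallest element (0-based) of the multiset union of sorted lists."""
    lo = [0] * len(arrays)
    hi = [len(a) for a in arrays]
    while True:
        # Weighted median of the medians of the active slices (weight = slice size).
        meds = sorted((a[(l + h) // 2], h - l)
                      for a, l, h in zip(arrays, lo, hi) if h > l)
        half = sum(w for _, w in meds) / 2.0
        acc = 0.0
        for mu, w in meds:
            acc += w
            if acc >= half:
                break
        # Exact rank of the pivot mu in the full multiset, by binary search per array.
        below = sum(bisect_left(a, mu) for a in arrays)          # elements < mu
        at_or_below = sum(bisect_right(a, mu) for a in arrays)   # elements <= mu
        if below <= k < at_or_below:
            return mu
        if k < below:    # answer < mu: discard every active element >= mu
            hi = [min(h, bisect_left(a, mu)) for a, h in zip(arrays, hi)]
        else:            # answer > mu: discard every active element <= mu
            lo = [max(l, bisect_right(a, mu)) for a, l in zip(arrays, lo)]

# Example: rank 5 (0-based) in the union {0,1,...,10} is 5.
print(kth_of_sorted_arrays([[1, 3, 5, 7], [2, 4, 6], [0, 8, 9, 10]], 5))
```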
http://arxiv.org/abs/2307.02902v1
20230706102847
Colored delta-T noise in Fractional Quantum Hall liquids
[ "K. Iyer", "J. Rech", "T. Jonckheere", "L. Raymond", "B. Grémaud", "T. Martin" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.str-el" ]
Aix Marseille Univ, Université de Toulon, CNRS, CPT, Marseille, France Photons are emitted or absorbed by a nano-circuit under both equilibrium and non-equilibrium situations. Here, we focus on the non-equilibrium situation arising due to a temperature difference between the leads of a quantum point contact, and study the finite frequency (colored) noise. We explore this delta-T noise in the finite frequency regime for two systems: conventional conductors described by Fermi liquid scattering theory and the fractional quantum Hall system at Laughlin filling fractions, described by the chiral Luttinger liquid formalism. We study the emission noise, its expansion in the temperature difference (focusing on the quadratic component) as well as the excess emission noise defined with respect to a properly chosen equilibrium situation. The behavior of these quantities are markedly different for the fractional quantum Hall system compared to Fermi liquids, signalling the role of strong correlations. We briefly treat the strong backscattering regime of the fractional quantum Hall liquid, where a behavior closer to the Fermi liquid case is observed. Colored delta-T noise in Fractional Quantum Hall liquids K. Iyer, J. Rech, T. Jonckheere, L. Raymond, B. Grémaud, and T. Martin August 1, 2023 =========================================================================== § INTRODUCTION In recent years, the study of non-equilibrium noise in mesoscopic devices has generated new investigations, both on the experimental and on the theoretical side. Indeed, instead of using the standard method to impose a non-equilibrium situation by connecting the device to leads with different voltages and generating so called quantum shot noise, experimentalists have opened the field of “delta-T noise’’ by choosing instead to apply a thermal gradient and zero voltage drop to the device. In this situation, provided that electron hole symmetry is respected, a finite zero frequency non equilibrium noise can be measured while the current flowing through the device remains zero. Voltage bias induced quantum noise<cit.> has always been considered as a crucial diagnosis of quantum transport, providing complementary information about the charge of the current carriers or their statistics. Early theoretical works on delta-T noise suggest that it is also relevant to characterize nanoscopic devices.<cit.> In particular, in correlated systems such as quantum Hall devices, delta-T noise in one dimensional correlated systems clearly depends on the dimension of the operators which describe the elementary excitations of the system, suggesting that it could provide information about anyonic statistics. On the experimental side delta-T noise has been studied in atomic break junctions representing quantum point contacts,<cit.> tunnel junctions,<cit.> integer quantum Hall effect edge channels,<cit.> under a weak or a strong temperature bias. Also, it has recently been employed to study the heat transport along the edges. 
<cit.> On the theoretical side, delta-T charge noise (and in some instance heat noise<cit.>) has been already studied in a vast variety of systems, ranging from quantum point contacts/tunnel junctions,<cit.> resonant levels or quantum dots in the Kondo regime,<cit.> Fractional Quantum Hall systems,<cit.> bosonic systems and quantum spin Hall systems,<cit.> and normal metal/superconductor junctions.<cit.> All of these studies have focused uniquely on zero frequency noise, the experimental regime where the “white noise’’ has a weak dependence on the frequency because this frequency scale is sufficiently high so that 1/f noise can be neglected, but also sufficiently low to avoid specific features associated with the non equilibrium conditions which are imposed on the device. Voltage induced non-equilibrium noise at high frequency, sometimes dubbed “colored noise’’,<cit.> has been discussed and introduced theoretically about a quarter of a century ago<cit.>. It was pointed out that its measurement requires a quantum treatment of both the noise detector and the nanoscopic device under study. It is therefore considered a subtle quantity because of the necessity to distinguish emission noise, where the nanoscopic device emits microwave photons to the quantum detector, from absorption noise where the detector (which in practice has photon occupations specified by the Bose-Einstein distribution for instance) emits photons which are absorbed by the nanoscopic device. Voltage induced finite frequency (colored) noise in normal metal junctions is characterized by cusps in the emission and absorption noise located at frequencies corresponding to the Josephson frequency associated with the electron charge. Experimentally, the measurement of colored noise has for a long time shied away experimentalists because of the inherent difficulties of the measurement scheme, but some successes have been won in superconducting hybrid junctions<cit.> and with the refinement of experimental detection techniques, recently the Josephson frequency<cit.> of fractional quasiparticles of the (Laughlin) fractional quantum Hall effect was measured<cit.>, constituting the first finite frequency measurement of noise in a correlated electron system, and an alternative diagnosis of the fractional charge (as compared to the measurement of the Fano factor). The question which we want to address in the present work is simple, but the answer may not be so obvious: what is the frequency spectrum of photons emitted/absorbed from a nanoscopic device when the non-equilibrium condition is imposed solely by a temperature gradient? Does finite frequency delta-T noise have specific signatures which can be tied to the scaling dimension of the operators describing the elementary excitations – and thus their statistics ? Similar questions have been addressed in recent works <cit.> for normal metal leads connected by a quantum dot. As a starting point, we explore the physics of finite frequency delta-T noise in a (normal metal) Fermi liquid system. This will subsequently be used as a benchmark to study finite frequency delta-T noise in the fractional quantum Hall effect, the focus of this article. The paper is organized as follows: in Sec. <ref> we introduce the emission and absorption noise, as well as the excess emission noise and the thermal-like contribution of finite frequency noise; in Sec. <ref> we discuss finite frequency noise for Fermi liquids; in Sec. <ref> we focus on the Fractional Quantum Hall effect regime and we conclude in Sec. 
<ref>. § EMISSION, ABSORPTION AND EXCESS NOISE As explained in the literature,<cit.> when considering finite frequency noise, the quantum nature of the noise detector needs to be described on the same footing as the device under study. There exist typically two coupling schemes between the two circuits: an inductive coupling scheme<cit.>, where microwave photons are exchanged between the device and a resonant (LC) circuit, or a capacitive coupling scheme,<cit.> where photons emitted/absorbed by the device trigger inelastic transitions in a nearby measuring circuit where current is measured. As a result, in full generality, two distinct correlators need to be defined in order to define the physically measured noise. The emission noise describes the spectrum of microwave photons emitted to the (quantum) noise detection device: S_+ (ω) = ∫_-∞^+∞dτ ⟨δ I(0) δ I(τ)⟩ e^iωτ , where δ I(τ)=I(τ)-⟨ I(τ)⟩ describes the deviation of the current operator from the stationary current ⟨ I(τ)⟩=⟨ I⟩. The absorption noise describes the absorption of microwave photons emitted from the detector. S_- (ω) = ∫_-∞^+∞dτ ⟨δ I(τ) δ I(0)⟩ e^iωτ . They are related by the equation S^+ (ω)=S^- (-ω), which allows us to consider only the emission noise from this point onward. We focus on a situation where the two lead reservoir have the same chemical potential, while the left (right) reservoir is at temperature T_L (T_R). In this situation, and in the presence of electron/hole symmetry, no net current flows (⟨ I⟩ =0), but the (non-equilibrium) emission noise S_+(ω, T_R, T_L)≠ 0 depends on both temperatures. We now introduce the “thermal-like” contribution to the noise: S_+^th(ω,T_R,T_L) = 1/2S_+(ω,T_R,T_R) +1/2S_+(ω,T_L,T_L) which reduces exactly to the finite frequency Johnson-Nyquist thermal equilibrium emission noise when T_R=T_L. Following Ref. hasegawa21, it is then convenient to define the excess emission noise according to: Δ S_+(ω,T_R,T_L) = S_+(ω,T_R,T_L) -S_+^th(ω,T_R,T_L) where we have subtracted the thermal contributions of both the leads from the emission noise. This quantity is measurable experimentally, and reduces to the sole out-of-equilibrium contribution in the non-interacting regime, even when the transmission probability is energy-dependent. § FERMI LIQUIDS We start by analyzing the general non-equilibrium noise in a system of non-interacting fermions. Consider a two-terminal phase coherent system composed of fermionic reservoirs separated by a scattering region specified by a scattering matrix 𝒮 (which contains the amplitudes for a particle from reservoir L (R) to be transmitted/reflected in reservoir R or L; for simplicity, we choose both leads to bear only a single channel). Each reservoir is described by a Fermi distribution function: f_p (ω) = 1/e^ħω/k_B T_p + 1 . where p is the lead index. Noise in such fermionic systems is caused by the transmission of electrons from the left/right lead to the right/left lead, accompanied by the absorption or emission of photons, as depicted in Fig. <ref>. When considering emission noise, an electron (top left panel) from the tail of the left (high temperature) Fermi function can lose energy and end up in the vicinity of the Fermi level because there are free states available. This can also happen in reverse (top right panel), but to a lower extent, due to the thermal broadening of the Fermi functions. We emphasize that the latter channel for emission noise is specific to temperature-biased junctions. 
It is absent for zero-temperature, voltage-biased junctions since there are no states available below the Fermi level. The two lower panels refer to absorption noise processes and both electron transfer processes due to photon absorption are also present for pure voltage-biased junctions. Our starting point is the general formula for finite frequency emission noise:<cit.> S_+(ω) = 4 e^2/h∫ dE∑_p p'[δ_Lpδ_Lp'-s^*_L p(E) s_L p'(E-ħω)] ×[δ_Lp'δ_Lp-s^*_L p'(E-ħω) s_L p(E)] × f_p(E) [ 1 - f_p'(E-ħω) ]  . where p and p' are lead indices. The scattering matrix is described by the minimal parametrization: 𝒮=[ s_LL s_RL; s_LR s_RR; ] = [ i√(1-𝒯) √(𝒯); √(𝒯) i√(1-𝒯) ] , where 𝒯(E) is the energy-dependent transmission probability. In the context of scattering theory, assuming that the measurement frequency ω can be neglected in the scattering matrix elements [s_pp'(E-ħω)≈ s_pp'(E)] the emission noise can be split into thermal (equilibrium) and non-equilibrium (excess) contributions. The thermal-like contribution of the emission noise, given by Eq. (<ref>) reads, in the context of scattering theory: S_+^th(ω,T_R,T_L)=2e^2/h∫ dE 𝒯(E) {f_L(E) [ 1-f_L(E-ħω) ]+ f_R(E) [ 1-f_R(E-ħω) ]} while the excess emission noise given by Eq. (<ref>) reads Δ S_+(ω,T_R,T_L) = 2e^2/h∫ dE 𝒯(E) [1-𝒯(E) ] [ f_R(E)-f_L(E) ] [ f_R(E-ħω)-f_L(E-ħω) ] which naturally implies that it describes a purely off-equilibrium quantity as Δ S_+(ω,T,T)=0. Assuming particle-hole symmetry, and using the basic properties of the Fermi distribution, one can prove the following identities: Δ S_+(ω,T_R,T_L) = Δ S_+(-ω,T_R,T_L) ∫_-∞^+∞ dω Δ S_+(ω,T_R,T_L) = 0 The result of Eq. (<ref>) suggests that the excess emission noise is symmetric with respect to frequency (parity rule) so that it does not distinguish between emission and absorption processes. This symmetry property then allows us to obtain the result of Eq. (<ref>) that the excess emission noise also satisfies a sum rule, where the noise integrated over all frequencies is zero. These features of the excess emission noise will be later examined for fractional quantum Hall liquids. In the remainder of this section, we shall assume the transmission coefficient to be constant 𝒯(E)=𝒯, as the scattering theory result will be compared to the (Fermi) filling fraction ν=1 of the quantum Hall effect, where in the wide band limit, Ohm's law is satisfied and a constant transmission coefficient is implicit. Note that in this situation, at equilibrium (T_L=T_R=T), the thermal contribution of the emission noise has the analytical expression: S_+^th(ω,T,T)=4e^2/h𝒯ħω/exp(ħω/k_B T)-1 which, by definition of Eq. (<ref>), yields the usual zero frequency Johnson-Nyquist thermal noise as ω→ 0. §.§ Small temperature gradient We define the temperature difference Δ T = T_R - T_L and the average temperature T_avg = (T_R+T_L)/2. Working up to lowest order in the transmission amplitude, we ignore the 𝒯^2 term in the non-equilibrium part of the noise for later comparison with the weak backscattering regime of the fractional quantum Hall effect. The full emission noise (S_+) is plotted in Fig. <ref>, for a fixed gradient (Δ T = 10 mK) and several average temperatures. We note that in the small Δ T regime, S_+(ω, T_R,T_L) is almost equal to S_+^th(ω, T_avg, T_avg), given by Eq. (<ref>), the difference between the two being only of order O ( Δ T/T_avg). Going from the left to the right of the plot, S_+ corresponding to different T_avg are equal for large, negative ω and decrease linearly. 
As we get closer to ω = 0, S_+ keeps decreasing, but the curves corresponding to different T_avg branch off and the curves with higher T_avg decay at a slower rate. The temperature-dependent decay continues for ω >0 and eventually, for large, positive ω, all the curves vanish. These features can be understood as a consequence of the thermal broadening of the Fermi distributions, by looking at Fig. <ref>, where ω > 0 corresponds to subfigures (i), (ii) (emission processes) and ω < 0 to subfigures (iii), (iv) (absorption processes). A greater number of higher energy states are occupied as T_avg is increased, hence, for ω > 0, S_+ decays slower as a function of frequency until it ultimately vanishes – corresponding to energies where the state occupation is negligible. Likewise, for ω < 0, the slower decay of S_+ for higher T_avg can be understood in a similar fashion. For large, negative ω, the distinction between Fermi distributions corresponding to different T_avg is negligible, and the noise is essentially the same, caused by the absorption of high-frequency photons by the low energy states. This picture is better understood from the inset of Fig. <ref>, which shows the difference between the emission noise at a given temperature and the same quantity evaluated at zero temperature. This reflects precisely the change in the occupation of the levels due to a non-zero T_avg. As pointed out earlier, the non-equilibrium consequences of the temperature difference are completely masked by the equilibrium thermal noise for Δ T ≪ T_avg. This can also be checked by plotting the emission noise for a fixed T_avg and different Δ T, where one finds that the curves almost all collapse onto the pure equilibrium thermal noise. This motivates us to look at the excess emission noise (Δ S_+) given by Eq. (<ref>), which has been designed specifically to get rid of the thermal contributions in the non-interacting regime and isolate the non-equilibrium contributions to the noise arising from the temperature difference <cit.>. Δ S_+ is displayed in Fig. <ref>, the top panel showing the excess emission noise for a fixed temperature gradient and several average temperatures, while the bottom panel corresponds to a fixed average temperature but several values of the temperature gradient. In all cases, Δ S_+ is characterized by a central peak at ω=0, where the noise is positive. Indeed, for small frequencies, the Δ T-biased system is noisier than the corresponding equilibrium system averaged over the two temperatures. Δ S_+ then decreases gradually, bearing negative values for intermediate positive/negative frequencies, reaching a minimum whose position scales with the average temperature. Negative noise in the intermediate frequency regime suggests that there is less noise in the Δ T non-equilibrium scenario compared to an equilibrium situation of equal temperatures on both the leads. Finally, for large positive or negative frequencies, the excess noise vanishes, meaning that the temperature difference does not modify the noise substantially compared to the equilibrium noise in this regime. The change in the sign of Δ S_+ for different frequency regimes can again be understood as a consequence of the difference in the occupation of the left and right Fermi leads. In the delta-T biased regime, for small frequencies, there is a higher number of processes contributing to the noise compared to an equilibrium situation, making the excess noise positive.
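This sign structure is easy to reproduce numerically. The following minimal sketch (ours, not the calculation behind the figures) evaluates the constant-transmission expression for Δ S_+ given above on a frequency grid, in units where ħ = k_B = e = 1 and with the overall prefactor 2e^2/h dropped; it recovers the positive central peak, the negative values at intermediate frequencies, and the parity and sum rules.

```python
import numpy as np

def fermi(E, T):
    # Fermi-Dirac distribution at zero chemical potential.
    return 1.0 / (np.exp(E / T) + 1.0)

# Uniform energy grid; a plain Riemann sum is accurate enough for this illustration.
E = np.linspace(-60.0, 60.0, 24001)
dE = E[1] - E[0]

def delta_S(omega, T_R, T_L, trans=0.1):
    # Excess noise ~ T(1-T) * int dE [f_R - f_L](E) [f_R - f_L](E - omega).
    g = fermi(E, T_R) - fermi(E, T_L)
    g_shift = fermi(E - omega, T_R) - fermi(E - omega, T_L)
    return trans * (1.0 - trans) * np.sum(g * g_shift) * dE

T_L, T_R = 1.0, 1.5                      # pure thermal bias, no voltage
w = np.linspace(-15.0, 15.0, 301)
dS = np.array([delta_S(x, T_R, T_L) for x in w])

print("DS(0)                    =", delta_S(0.0, T_R, T_L))        # positive central peak
imin = int(np.argmin(dS))
print("most negative value      =", dS[imin], "at w =", w[imin])   # intermediate |w|
print("parity max|DS(w)-DS(-w)| =", float(np.max(np.abs(dS - dS[::-1]))))
print("sum rule int DS dw       =", float(np.sum(dS) * (w[1] - w[0])))  # ~ 0
```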
By contrast, for intermediate frequencies, there are fewer processes contributing to the noise, giving a negative excess noise. As predicted by the parity rule of Eq. (<ref>), Δ S_+ is an even function of frequency, and the sum rule Eq. (<ref>) has been checked numerically to be satisfied. We note that for a fixed Δ T, the peak and minima are more pronounced when the average temperature is small. This goes together with an overall spread in frequency which increases linearly as T_avg increases. However, for a fixed T_avg and increasing Δ T, while the spread of the spectrum remains the same, the size of the peak in Δ S_+ increases quadratically. It follows that the spread in frequency of the Δ S_+ spectrum seems to be governed by the average temperature T_avg of the system, while the magnitude of the excess noise, which reflects how far the noise is from equilibrium, seems to be largely dictated by the temperature difference Δ T. We express this quantitatively as Δ S_+(ω, T_R, T_L) ∼Δ T^2/T_avg𝒮( ω/T_avg). §.§ Large temperature gradient We next consider a non-equilibrium scenario where T_R ≪ T_L, such that we can essentially consider T_R ∼ 0. Our consideration of this regime is motivated by its relatively easy experimental accessibility.<cit.> We again find that the decay rate of the emission noise is controlled strongly by the average temperature of the system (see Fig. <ref>). The higher the average temperature, the slower the decay. The excess emission noise in this large temperature difference regime displays a behavior analogous to the small Δ T case. Again, the spread of the excess noise spectrum depends strongly on the average temperature while the magnitude of the noise is fixed by the temperature difference, as can be seen in Fig. <ref>. However, in contrast with the small Δ T case, S_+ may substantially differ from S_+^th in this regime since, for large enough Δ T, the magnitude of Δ S_+ may be comparable to that of S_+^th. § FRACTIONAL QUANTUM HALL EFFECT §.§ Luttinger Liquid model We now turn to a Hall bar in the fractional quantum Hall (FQH) regime, with a Laughlin filling factor, i.e., ν = 1/(2n + 1) (n ∈ℤ^+). We want to analyze the behavior of delta-T noise in these systems, which constitutes the central part of this work. FQH systems host edge states that can be described by a chiral Luttinger liquid Hamiltonian given by<cit.> H_0 = v_F/4πν∫ dx[ (∂_xϕ_R)^2 + (∂_x ϕ_L)^2 ] where ϕ_R/L are chiral bosonic fields that describe the right/left moving modes, propagating with velocity v_F. The bosonic fields are quantized by the commutation relation [ϕ_R/L(x), ϕ_R/L(y)] = ± iπsgn(x-y) and are related to the quasiparticle operators on the edge through the identity: ψ_R/L(x,t) = U_R/L/√(2π a)e^± ik_Fxe^-i√(ν)ϕ_R/L(x,t), where a is a short-distance cutoff, U_R/L are the Klein factors, and k_F the Fermi momentum. We further equip the Hall bar with a quantum point contact (QPC), placed at position x=0, allowing tunneling between the counter-propagating edges. Working in the weak backscattering regime, where quasiparticles are allowed to tunnel between the edges, we need to add a tunneling term to the Hamiltonian H_WB(t) = Γ_0 ψ_R^†(0,t) ψ_L(0,t) + H.c. where Γ_0 is the tunneling amplitude. With this, the tunneling current operator can be calculated to be I_T(t) = ie^*Γ_0 ψ_R^†(0,t) ψ_L(0,t) + H.c. where e^* = ν e is the quasiparticle charge.
We compute the delta-T emission noise associated with the backscattering current at the QPC using the Keldysh formalism, to lowest order (Γ_0^2) in the tunneling amplitude:<cit.> S_+(ω, T_R, T_L) = ( e^*Γ_0/ħπ a)^2 ∫ dτ  e^iωτ e^ν𝒢_R(-τ) + ν𝒢_L(-τ) where T_R, T_L are the temperatures at the right- and left-moving edges respectively, ω is the frequency at which the noise is measured and 𝒢_R/L are the finite-temperature bosonic Greens functions of the bosonic fields ϕ_R/L, typical of the chiral Luttinger liquids modelling the FQHE: 𝒢_R/L(τ) = ln[sinh(iπk_B/ħ T_R/Lτ_0 )sinh(πk_B/ħ T_R/L(iτ_0 - τ) )] with τ_0 = a/v_F being a short time cutoff. For T_R = T_L = T_avg in Eq. (<ref>), the thermal equilibrium emission noise can be evaluated analytically and is given by S^th_+(ω, T_avg, T_avg) = ( e^*Γ_0/ħπ a)^2 τ_0 ( 2π k_B T_avg/ħτ_0)^2ν -1 ×exp(-ħω/2 k_B T_avg) |Γ( ν + i ħω/2 π k_B T_avg)|^2/Γ(2ν) §.§ Small temperature gradient We now discuss the properties of delta-T noise in the strongly correlated regime of the Laughlin fractional quantum Hall effect. The delta-T emission and excess noise, for the weak backscattering regime where anyons tunnel across the QPC, are plotted in Fig. <ref>, for several values of the fractional filling factors ν=1/3,1/5,1/7. For the sake of comparison with the Fermi liquid results of the previous section, we use here the same convention of excess emission noise, which was defined in Eq. (<ref>). Note that, although this definition ensures that, in the non-interacting regime, all thermal contributions are filtered out, leaving only the purely non-equilibrium contributions to the noise, such a cancellation is not guaranteed in the FQH regime and may only be partial. In the FQH regime, similarly to the Fermi liquid case, the full emission noise (S_+) is almost equal to the equilibrium thermal noise (S_+^th), given in Eq. (<ref>). However, the general behavior is quite different from that of the Fermi liquid case, as the emission noise now shows a central asymmetric peak at small negative frequencies, then decreases for large positive/negative frequencies. The sharp decrease for positive frequencies is reminiscent of the Pauli blocking which restricts the emission of photons due to the presence of a Fermi sea. On the other hand, the slow decrease of S_+ for negative frequencies has no Fermi liquid equivalent. This behavior at high frequency can be readily understood by considering the asymptotics of Eq. (<ref>). For large, positive frequencies, one has S^th_+(ω→∞, T_avg, T_avg) ∼ω^2ν -1exp(-ħω/2 k_B T_avg), thus explaining the rapid exponential decay with frequency. Whereas in the limit of large negative frequency, it reduces to a simple power law in ω given by S^th_+(ω→ -∞, T_avg, T_avg) ∼ω^2ν -1. This power law behavior is directly related to the scaling dimension of the tunneling operator. It has been checked numerically. Interestingly, the noise spectrum always satisfies the inequality S_+(-ω, T_R, T_L) ≥ S_+(ω, T_R, T_L), independently of the temperatures of the incoming edge states. This property can be proven exactly in the case of Fermi liquid leads and holds irrespective of the details of the junction or the temperature difference. It amounts to stating that the rate at which the system absorbs energy from the electromagnetic field is always greater than or equal to the rate at which it transfers energy to the field. 
<cit.> This is typically interpreted in terms of processes involving electrons and holes being scattered in the conductor before recombining to emit or absorb the energy of a photon. <cit.> It is quite striking to observe that this generalizes to the case of FQH devices, suggesting a similar interpretation based on quasiparticle-quasihole pairs. Contrary to the Fermi liquid case, the excess emission noise Δ S_+ is asymmetric in frequency for nontrivial Laughlin fractions, which constitutes another example of the role of electronic correlations in the FQH regime. This breaks the parity rule of Eq. (<ref>), departing from the Fermi liquid picture. However, and quite importantly, the excess emission noise still satisfies the sum rule of Eq. (<ref>), despite its asymmetry in frequency. This can be readily understood upon integrating the expression of Eq. (<ref>) over the whole frequency range, noticing from Eq. (<ref>) that 𝒢_R/L(τ=0)=0, so that the integrated emission noise reduces to a constant, independently of the temperature of the leads. This result for the sum rule has also been checked numerically. We now look at the behaviour of Δ S_+ focusing on the filling factor ν = 1/3 FQH, first as a function of the average temperature (T_avg) for a fixed temperature difference (Δ T), and then as a function of Δ T for a fixed T_avg. The results are displayed in Fig. <ref>. First, we find that the (Δ T, T_avg) dependence of Δ S_+ for ν = 1/3 FQH, is largely similar to that of the Fermi liquid regime. Indeed, even in this strongly correlated system, we find that the spread in frequency of the noise spectrum is a function of the average temperature of the entire system, whereas the magnitude of the excess noise depends primarily on the temperature difference between the two FQH edges. Other filling fractions display the same behavior (not shown.) This behavior can be described quantitatively by an expression of the form Δ S_+( ω, T_R, T_L) ) ∼ T^2ν -3Δ T^2 𝒮( ω/T_avg), for small Δ T. As Δ T increases, the higher-order contributions become more important and we observe deviation from this behavior. Comparing the results of Fig. <ref> to those of Fig. <ref>, one first notices an overall sign flip of the excess emission noise with a similar-looking structure involving three extrema. The central zero-frequency peak of the Fermi liquid case is now shifted toward negative frequency signaling a strong reduction of the absorption. While the side peaks are also present, they differ from the Fermi liquid case in that they are no longer symmetric, occurring at frequencies that are seemingly unrelated, with a bigger amplitude at positive frequencies, corresponding to a stronger enhancement of the emission. This behavior is qualitatively reminiscent of the one observed for a resonant level asymmetrically coupled to Fermi liquids <cit.>, and as such could be related to the nontrivial energy dependence of the scattering at the QPC. For completeness, we now consider the strong backscattering regime of the FQHE, where electrons, instead of anyons tunnel across the QPC. This regime can be accessed by invoking the standard duality properties of the chiral Luttinger liquid description of the edge states, which amounts to simply replacing ν→ 1/ν and e^* → e in Eq. (<ref>). We show the emission and excess noise for several values of the filling factors in Fig. <ref>. 
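As an aside, the asymptotics quoted above are easy to check against the closed equilibrium expression for S_+^th. The sketch below (ours; ħ = k_B = 1 and all frequency-independent prefactors dropped) evaluates the Gamma-function formula at ν and, via the ν → 1/ν duality just invoked, at 1/ν, and extracts the large-negative-frequency power law ω^2ν-1 from a log-log slope.

```python
import numpy as np
from scipy.special import loggamma, gamma

def S_th(w, T, nu):
    # Equilibrium emission noise shape: T^(2 nu - 1) exp(-w/2T) |Gamma(nu + i w/(2 pi T))|^2 / Gamma(2 nu),
    # with hbar = k_B = 1 and constant prefactors dropped.
    x = w / (2.0 * np.pi * T)
    abs_gamma_sq = np.exp(2.0 * np.real(loggamma(nu + 1j * x)))
    return (2.0 * np.pi * T) ** (2.0 * nu - 1.0) * np.exp(-w / (2.0 * T)) * abs_gamma_sq / gamma(2.0 * nu)

T = 1.0
w_abs = np.array([100.0, 1000.0])      # large |w| on the absorption side (w < 0)
for nu, label in [(1.0, "Fermi liquid (nu = 1)"),
                  (1.0 / 3.0, "weak backscattering (nu = 1/3)"),
                  (3.0, "strong backscattering (nu -> 1/nu = 3)")]:
    s = S_th(-w_abs, T, nu)
    slope = np.log(s[1] / s[0]) / np.log(w_abs[1] / w_abs[0])
    print(f"{label:40s} slope at w -> -inf: {slope:+.3f} (expected {2 * nu - 1:+.3f})")
```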
The emission noise S_+ in the strong backscattering regime is closer to the one of Fermi liquids than that of the FQH weak backscattering regime, rapidly decaying for ω > 0 and growing for ω < 0, without any significant features. For negative frequencies, the emission noise now grows as a power law of |ω|^2/ν-1, as opposed to the simple linear-in-frequency behavior observed in the Fermi liquid case. Again, a simple interpretation of this behavior of the emission noise is hard to come by, but one may point out that this is associated with the scaling dimension of the tunneling operator, which now involves electrons rather than anyons. The excess noise Δ S_+ in the strong backscattering regime is rather featureless, only showing a reduction of the absorption. Note that a careful examination of the excess noise at extremely high frequency does show a peculiar behavior. This is actually an artifact of the calculation as it happens for frequencies beyond the scale set by the cutoff of the theory, namely ω≫ v_F/a. These results are unphysical and only signal a breakdown of the chiral Luttinger liquid description at such high energies. Lastly, we note that the Luttinger liquid results map back exactly to the Fermi liquid results if one sets ν = 1, as expected. This has been dealt with analytically in Appendix <ref>. §.§ Temperature gradient expansion Unfortunately, Eq. (<ref>) is not analytically tractable in its full form, motivating us to treat it perturbatively in the small temperature gradient limit. Following the zero frequency delta-T noise analysis of Ref. rech20, starting from T_R/L = T_avg±Δ T/2 where Δ T ≪ T_avg, we expand the exponentiated Green's function perturbatively up to second order in Δ T/2, giving us (we assume ħ = k_B =1 in this section to declutter the equations) S_+(ω, T_R, T_L) = S_0(ω,T_avg)[ 1 + (Δ T/2T_avg)^2 C_2(ω,T_avg) ] where S_0(ω, T_avg) = ( e^*Γ_0/π a)^2 ∫ dτ e^iωτ[sinh(iπ T_avgτ_0 )sinh(π T_avg(iτ_0 + τ) )]^2ν and C_2(ω, T_avg) = 1/S_0(T_avg,ω)( e^*Γ_0/π a)^2 ∫ dτ  e^iωτ[sinh(iπ T_avgτ_0 )sinh(π T_avg(iτ_0 + τ) )]^2ν[ ν(π(iτ_0 +τ))^2 /sinh(π T_avg(iτ_0 +τ)) - ν(iπτ_0)^2 /sinh(iπτ_0 T_avg)] Here, S_0(ω, T_avg) ≡ S^th_+(ω, T_avg, T_avg) is just the equilibrium thermal noise, already evaluated in Eq. (<ref>). Both the integrals of Eq. (<ref>) can also be evaluated analytically. The details of the calculation are summarized in Appendix <ref>. The result for C_2 then reads: C_2(ω/2π T_avg) = ν[ -1 + |ν + iω/2π T_avg|^2/2ν (2ν +1)( π^2 +4 πIm[ψ( ν + 1 + iω/2π T_avg) ] . . +4 . . {Im[ψ( ν + 1 + iω/2π T_avg) ] }^2 - 2 Re[ψ'( ν + 1 + iω/2π T_avg) ] ) ] where ψ is the digamma function and prime indicates a derivative. The C_2 coefficient in Eq. (<ref>) is obtained directly from an expansion of the emission noise, following in that respect the convention adopted in earlier works. <cit.> This corresponds to a slightly different definition of the excess noise compared to the one used so far and defined in Eq. (<ref>). It corresponds to an excess noise where the reference noise is chosen to be the equilibrium noise at the average temperature, i.e. C_2 = [S_+(ω, T_R, T_L) - S_+(ω, T_avg)]/S_+(ω, T_avg). While one could equally introduce an equivalent coefficient by expanding in Δ T the excess noise Δ S_+ defined in Eq. (<ref>), this is merely a matter of convention, and ultimately allows to highlight different properties. 
Here, we resort to the present choice since it readily distinguishes the weak and strong backscattering regime, <cit.> which is not so clear with other conventions. Interestingly, it turns out that C_2(ω, T_avg) actually does not depend separately on frequency and temperature, but rather in a combined way, being a function of the ratio ω/T_avg. The behavior of this C_2 coefficient, which encodes the relevant ”non-equilibrium" information, is plotted in Fig. <ref> as a function of θ = ħω/(2 π k_B T_avg) in the case of weak backscattering at the QPC. There is a clear distinction between the behavior for the Laughlin fractions and the one for the trivial integer case. While for ν=1, the C_2 coefficient increases monotonically, it displays a dip, crossing into negative values for frequencies close to zero for the Laughlin fractions. The value of the minimum is only marginally affected by the filling factor (within the Laughlin sequence), however the range of frequency over which C_2 < 0 is ν-dependent and shrinks with the filling factor. In all cases, the C_2 coefficient grows as a power-law at high frequency, nevertheless the contribution to the emission noise is washed out by the exponential decay of the equilibrium thermal noise. In the strong backscattering regime, which is accessed by making use of the duality properties and simply replacing ν→ 1/ν in Eq. (<ref>), we find that the curves for Laughlin FQH show a strong resemblance to the ν = 1 curve, monotonically increasing as a function of ω/T_avg with no dips to negative values as shown in Fig. <ref>. § CONCLUSIONS This work dealt with the finite frequency spectrum of photons emitted from a thermal gradient generated non equilibrium transport in both Fermi and quantum Hall junctions. The finite frequency noise was characterized here by the emission noise as well as with the excess emission noise, which has solely non equilibrium origins in the Fermi picture as thermal noises of each reservoirs are subtracted. For electron-hole symmetric Fermi junctions, the Landauer-Büttiker formalism can be employed, and the excess noise does not distinguish between emission and absorption processes as it is an even function of frequency. The excess emission noise of Fermi liquid thus has a central positive peak, and changes sign at moderate frequencies, acquires a minimum and then vanishes to zero. The height of the peak is controlled by the temperature difference and its width is determined by the average temperature. For a QPC in the fractional quantum Hall regime, we employed the chiral Luttinger liquid theory to compute in the weak backscattering regime the emission and excess noise when both edges have different temperatures. We started with the weak backscattering regime which is dominated by quasiparticle tunneling where new physics is expected. While the emission noise vanishes for positive frequencies, it also does for negative frequencies which departs strongly from the Fermi liquid picture. The emission noise has a central, asymmetric peak for small negative frequencies. The excess noise contains a minimum for small negative frequencies, in sharp contrast with the Fermi liquid case, and it is also asymmetric. The excess noise can be explored by varying both the average temperature and the temperature gradient. 
The emission noise in the strong back-scattering regime, where only electrons can tunnel between the two semi infinite Hall fluids, resembles strongly the Fermi liquid case (it decays as positive frequencies and grows for negative frequencies) but it follows a Luttinger liquid power law (rather than the linear behavior predicted by Fermi liquid theory) at negative frequencies. It seemed judicious to follow Ref. rech20 and explicitly perform a thermal gradient expansion of the emission noise (in the weak backscattering regime) to characterize the coefficient C_2 of the quadratic term in the gradient, which was obtained analytically as a function of the ratio between the frequency and the average temperature. C_2 is negative and has a minimum for small negative frequencies (in accordance with the zero frequency result). It grows for positive frequencies and decays to zero for negative frequencies. C_2 plotted as a function of frequency allows to further point out the differences with Fermi liquid theory. In the strong backscattering regime C_2 behaves roughly as in the Fermi liquid case, monotonically increasing, with no minima of negative contributions. This work does open the path to the investigation of finite frequency noise in mesoscopic systems driven out of equilibrium by a thermal gradient. While the regime of Fermi liquids was used here primarily as a benchmark and a point of comparison, we believe that our study of the strongly correlated regime of the fractional Hall effect deserves attention, as on many instances, departures from the Fermi liquid picture are observed. This work received support from the French government under the France 2030 investment plan, as part of the Initiative d'Excellence d'Aix-Marseille Université - A*MIDEX. We acknowledge support from the institutes IPhU (AMX-19-IET-008) and AMUtech (AMX-19-IET-01X). § CONNECTION BETWEEN SCATTERING THEORY AND THE INTEGER QUANTUM HALL EFFECT In this appendix, we show that the emission noise of a QPC between two IQH edges expressed in the langauge of Luttinger liquids is equivalent to the noise in the transmission of electrons between Fermi liquids described by scattering theory. We start from the emission noise of Luttinger liquids with ν = 1 S_+(ω, T_R, T_L) = ( e^*Γ_0/π a)^2 ∫ dτ  e^iωτexp[ 𝒢_R(-τ) + 𝒢_L(-τ) ] where the Greens function 𝒢_R/L(τ) is given by 𝒢_R/L(τ) = -ln[ sinh[iπ T_R/Lτ_0 ]/sinh[π T_R/L(iτ_0 + τ)]] To proceed, we define D_R/L(ϵ) = ∫ dτ e^iϵτD_R/L(τ ) where D_R/L(τ) = 1/2π a sinh[iπ T_R/Lτ_0 ]/sinh[π T_R/L(iτ_0 + τ)] Now S_+(ω, T_R,T_L) = (2eΓ_0)^2 ∫ dτ e^iωτ D_R(τ) D_L(τ) Inverting the Fourier transform relation in Eq. (<ref>) S_+(ω, T_R,T_L) = (2eΓ_0)^2 ∫ dτ e^iωτ∫dω_R/2πe^-i ω_RτD_R(ω_R) ∫dω_L/2πe^-i ω_LτD_L(ω_L) = (2eΓ_0)^2 ∫ dτdω_R/2πdω_L/2π 2πδ(ω-ω_R-ω_L) D_R(ω_R) D_L(ω_L) We now want to calculate D_R/L(ω_R/L) D_R/L(ϵ) = 1/2π a∫ dτ  e^iϵτ[sinh(iπ T_R/Lτ_0 )/sinh[π T_R/L(iτ_0 + τ) ]] Setting π T_R/Lτ_0 ≡α and π T_R/Lτ≡ u, we can coax this integrand into the following expression D_R/L(ϵ) = 1/π T_R/L1/2π a e^ϵα/π T_R/L∫ du e^-iϵ(u-iα)/T_R/L sinh(iα)/sinh(iα - u) = 1/π T_R/L1/2π a e^ϵα/π T_R/L A_1/2(ϵ/T_R/L) where A_ν(z) = 1/2(sinα)^2νe^-π/2z| Γ(ν + iz/2 )|^2/Γ(2ν) Giving us, D_R/L(ϵ) = 1/π T_R/L1/2π a e^ϵα/π T_R/L sinα e^-π/2z| Γ(1/2 + iz/2)|^2 Using the identities | Γ(1/2 + iz/2)| = √(π/cosh(π z)), and then τ_0 = a/v_F and taking the limit a⟶ 0, we end up with D_R/L(ϵ) = π/2v_F1/1 + e^πϵ/T_R/L ≡π/2v_Ff_R/L(πϵ) Plugging this back into Eq. 
(<ref>), we have S_+(ω, T_R,T_L) = 1/2(eΓ_0/v_F)^2 ∫ dE f_R(E) f_L(ω-E) Playing around with this result, we end up with the following expression for the noise S_+(ω, T_R,T_L) = e^2 (Γ_0/2v_F)^2 ∫ dE { f_L(E) [1-f_L(E-ω) ] . . + f_R(E) [1-f_R(E-ω)]+ [f_R(E-ω) - f_L(E-ω)] [f_R(E) - f_L(E) ] } which is equal to the scattering theory noise expression if we make the identification π(Γ_0/2v_F)^2 ≡𝒯. § C2 ANALYTICS Here, we briefly go over the evaluation of the integrals in Eqs. (<ref>) and (<ref>). The perturbatively expanded noise is given by S_+(ω, T_R, T_L) = ( e^*Γ_0/π a)^2 ∫ dτ  e^iωτ[sinh(iπ T_avgτ_0 )sinh(π T_avg(iτ_0 + τ) )]^2ν ×{ 1 + (Δ T/2T_avg)^2 [ ν(π(iτ_0 +τ))^2 /sinh(π T_avg(iτ_0 +τ)) - ν(iπτ_0)^2 /sinh(iπτ_0 T_avg)] } Absorbing the constants into the variables, the key integral to be evaluated takes the following generic form A_ν(z) = ∫ du e^-iz(u-iα)[sinh(iα)/sinh(iα - u)]^2ν Rewriting the hyperbolic sine in terms of exponentials, the integral can be recast as A_ν(z) = (1- e^-2iα)^2νe^-zα∫ du e^-(2ν+iz)u/[ e^-2u + e^i(π-2α)]^2ν This integral can be evaluated using Ref. gradshteyn14 (3.314) which gives us ∫ dx e^-μ̃ x/[ e^-x/γ̃ + e^β̃/γ̃]^ν̃ = γ̃exp[ β̃(μ̃ - ν̃/γ̃)]Γ(ν̃-γ̃μ̃)Γ( γ̃μ̃)/Γ( ν̃) provided the conditions Re( ν̃/γ̃) > Re μ̃ > 0 and |Im β̃| < π Re γ̃ are satisfied. Here, Γ(x) is the Euler-Gamma function. For the integral in Eq. (<ref>), we can identify β̃ = i(π/2 - α), γ̃ = 1/2, ν̃ = 2ν and μ̃ = 2ν + i z, which satisfies all the conditions. We then finally have A_ν(z) = 1/2(2sinα)^2νe^-π/2z| Γ( ν + iz/2)|^2/Γ(2ν) The first integral in Eq. (<ref>) can be evaluated, after trivial manipulations, directly using Eq. (<ref>). The second term in the second integral can be evaluated similarly. The first term in the second integral of Eq. (<ref>) is a bit more involved and is related to the second derivative of Eq. (<ref>) with respect to z. This can be seen from Eq. (<ref>), where a z-derivative will bring down a factor (u-iα). The second derivative of Eq. (<ref>) can be expressed as ∂_z^2A_ν(z) = 1/2(2sinα)^2ν/Γ(2ν)e^-π/2z| Γ( ν + iz/2) |^2 1/4{π^2 - [ ψ(ν + iz/2) - ψ(ν - iz/2) ]^2 -2iπ[ ψ( ν + iz/2)-ψ( ν - iz/2)] -[ ψ'( ν + iz/2)+ψ'( ν - iz/2)] } where ψ(z) is the digamma function and ψ'(z) its z-derivative. Finally, using Eqs. (<ref>) and (<ref>), we can express the full noise, in the limit τ_0 ⟶ 0 as S_+(ω, T_R, T_L) = ( e^*Γ_0/π a)^2τ_0 [2πτ_0 T_avg]^2ν -1e^-ω/2T_avg| Γ( ν + iω/2π T_avg)|^2/Γ(2ν) ×{ 1+ ( Δ T/2 T_avg)^2 ν[ -1 + |ν + iω/2π T_avg|^2/2ν (2ν +1)( π^2 + 4πIm[ψ( ν + 1 + iω/2π T_avg) ] . . . . . . + 4 {Im[ ψ( ν + 1 + iω/2π T_avg) ]}^2 - 2Re[ψ'( ν + 1 + iω/2π T_avg) ] ) ] } From this, we can extract the coefficient C_2 which isolates the non-equilibrium contributions in second order of the temperature difference C_2(ω/2π T_avg) = ν[ -1 + |ν + iω/2π T_avg|^2/2ν (2ν +1)( π^2 + 4πIm[ψ( ν + 1 + iω/2π T_avg) ] . . . . + 4 {Im[ ψ( ν + 1 + iω/2π T_avg) ]}^2 - 2Re[ψ'( ν + 1 + iω/2π T_avg) ] ) ]
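As a usage sketch (ours) of the final result, the relative delta-T correction to the equilibrium emission noise follows from the last two equations as S_+/S_0 - 1 = (Δ T/2T_avg)^2 C_2(ω/2π T_avg); the snippet below evaluates it for a 10% temperature mismatch, with the same prefactor grouping assumed earlier.

```python
# Sketch: relative delta-T correction to the equilibrium emission noise,
# S_+/S_0 - 1 = (dT / 2 T_avg)^2 * C_2(omega / 2 pi T_avg), per the result above.
from mpmath import mp, mpc, pi, psi

mp.dps = 25

def C2(theta, nu):
    z = mpc(nu + 1, theta)
    dig, trig = psi(0, z), psi(1, z)
    bracket = pi**2 + 4*pi*dig.imag + 4*dig.imag**2 - 2*trig.real
    return nu*(-1 + abs(mpc(nu, theta))**2/(2*nu*(2*nu + 1))*bracket)

def relative_correction(omega, T_R, T_L, nu):
    T_avg, dT = (T_R + T_L)/2, T_R - T_L
    return (dT/(2*T_avg))**2 * C2(omega/(2*pi*T_avg), nu)

if __name__ == "__main__":
    # illustrative numbers: T_R/L = 1 +/- 0.05 (so dT/T_avg = 10%), nu = 1/3
    for omega in (-2, -0.5, 0, 0.5, 2):
        print(omega, float(relative_correction(omega, 1.05, 0.95, mp.mpf(1)/3)))
```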
http://arxiv.org/abs/2307.00281v1
20230701092717
Cosmological constraints on dark energy in $f(Q)$ gravity: A parametrized perspective
[ "A. Mussatayeva", "N. Myrzakulov", "M. Koussour" ]
gr-qc
[ "gr-qc" ]
[Email: ]a.b.mussatayeva@gmail.com Department of Physics and Chemistry, S. Seifullin Kazakh Agrotechnical University, Astana 010011, Kazakhstan [Email: ]nmyrzakulov@gmail.com L. N. Gumilyov Eurasian National University, Astana 010008, Kazakhstan. Ratbay Myrzakulov Eurasian International Centre for Theoretical Physics, Astana 010009, Kazakhstan. [Email: ]pr.mouhssine@gmail.com Quantum Physics and Magnetism Team, LPMC, Faculty of Science Ben M'sik, Casablanca Hassan II University, Morocco. In this paper, we focus on the parametrization of the effective equation of state (EoS) parameter within the framework of f(Q) symmetric teleparallel gravity. Here, the gravitational action is represented by an arbitrary function of the non-metricity scalar Q. By utilizing a specific parametrization of the effective EoS parameter and a power-law model of f(Q) theory, namely f(Q)=β Q^( m+1) (where β and m are arbitrary constants), we derive the cosmological solution of the Hubble parameter H(z). To constrain model parameters, we employ recent observational data, including the Observational Hubble parameter Data (OHD), Baryon Acoustic Oscillations data (BAO), and Type Ia supernovae data (SNe Ia). The current constrained value of the deceleration parameter is found to be q_0=-0.50^+0.01_-0.01, indicating that the current Universe is accelerating. Furthermore, we examine the evolution of the density, EoS, and Om(z) diagnostic parameters to deduce the accelerating nature of the Universe. Finally, we perform a stability analysis with linear perturbations to confirm the model's stability. Cosmological constraints on dark energy in f(Q) gravity: A parametrized perspective M. Koussour0000-0002-4188-0572 August 1, 2023 =================================================================================== § INTRODUCTION In modern cosmology, the observational aspect is critical. The introduction of new tools in observation causes cosmologists to reassess the formulation of gravitational theories regularly. With the discovery of Hubble, Einstein was forced to remove the cosmological constant from his field equations in General Relativity Theory (GRT). The observation of Type Ia supernovae (SNe Ia) in 1998 forced cosmologists to abandon the hypothesis of decelerating Universe expansion <cit.>. Since then, the Baryon Acoustic Oscillations (BAO) <cit.>, Cosmic Microwave Background (CMB) <cit.>, and Large Scale Structure (LSS) <cit.>, and many more measurements have provided evidence for the Universe's accelerated expansion. Thus, it is critical to include observable data while developing a theoretical cosmological model of the Universe. The accelerated expansion of the Universe is a key characteristic of modern cosmology. The Einstein field equations in GRT invariably result in a decelerating expansion of the Universe with the normal matter constituent. The accelerating expansion can be characterized by introducing a new constituent to the energy-momentum tensor part of the field equations or by making some changes to the geometrical part. Using these concepts, recent research has developed a variety of cosmological models of the Universe that explain the accelerating expansion. The notion of dark energy (DE) has recently gained prominence. DE is an exotic energy constituent with high negative pressure that explains numerous data and addresses several significant issues in modern cosmology. 
The second alternative is to suppose that GRT fails at large scales and that gravity may be explained via a more general action than the Einstein-Hilbert action. In general, modified theories of gravity can be divided between models following the GRT structure with null torsion and non-metricity (such as the f(R) and f(R,T) theories <cit.>), models with torsion T (the teleparallel equivalent of GRT) <cit.>, and models with non-metricity Q (the symmetric teleparallel equivalent of GRT) <cit.>. Here, we will examine the f(Q) theory, an extension of the symmetric teleparallel equivalent GRT in which gravity is due to the non-metricity scalar Q. In f(Q) theory, the covariant divergence of the metric tensor g_μν is non-zero, and this feature can be represented mathematically in terms of a new geometric variable known as non-metricity i.e. Q_γμν=∇ _γg_μν, which geometrically represents the variation of the length of a vector in a parallel transport process. Recently, several intriguing cosmological and astrophysical consequences of f(Q) gravity have been published, such as: The first cosmological solutions <cit.>; Quantum cosmology <cit.>; The coupling matter in f(Q) gravity <cit.>; Black hole solutions <cit.>; General covariant symmetric teleparallel gravity <cit.>; Evidence that non-metricity of f(Q) gravity can challenge ΛCDM <cit.>; Gravitational waves <cit.>; The acceleration of the Universe and DE <cit.>; Observational constraints <cit.>. Motivated by the previous discussion and studies on modified f(Q) theory of gravity, in the present study, the accelerated expansion has been investigated using one specific parameterization of the total or effective equation of state (EoS) parameter ω_eff in the background of f(Q) theory of gravity (Sec. <ref> explored the fundamental features of the specified ω_eff). We have also considered the power-law form of f(Q)=β Q^( m+1), where β and m are arbitrary constants <cit.>. The primary purpose of this research is to examine the nature of late-time cosmology's evolution. The observational constraints on model parameters are established by employing the Observational Hubble parameter data (OHD), BAO data, and SNe data. We then examined the evolution of the density parameter, the effective EoS parameter, and the deceleration parameter at the 1-σ and 2-σ confidence levels (CL) using the estimated values of model parameters. This work is structured as follows: in Sec. <ref>, we present a brief review of the f(Q) gravity. In Sec. <ref>, we write the cosmological solution of the Hubble parameter by using a specific parameterization of the effective EoS parameter and a power-law model of f(Q) theory. In Sec. <ref>, we calculate the values of the model parameters using the combined OHD+BAO+SNe data. Moreover, we describe the behavior of several parameters such as the density, EoS, and deceleration parameters. In Sec. <ref>, we examine the Om(z) diagnostic parameter history of our f(Q) model to see if the assumed model recognizes the DE behavior, and then we do a linear perturbation analysis. Finally, in Sec. <ref>, we summarize our findings. § A BRIEF REVIEW OF F(Q) GRAVITY In general, in the presence of matter components, the action for a f(Q) gravity model is written as <cit.>, S=∫√(-g)d^4x[ f(Q)/2κ ^2+L_m] , where g is the determinant of the metric tensor g^μν, i.e. g=det(g_μν), κ ^2=8π G=1/M_p^2, G is the Newtonian constant, while M_p is the reduced Planck mass. L_m denotes the Lagrangian density of the matter components. 
For the time being, the term f(Q) is an arbitrary function of the non-metricity scalar Q. The tensor of non-metricity and its traces are given by Q_γμν=∇ _γg_μν, Q_β=g^μνQ_βμν Q_β=g^μνQ_μβν. Furthermore, as a function of the non-metricity tensor, the superpotential (or the non-metricity conjugate) can be expressed as, P_ μν^β=-1/2L_ μν^β+1/ 4(Q^β-Q^β)g_μν-1/4δ _(μ^βQ_ν ). where L^β_μν is the disformation tensor, L^β_μν≡1/2g^βσ( Q_νμσ+Q_μνσ-Q_βμν) . Hence, the non-metricity scalar is expressed as, Q=-Q_βμνP^βμν . Using the variation of action in Eq. (<ref>) with respect to the metric tensor g^μν, one can obtain the field equations, 2/√(-g)∇_β(f_Q√(-g) P^β_ μν)+1/2f g_μν+ f_Q(P_μβλQ_ν^ βλ-2Q_βλμP^βλ_ ν)=- T_μν, where f_Q=dfdQ. Moreover, T_μν is the energy-momentum tensor of the cosmic fluid, which is considered to be a perfect fluid, i.e. T_μν=(ρ +p)u_μu_ν+pg_μν, where u^μ=(1,0,0,0) represents the 4-velocity vector components that form the fluid. ρ and p represent the total energy density and total pressure of any perfect fluid of matter and DE, respectively. In the context of a flat FLRW space-time, the modified Friedmann equations ds^2=-dt^2+a^2(t)[dx^2+dy^2+dz^2], where a(t) is the scale factor of the Universe are given by <cit.> 3H^2=1/2f_Q( -ρ +f/2) , Ḣ+3H^2+ḟ_Q/f_QH=1/2f_Q( p+f/2 ) , where Q=6H^2, and H denotes the Hubble parameter, which estimates the rate of expansion of the Universe. It is interesting to note that the standard Friedmann equations of GR can be found if the function f(Q)=-Q is considered, i.e. 3H^2=ρ and 2Ḣ+3H^2=-p. In our study, we consider a simplified cosmological scenario where the universe is composed of two main components: matter and DE. The matter is assumed to be fluid without pressure (p_m=0), while DE is considered to possess negative pressure, which is responsible for driving the observed cosmic acceleration. For this reason, we assume that ρ=ρ_m+ρ_DE and p=p_DE. In addition, the equation of state (EoS) parameter is a quantity used in cosmology to explain the properties of DE. The effective or total EoS parameter is defined as the ratio of the total pressure to the total energy density. In the context of our study, it takes into account contributions from various cosmic components, including DE and matter. Therefore, the effective EoS parameter, denoted as ω_eff, is given by ω_eff =p/ρ=-1+( .H+. f_Q/f_QH) ( 2f_Q) /( f/2 -6H^2f_Q) . The above dot symbolizes the differentiation with regard to cosmic time t. Furthermore, the EoS parameter which combines the energy density and pressure of the DE component is, ω_DE=p_DE/ρ_DE Now, in order to derive the matter conservation equation, we can be taking the trace of the field equation, .ρ_m+3ρ _mH=0, By solving Eq. (<ref>), we are able to derive the solution for the energy density of the matter ρ_m as, ρ _m=ρ _m0a^-3, where ρ_m0 is the present value of the energy density of the matter. § LATE-TIME COSMOLOGICAL EVOLUTION VIA A SPECIFIC TYPE OF EOS PARAMETER This section examines the Universe's evolution at late times using a specific type of EoS parameter. However, the equations obtained from this analysis are complex and require numerical solutions. To simplify the implementation of such solutions, a change of variable is performed, where the red-shift, z, is used as the dynamical variable instead of the cosmic time t. One starting point that we can rely on is that z=a_0/ a( t) -1, where a_0 is the present time of the scale factor. For simplicity, the scale factor is set to 1 currently. 
It is not directly observable, but we can observe the ratio of the scale factor at different times to its value at the present time. The following relationship may therefore be deduced: d/dt=-H( z) ( 1+z) d/dz. Thus, it is clear that. .H=-H( z) ( 1+z) H^^'( z) , where, the symbol 'prime' represents differentiation with respect to the red-shift variable, denoted by 'z'. In this context, it is evident that we can utilize only Eqs. (<ref>) and (<ref>) for our analysis. However, rather than solving the ensuing equation for H(z), we can introduce an effective form of the EoS parameter, which is defined as follows: ω _eff=-1+A/A+B( 1+z) ^-3, where A and B are arbitrary constants. The reason behind selecting this particular parametrization for ω _eff is that at high red-shift values z≫ 1 (early stages of cosmological evolution), ω _eff is nearly zero, indicating the behavior of the EoS parameter for a pressureless fluid, such as ordinary matter. As we move towards the present epoch (z=0), ω _eff decreases gradually to negative values, leading to negative pressure and an effective EoS value ω _eff=-B/A+B. In this case, the functional form of ω _eff is dependent on the specific values of A and B. As a result, the form of ω _eff can effortlessly incorporate the phases of cosmic evolution, including the early matter-dominated era and the late-time DE-dominated era. The specific form mentioned, introduced in Ref. <cit.>, exhibits phantom-like behavior in the present epoch. Due to the presence of a large number of free parameters in the effective EoS parameter, we adopt a specific approach for the observational analysis. In order to constrain the model and facilitate the analysis, we fix the value of n to be 3. In literature, various parametrization models of EoS for DE have been proposed and fitted to observational data. Ref. <cit.> proposed an one-parameter family of EoS DE model. Two-parameters family of EoS DE parametrizations, especially the Chevallier-Polarski-Linder parametrization <cit.>, the Linear parametrization <cit.>, the Logarithmic parametrization <cit.>, the Jassal-Bagla-Padmanabhan parametrization <cit.>, and the Barboza-Alcaniz parametrization <cit.>, were also explored. Further, in <cit.> three and four parameters family of EoS DE parametrizations are examined. In this section, we will look at a specific cosmological model in f(Q) gravity theory. We also look at how geometrical and physical cosmological parameters such as energy density, pressure, and deceleration behave under f(Q) gravity. In this study, we investigate the scenario where the function f(Q) can be expressed as, f(Q)=β Q^( m+1), where β and m are arbitrary constants <cit.>. For the function f(Q) we obtain the expression f_Q=β( 1+m) Q^m and f_QQ=β( 1+m) mQ^m-1. By putting the above expressions for f, f_Q, and f_QQ into Eqs. (<ref>) and (<ref>) we can derive the energy density and pressure as, ρ =β( -2^m) 3^m+1(2m+1)H^2(m+1), and p=β 6^m(2m+1)H^2m( 2Ḣ(m+1)+3H^2) . Now by using Eq. (<ref>), we obtain the EoS parameter in terms of Hubble parameter and its derivative as, ω_eff =-1+2( m+1) /3( 1+z) H^'( z) /H( z) . By using Eq. (<ref>) and the presumed ansatz of ω _eff, the evolution equation of the Hubble function takes the form dH( z) /dz=3A( 1+z) ^2/2( m+1) ( A( 1+z) ^3+B) H( z), which yields the following solution H( z) =H_0[ A(z+1)^3+B/A+B] ^1/2m+2 , where H_0 describes the present value (i.e. at z=0) of the Hubble parameter. In particular, for the scenario m=0 with β =-1, the solution reduces to f( Q) =-Q. 
In other words, it is directly related to the ΛCDM model. As a result, the equation for Hubble parameter H( z) is reduced to H( z) =H_0[ Ω _m^0( 1+z) ^3+Ω _Λ^0] ^1 /2, where Ω _m^0=A/A+B and Ω _Λ^0=( 1-Ω _m^0) =B/A+B are the present density parameters for matter and the cosmological constant, respectively. As a result, the model parameter m is an excellent indicator of the present model's deviation from the ΛCDM model due to the addition of non-metricity terms. The deceleration parameter q is one of the cosmological parameters that is important in describing the status of our Universe's expansion. If the value of the deceleration parameter is strictly less than zero, the cosmos accelerates; when it is non-negative, the cosmos decelerates. The deceleration parameter q is defined as q( z) =- ..a/aH^2=-1+( 1+z) /H( z) dH( z) /dz. In this scenario, the expression of the deceleration parameter is q( z) =-1+3A(1+z)^3/2(m+1)( A(1+z)^3+B) . The behavior and important cosmological features of the model represented in Eq. (<ref>) are entirely reliant on the model parameters (A, B, and m). In the next part, we use current observational data to study the behavior of the cosmological parameters to constrain the model parameters (A, B, and m). § METHOD OF DATA FITTING In our research, we took into account the most current and relevant observational findings: * Observational Hubble parameter Data (OHD): We examine H(z) data points calculated by employing differential galaxy ages as a function of red-shift z and line-of-sight BAO data. <cit.>. * Baryon Acoustic Oscillation (BAO): We additionally take into account the BAO data from the SDSS-MGS, Wiggle Z, and 6dFGS projects <cit.>. * Type-Ia Supernova measurement (SNe Ia): We examine the Pantheon sample of 1048 SNe Ia luminosity distance values from the Pan-STARSS1 (PS1) Medium Deep Survey, the Low-z, SDSS, SNLS, and HST missions <cit.>. In addition, for likelihood minimization, we employ the MCMC (Markov Chain Monte Carlo) sample from the Python package emcee <cit.> , which is commonly used in astrophysics and cosmology to investigate the parameter space θ_s=(A, B, m). To do this, we are now focusing on three data: OHD, BAO, and SNe Ia data. We evaluate the priors on the parameters -10.0<A<10.0, -10.0<B<10.0, and -10.0<m<10.0. To find out the outcomes of our MCMC study, we employed 100 walkers and 1000 steps. The discussion about the observational data has also been presented in a very similar fashion in Ref. 2, shedding further light on the significance of these findings. In the following subsections of our manuscript, we provide further detailed discussions on the observational data used, as well as the statistical analyses employed. We aim to present a comprehensive and transparent description of our methodology, emphasizing the novelty and contributions of our work while acknowledging the commonalities with existing literature. §.§ OHD We utilize a commonly popular compilation with an updated set of 57 data points. In this collection of 57 Hubble data points, 31 were measured using the method of differential age (DA), while the remaining 26 were measured using BAO and other methods in the red-shift range provided as 0.07≤ z≤2.42, allowing us to determine the expansion rate of the Universe at red-shift z. Hence, the Hubble parameter H(z) as a function of red-shift can be written as H(z)= -1/1+zdz/dt. 
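Before confronting the model with data, the solution above can be checked symbolically. The short sketch below (ours, not part of the paper) verifies that the quoted H(z) solves the first-order equation for dH(z)/dz, and that the m = 0 case squares to the ΛCDM expansion rate with Ω_m^0 = A/(A+B).

```python
# Sketch: check that H(z) = H0 * [ (A(1+z)^3 + B)/(A+B) ]^(1/(2m+2)) solves
# dH/dz = 3A(1+z)^2 H / [2(m+1)(A(1+z)^3 + B)], and that m = 0 is LambdaCDM.
import sympy as sp

z, A, B, m, H0 = sp.symbols('z A B m H0', positive=True)
H = H0*((A*(1 + z)**3 + B)/(A + B))**(1/(2*m + 2))

rhs = 3*A*(1 + z)**2/(2*(m + 1)*(A*(1 + z)**3 + B))*H
print(sp.simplify(sp.diff(H, z) - rhs))            # -> 0

Om0 = A/(A + B)
lcdm = H0*sp.sqrt(Om0*(1 + z)**3 + (1 - Om0))
print(sp.simplify(H.subs(m, 0)**2 - lcdm**2))      # -> 0, i.e. m = 0 is LambdaCDM

q = sp.simplify(-1 + (1 + z)/H*sp.diff(H, z))      # compare with q(z) quoted above
print(q)
```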
To calculate the mean values of the model parameters A, B, and m, we used the chi-square function (χ^2) for OHD as, χ^2_OHD = ∑_i=1^57[H(θ_s, z_i)- H_obs(z_i)]^2/σ(z_i)^2, where H(z_i) denotes the theoretical value for a specific model at different red-shifts z_i, and H_obs(z_i) denotes the observational value, σ(z_i) denotes the observational error. §.§ BAO We employ a compilation of SDSS, 6dFGS, and Wiggle Z surveys at various red-shifts for BAO data. This paper incorporates BAO data as well as the cosmology listed below, d_A(z)=c ∫_0^zdy/H(y), D_v(z)=[ d_A^2 (z) c z /H(z)]^1/3, where d_A(z) represents the comoving angular diameter distance, and D_v represents the dilation scale. Moreover, the chi-square function (χ^2) for BAO is given by χ^2_BAO = X^T C_BAO^-1 X. Here, X depends on the considered survey and C_BAO^-1 represents the inverse covariance matrix <cit.>. §.§ SNe To obtain the best values using SNe Ia, we begin with the measured distance modulus μ_obs produced from SNe Ia detections and compare it to the theoretical value μ_th. The Pantheon sample, a recent SNe Ia dataset containing 1048 points of distance modulus μ_obs at various red-shifts in the range 0.01<z<2.26, is taken into consideration in this work. The distance modulus of each SNe can be calculated using the following equations: μ_th(z)=5 log_10d_l(z)/Mpc+25, d_l(z)=c(1+z) ∫_0^zdy/H(y,θ). where c is the speed of light. The distance modulus can be calculated using the relationship, μ= m_B-M_B+α x_1 - β c + Δ_M + Δ_B, where m_B is the measured peak magnitude at the B-band maximum, and M_B is the absolute magnitude. The parameters c, α, β, and x_1, respectively, correspond to the color at the brightness point, the luminosity stretch-color relation, and the light color shape. Moreover, Δ_M and Δ_B are distance adjustments based on the host galaxy's mass and simulation-based anticipated biases. The nuisance parameters in the above equation were obtained using a novel method known as BEAMS with Bias Corrections (BBC) <cit.>. As a result, the measured distance modulus is equal to the difference between the apparent magnitude m_B and the absolute magnitude M_B i.e., μ = m_B-M_B. For the Pantheon data, the χ^2 function is assumed to be, χ^2_SNe =∑_i,j=1 ^1048Δμ_i( C_SNe^-1)_ijΔμ_j where Δμ_i= μ_th-μ_obs and C_SNe represents the covariance matrix. §.§ OHD+BAO+SNe Now, the χ^2 function for the OHD+BAO+SNe data is assumed to be, χ^2_total=χ^2_OHD+χ^2_BAO+χ^2_SNe, By using the aforementioned combined OHD+BAO+SNe data, we obtained the best-fit values of the model parameters A, B, and m, as shown in Fig. <ref> with 1-σ and 2-σ likelihood contours. The best-fit values obtained are A=0.342^+0.022_-0.022, B=0.677^+0.025_-0.025, and m=0.013^+0.021_-0.021. For m=0, Fig. <ref> shows the results of 1-σ and 2-σ likelihood contours with the best-fit values of model parameters are A=0.3353± 0.0010, and B=0.6837 ±0.0019. Figs. <ref> and <ref> also show the error bars for H(z) and μ(z) using H_0=(67.4±0.5) Km/s/Mpc <cit.>. The figures also show a comparison of our model to the commonly used ΛCDM model in cosmology i.e. H( z) =H_0√( Ω _m^0( 1+z) ^3+Ω _Λ^0) (we have considered Ω _m^0=0.315± 0.007) <cit.>. As shown in the figures, our model matches the observed data nicely. We will now discuss the cosmological consequences of the obtained observational constraints. 
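As an illustration only (our own sketch, not the authors' MCMC pipeline), plugging the reported best-fit values into the background expressions of the previous section reproduces the quoted present-day deceleration parameter and transition red-shift.

```python
# Sketch: evaluate H(z), w_eff(z) and q(z) at the reported best fit
# A = 0.342, B = 0.677, m = 0.013, H0 = 67.4 km/s/Mpc, and locate q(z_tr) = 0.
from scipy.optimize import brentq

A, B, m, H0 = 0.342, 0.677, 0.013, 67.4

def hubble(z):
    return H0*((A*(1 + z)**3 + B)/(A + B))**(1.0/(2*m + 2))

def w_eff(z):
    return -1 + A/(A + B*(1 + z)**(-3))

def q(z):
    return -1 + 3*A*(1 + z)**3/(2*(m + 1)*(A*(1 + z)**3 + B))

if __name__ == "__main__":
    print("H(0)     =", hubble(0.0))            # equals H0 by construction
    print("w_eff(0) =", round(w_eff(0.0), 3))
    print("q(0)     =", round(q(0.0), 3))       # ~ -0.50, as quoted
    z_tr = brentq(q, 0.1, 2.0)                  # deceleration-acceleration transition
    print("z_tr     =", round(z_tr, 3))         # ~ 0.60, as quoted
```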
Using the obtained mean values of the model parameters A, B, and m constrained by the combined OHD+BAO+SNe data, we investigate the behavior of the density, the EoS, and the deceleration parameters. In Figs. <ref>, <ref>, and <ref>, we presented the density parameter, EoS parameter, and deceleration parameter as a function of red-shift for the combined OHD+BAO+SNe data. From Fig. <ref>, it can be observed that as the universe expands, both the matter density parameter and the DE density parameter exhibit a decrease. In the late stages, the matter density approaches zero, while the DE density converges towards a small value. In addition, the densities parameter behaves positively for model parameter values constrained by the combined OHD+BAO+SNe data. As mentioned in Sec. <ref>, the EoS parameter is a vital cosmological parameter for understanding the nature of the Universe and its history through time, and it is defined as ω =p/ρ, where p is the pressure and ρ is the energy density. The value of the EoS parameter governs how the fluid behaves and how it affects the expansion of the Universe. For example, if ω = 0, the fluid is referred to as non-relativistic matter and behaves like dust. However, if ω = 1/3, the fluid is referred to as relativistic matter and behaves like radiation. If ω <-1/3, the fluid is considered to have negative pressure and is responsible for the Universe's accelerated expansion, a phenomenon associated with DE, which includes the quintessence (-1< ω < -1/3) era, cosmological constant (ω=-1), and phantom era (ω <-1). The existing observational constraints imply that the EoS parameter of the Universe's dominating component (DE), is extremely near to -1. In other terms, the pressure of DE is negative and nearly constant, fueling the Universe's accelerated expansion. Recent investigations of the CMB radiation, the LSS of the Universe, the luminosity-distance relation of SNe Ia, and others, have given compelling evidence for the existence of DE and its dominating role in the Universe's expansion. The most recent measurements of the EoS parameter from these data produce a value of ω_0= -1.03 ± 0.03 <cit.>, which is compatible with the cosmological constant. In this paper, we focus on the analysis of an effective EoS parameter using three model parameters: A, B, and m. The behavior of the EoS parameter is depicted in Fig. <ref> for constrained values of A, B, and m from the combined OHD+BAO+SNe data. From the analysis conducted, it is apparent that both the evolving EoS parameter for the DE and the effective EoS parameter demonstrate quintessence-like behavior. This observation highlights the resemblance to the typical characteristics associated with quintessence, shedding light on the intriguing nature of the DE component under investigation. The present value (i.e. at z=0) of the EoS parameter for DE is ω_0=-0.91 ± 0.08 <cit.>, indicating an accelerating phase. In addition, as shown in Fig. <ref>, we analyzed the behavior of the deceleration parameter for constrained values of A, B, and m from the combined OHD+BAO+SNe data. The sign of the deceleration parameter (q) indicates whether the model is accelerating or decelerating. If q > 0, the model decelerates, if q = 0, it expands at a steady rate, and if -1 < q < 0, it expands at an accelerating rate. With q = -1, the Universe shows exponential growth or De-Sitter expansion and super-exponential expansion for q < -1. In Eq. (<ref>), we have obtained the deceleration parameter for our model. According to Fig. 
<ref>, the model transitions from a decelerated stage to an accelerated stage. It can also be seen that our model initially decelerates (q > 0) and then approaches exponential expansion in late times (q = -1). In the figure, we also compare our model to the commonly accepted ΛCDM model in cosmology. According to the constrained values of model parameters A, B, and m from the combined OHD+BAO+SNe data, the present value of the transition red-shift is z_tr=0.60 ± 0.02 <cit.>, while the present value of the deceleration parameter is q_0=-0.50 ±0.01 <cit.>, indicating that the phase is accelerating. § OM(Z) DIAGNOSTIC AND LINEAR PERTURBATIONS §.§ Om(z) diagnostic Sahni et al. <cit.> introduced the Om(z) diagnostic parameter as an alternative to the statefinder parameter, which aids in distinguishing the current matter density contrast Om in various models more successfully. This is also a geometrical diagnostic that is clearly dependent on red-shift ( z) and the Hubble parameter (H). It is defined as follows: Om( z) =( H( z) /H_0) ^2-1 /( 1+z) ^3-1. The negative slope of Om(z) corresponds to quintessence type behavior (-1<ω< -1/3), while the positive slope corresponds to phantom-type behavior (ω<-1). The ΛCDM model (ω=-1) is represented by the constant nature of Om(z). According to Fig. <ref>, the Om(z) diagnostic parameter has a negative slope throughout its entire domain. As a result of the Om(z) diagnostic test, our f(Q) model follows the quintessence scenario. Based on the findings, we can draw a conclusive inference that the behavior of the Om(z) diagnostic parameter aligns with the behavior exhibited by the EoS parameter. The correspondence between these two parameters indicates a strong relationship, suggesting that variations in the EoS parameter are effectively captured by the Om(z) diagnostic parameter. This observation underscores the utility and reliability of the Om(z) parameter as a diagnostic tool for understanding the dynamics of the DE component. §.§ Linear perturbations In this subsection, our focus is on examining the stability of the f(Q) cosmological model by analyzing the effects of linear homogeneity and isotropic perturbation. By considering small deviations from the Hubble parameter given by Eq. (<ref>) and the energy density evolution i.e. Eq. (<ref>), we aim to understand the behavior and robustness of the cosmological models under study. Linear perturbation analysis has been extensively used in cosmology to study the growth of structures and the evolution of the universe. Many previous studies have successfully employed linear approximations to explore the behavior of modified gravity theories and assess their compatibility with observational data <cit.>. The perturbations under consideration in this analysis are of first order, H(t)=H(t)(1+δ( t) ) ρ(t)=ρ (t)(1+δ _m( t) ), where δ( t) represents the isotropic deviation of the background Hubble parameter, while δ_m( t) corresponds to the matter overdensity. Hence, the perturbation of the functions f(Q) and f_Q can be expressed as δ f=f_Qδ Q and δ f_Q=f_QQδ Q, where δ Q=12Hδ H is the first-order perturbation of the scalar Q. So, neglecting the higher power of δ( t), the Hubble parameter can be expressed as 6H^2=6H^2( 1+δ( t) ) ^2=6H^2( 1+2δ( t) ). Now, using Eq. (<ref>) we get Q( f_Q+2Qf_QQ) δ =-ρδ _m. This gives rise to the matter-geometric perturbation relation, and the perturbed Hubble parameter can be calculated using Eq. (<ref>). 
Then, just use perturbation continuity equation to get the analytical solution to the perturbation function, δ̇ ̇_̇ṁ+3H(1+ω )δ =0. Solving the above equations for δ and δ _m yields the first order differential equation, δ̇ ̇_̇ṁ-3H(1+ω )ρ/Q(f_Q+2Qf_QQ)δ _m=0. Using Eqs. (<ref>) and (<ref>) to simplify the previous equation once more, the solution is expressed as, δ _m( z) =δ _m_0H( z), and δ( z) =-δ _m0/3( 1+ω _eff) .H/H. By using Eqs. (<ref>) and (<ref>), we obtain δ _m( z) =δ _m0H_0( A(z+1)^3+B/A+B) ^1/2m+2. Again, by using Eqs. (<ref>) and (<ref>), we obtain δ( z) =δ _m0H_0( A(z+1)^3+B/A+B) ^1/ 2m+2/2(m+1). Figs. <ref> and <ref> show the history of the perturbation terms δ _m( z) and δ( z) in terms of red-shift z . Both the perturbations δ _m( z) and δ( z) diminish rapidly and reach zero at late times. It may also be demonstrated that the behavior of δ _m( z) and δ( z) is the same for all model parameter values. Consequently, using the scalar perturbation approach, our f(Q) model demonstrates stable behavior. § CONCLUSION The current scenario of accelerating Universe expansion is now a significant topic of study. Two approaches have been proposed to explain this cosmic acceleration. One approach is to investigate different dynamical DE models (such as quintessence and phantom), while another is to analyze alternate gravity theories. In this paper, we investigated accelerated expansion using the FLRW Universe and the f(Q) theory of gravity, particularly f(Q)=β Q^( m+1), where β and m are arbitrary constants. We obtained the solution of the Hubble parameter using the parametrization form of the effective EoS parameter as ω _eff=-1+A/A+B( 1+z) ^-3 (where A and B are arbitrary constants), which leads to a varying deceleration parameter. As shown in Sec. <ref> of this work, we constrained model parameters (A, B, and m) using the MCMC approach with a combined analysis of OHD, BAO, and SNe data. The best-fit values obtained are A=0.342^+0.022_-0.022, B=0.677^+0.025_-0.025, and m=0.013^+0.021_-0.021. For m=0, the best-fit A=0.3353± 0.0010, and B=0.6837 ±0.0019. Furthermore, with the constrained values of A, B, and m from the combined OHD+BAO+SNe data, we analyzed the behavior of the density parameter, EoS parameter, and deceleration parameter as a function of red-shift, as shown in Figs. <ref>, <ref>, and <ref>. Fig. <ref> shows that both the matter density parameter and the DE density parameter are increasing functions of red-shift and exhibit the expected positive behavior. The evolution of the EoS parameter in Fig. <ref> supported the accelerating nature of the Universe's expansion phase, and the model behaves like a quintessence in the present. Furthermore, the present value of the EoS parameter for DE is estimated to be ω_0=-0.91 ± 0.08. Fig. <ref> indicates that the model transitions from a decelerated stage to an accelerated stage. The present value of the transition red-shift is z_tr=0.60 ± 0.02 based on constrained values of model parameters A, B, and m from the combined OHD+BAO+SNe data, whereas the present value of the deceleration parameter is q_0=-0.50 ±0.01, showing that the phase is accelerating. Then we evaluated the Om(z) diagnostic parameter for our presumed f(Q) model. As a result, we observed that the behavior of the Om(z) diagnostic parameter conforms to the behavior of the effective EoS parameter. Lastly, the perturbation terms shown in Figs. <ref> and <ref> confirmed that the model is stable under the scalar perturbation method. 
Based on our analysis, we reach a compelling conclusion that our f(Q) cosmology, incorporating the effective EoS parameter form, offers a highly efficient framework for explaining various late-time cosmic phenomena in the Universe. The fact that this model demonstrates observational validity further supports its credibility and reliability. § ACKNOWLEDGMENTS This research is funded by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. AP09058240). Data availability This article does not present any novel or additional data. 99 Riess A.G. Riess et al., Astron. J. 116, 1009 (1998). Perlmutter S. Perlmutter et al., Astrophys. J. 517 , 565 (1999). D.J. D.J. Eisenstein et al., Astrophys. J. 633, 560 (2005). W.J. W.J. Percival at el., Mon. Not. R. Astron. Soc. 401, 2148 (2010). R.R. R.R. Caldwell, M. Doran, Phys. Rev. D 69, 103517 (2004). Z.Y. Z.Y. Huang et al., J. Cosm. Astrop. Phys. 0605, 013 (2006). T.Koivisto T. Koivisto, D.F. Mota, Phys. Rev. D 73 , 083502 (2006). S.F. S.F. Daniel, Phys. Rev. D 77, 103513 (2008). Capo/2008 S. Capozziello, V.F. Cardone, V. Salzano, Phys. Rev. D, 78, 063504 (2008). Nojiri/2007 S. Nojiri, S. D. Odintsov, Phys. Lett. B 657, 238 (2007). Harko/2011 T. Harko et al., Phys. Rev. D, 84, 024020 (2011). Momeni/2015 D. Momeni, R. Myrzakulov, E. Gudekli, Int. J. Geom. Meth. Mod. Phys. 12, 1550101 (2015). Capo/2011 S. Capozziello et al., Phys. Rev. D, 84, 043527 (2011). Nunes/2016 R.C. Nunes, S. Pan, E.N. Saridakis, J. Cosm. Astropart. Phys., 08, 011 (2016). Q0 J. B. Jimenez et al., Phys. Rev. D 98, 044048 (2018). Q1 J.B. Jimenez et al., Phys. Rev. D 101, 103507 (2020). Q2 W. Khyllep et al., Phys. Rev. D 103, 103521 (2021). Q3 N. Dimakis, A. Paliathanasis, and T. Christodoulakis, Class. Quantum Gravity 38, 22 (2021). Q4 T. Harko et al., Phys. Rev. D 98, 084043 (2018). Q5 F. D Ambrosio et al., Phys. Rev. D 105, 024042 (2022). Q6 M. Hohmann, Phys. Rev. D 104, 124077 (2021). Q7 F. K. Anagnostopoulos, S. Basilakos, and E. N.Saridakis, Phys. Lett. B 822, 136634 (2021). Q8 M. Hohmann, Phys. Rev. D 99, 024009 (2009). Q9 B. J. Barros et al., Phys.Dark Univ. 30, 100616 (2020). Q10 I. Soudi et al., Phys. Lett. B 100, 044008 (2019). ko1 M. Koussour et al., Phys. Dark Univ. 36, 101051 (2022). ko2 M. Koussour et al., J. High Energy Phys. 37, 15-24 (2023). ko3 M. Koussour and M. Bennai, Chin. J. Phys. 79, 339-347 (2022). ko4 M. Koussour et al., Ann. Phys. 445, 169092 (2022). ko5 M. Koussour et al., J. High Energy Astrophys, 35, 43-51 (2022). OC1 R. Lazkoz et al., Phys. Rev. D 100, 104027 (2019). OC2 I. Ayuso, R. Lazkoz, and V. Salzano, Phys. Rev. D 103, 063505 (2021). OC3 S. A. Narawade and B. Mishra, arXiv preprint arXiv:2211.09701 (2022). P0 A. A. Mamon, Int. J. Mod. Phys. D 26, 1750136 (2017). P1 Y. g. Gong and Y. Z. Zhang, Phys. Rev. D 72, 043518 (2005). P2 M. Chevallier and D. Polarski, Int. J. Mod. Phys. D 10, 213 (2001). P3 E. V. Linder, Phys. Rev. Lett. 90, 091301 (2003). P4 A. R. Cooray and D. Huterer, Astrophys. J. 513 , L95 (1999). P5 P. Astier, Phys. Lett. B 500, 8 (2001). P6 J. Weller and A. Albrecht, Phys. Rev. D 65, 103512 (2002). P7 G. Efstathiou, Mon. Not. Roy. Astron. Soc. 310 , 842 (1999). P8 H. K. Jassal, J. S. Bagla and T. Padmanabhan, Phys. Rev. D 72, 103503 (2005). P9 E. M. Barboza, Jr. and J. S. Alcaniz, Phys. Lett. B 666, 415 (2008). P10 ] E. V. Linder and D. Huterer, Phys. Rev. D 72 , 043509 (2005). P11 A. De Felice, S. Nesseris and S. Tsujikawa, JCAP 1205, 029 (2012). P12 R. J. F. Marcondes and S. 
Pan, arXiv preprint arXiv :1711.06157 (2017). PW1 M. Koussour et al., Nucl. Phys. B. 990, 116158 (2023). PW2 M. Koussour and A. De, Eur. Phys. J. C 83, 400 (2023). Yu/2018 Yu, B. Ratra, F-Yin Wang, Astrophys. J., 856, 3 (2018). Moresco/2015 M. Moresco, Month. Not. R. Astron. Soc., 450 , , L16-L20 (2015). Sharov/2018 G.S. Sharov, V.O. Vasilie, Mathematical Modelling and Ge-ometry 6, 1 (2018). Blake/2011 C. Blake et al., Month. Not. R. Astron. Soc., 418, 1707 (2011). Percival/2010 W. J. Percival et al., Month. Not. R. Astron. Soc., 401, 2148 (2010). Giostri/2012 R. Giostri et al., J. Cosm. Astropart. Phys. 1203, 027 (2012). Scolnic/2018 D.M. Scolnic et al., Astrophys. J, 859, 101 (2018). Chang/2019 Z. Chang et al., Chin. Phys. C, 43, 125102 (2019). Mackey/2013 D. F. Mackey et al., Publ. Astron. Soc. Pac. 125, 306 (2013). Kessler/2017 R. Kessler, D. Scolnic, Astrophys. J., 836, 56 (2017). Planck2020 Planck Collaboration, Astron. Astrophys., 641, A6 (2020). Hernandez A. Hernandez-Almada, et al., Eur. Phys. J. C 79, 12 (2019). Gruber C. Gruber, O. Luongo,Phys. Rev. D 89, 103506 (2014). Farooq O. Farooq, et al.,Astrophys. J. 835, 26-37 (2017). Jesus J.F. Jesus, et al.,J. Cosmol. Astropart. Phys. 04, 053-070 (2020). Garza J.R. Garza, et al.,Eur. Phys. J. C 79, 890 (2019). Capozziello S. Capozziello, R. D Agostino and O. Luongo, Mon. Not. Roy. Astron. Soc. 494, 2576 (2020). Mamon S. A. Al Mamon and S. Das, Eur. Phys. J. C 77, 495 (2017). Basilakos S. Basilakos, F. Bauera and J. Sola,J. Cosmol. Astropart. Phys. 01, 050-079 (2012). Omz V. Sahni, A. Shafieloo, and A. A. Starobinsky, Phys. Rev. D 78, 103502 (2008). Farrugia/2016 G. Farrugia, J. L. Said, Phys. Rev. D, 94, 124054 (2016). Dombriz/2012 A. de la C-Dombriz, D. S-Gomez, Class. Quantum Grav., 29, 245014 (2012). Anagnost/2021 F. K. Anagnostopoulos, S. Basilakos, E. N. Saridakis, Phys. Lett. B, 822, 136634 (2021). Mishra S. A. Narawade et al., Phys. Dark Universe 36, 101020 (2022).
http://arxiv.org/abs/2307.05395v1
20230704121507
Graphene is neither Relativistic nor Non-Relativistic case: Thermodynamics Aspects
[ "Thandar Zaw Win", "Cho Win Aung", "Gaurav Khandal", "Sabyasachi Ghosh" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mes-hall", "cond-mat.stat-mech" ]
http://arxiv.org/abs/2307.00252v1
20230701071733
An ML approach to resolution of singularities
[ "Gergely Bérczi", "Honglu Fan", "Mingcong Zeng" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.SC", "math.AG" ]
[ An ML approach to resolution of singularities equal* Gergely Bércziequal,arh Honglu Fanequal,unige Mingcong Zengequal,mpi arhDepartment of Mathematics, Aarhus University, Aarhus, Demark unigeDepartment of Mathematics, University of Geneva, Geneva, Switzerland mpiMPIM Bonn, Bonn, Germany Gergely Bérczigergely.berczi@math.au.dk Honglu Fanhonglu.fan@unige.ch Mingzong Zengmingzongzeng@gmail.com Machine Learning, ICML 0.3in ] The solution set of a system of polynomial equations typically contains ill-behaved, singular points. Resolution is a fundamental process in geometry in which we replace singular points with smooth points, while keeping the rest of the solution set unchanged. Resolutions are not unique: the usual way to describe them involves repeatedly performing a fundamental operation known as "blowing-up", and the complexity of the resolution highly depends on certain choices. The process can be translated into various versions of a 2-player game, the so-called Hironaka game, and a winning strategy for the first player provides a solution to the resolution problem. In this paper we introduce a new approach to the Hironaka game that uses reinforcement learning agents to find optimal resolutions of singularities. In certain domains, the trained model outperforms state-of-the-art selection heuristics in total number of polynomial additions performed, which provides a proof-of-concept that recent developments in machine learning have the potential to improve performance of algorithms in symbolic computation. § INTRODUCTION Systems of multivariate polynomial equations, for instance x^2z+yz^2+3y=0 x^2yz-3=0 play a central role in various scientific and engineering fields, as well as geometry and topology. If the number of variables is n, then the solution set, which is also called variety in algebraic geometry, is a subset of ℝ^n with rich geometry. The main technical challenge in solving these equations is the presence of ill-behaved, so-called singular points of the solution set. Geometrically, singularities can manifest as self-intersections, cusps, folds, or other irregularities in the shape of the solution set, see Appendix A for formal definition. In applications, such as computer graphics, robotics, and computer vision, these irregularities can lead to visual artifacts, inaccurate simulations, or incorrect interpretations of data. When solving systems of polynomial equations, singularities can cause numerical instability. Near singular points, the equations can become ill-conditioned, leading to inaccuracies, divergence, or difficulty in finding reliable solutions. This is particularly relevant in applications where high precision and accuracy are required, such as scientific simulations or engineering design. A fundamental problem in geometry about such systems is to remove these singular points by slightly modifying the solution set. This process is called resolution of singularities, and the main technical tool doing so is called blowing up. For a toy example take one equation y^2-x^3+x^2=0 with a node (double point) singularity at (0,0). To resolve this singularity, we substitute y=xt to get x^2(t^2-x-1)=0, which has the green and black component. The non-singular blown-up curve is the black curve t^2-x-1=0. This resolution "blows up" the origin: it replaces the origin with the projectivized tangent space of ℝ^2 at the origin. Resolution of singularities is an old, central problem in geometry with a long history. 
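Returning briefly to the toy example above, the substitution step is easy to reproduce symbolically. The sketch below (ours) writes the nodal cubic as y^2 - x^3 - x^2 = 0, the sign convention consistent with the quoted strict transform t^2 - x - 1 = 0, and factors out the exceptional divisor x^2.

```python
# Sketch: blow up the nodal cubic y^2 - x^3 - x^2 = 0 at the origin on the
# chart y = x*t; the exceptional factor x^2 splits off the strict transform.
import sympy as sp

x, y, t = sp.symbols('x y t')
curve = y**2 - x**3 - x**2
blown_up = curve.subs(y, x*t)
print(sp.factor(blown_up))                 # -> x**2*(t**2 - x - 1)

# the strict transform t^2 - x - 1 = 0 is smooth: no common zero with its gradient
strict = t**2 - x - 1
print(sp.solve([strict, sp.diff(strict, x), sp.diff(strict, t)], [x, t]))  # -> []
```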
Resolution of curves goes back to Newton, Riemann and Albanese, while resolution of surfaces has been extensively studied by the 19th century Italian algebraic geometry school. In 1964, Hironaka proved that resolution is possible for any singularity in characteristic 0 <cit.>. This groundbreaking result re-defined the landscape of geometry and earned Hironaka the Fields Medal in 1970. While his initial proof was quite technical, subsequent algorithmic proofs <cit.> have been discovered with reduced complexity, resulting in resolution trees that outline the blow-up procedure. However, it is important to note that because the resolution of a singularity is not unique, discovering minimal resolutions remains a critical task in many mathematical fields. Interested readers can refer to <cit.> for more details. In this article we introduce a new approach that harnesses the power of deep reinforcement learning to seek out "good" solutions for resolving singularities. The idea starts with a classical observation by Hironaka <cit.>: by playing a particular two-player game known as Hironaka's polyhedra game (see <ref>), one can obtain solutions that lead to local resolutions of some singularity types. However, it should be noted that this relationship does not hold in the reverse direction. We have identified two crucial observations in this regard: * The Hironaka game is a Markov Decision Process(MDP) where actions only depend on the current game state (determined by the coordinates of discrete points in a space). * The game can be generalized to resolutions of singularities under different constraints (resolving hypersurface singularities, only using weighted blow-ups, or even fully general resolutions such as <cit.>) The main focus of this article is to highlight that solutions to certain Markov Decision Processes (MDP) can be used to resolve specific singularities. Moreover, Reinforcement Learning is a commonly used technique to solve MDP. The format of all such games is as follows: the game state consists of a finite set of points in either ℤ^n or ℚ^n. Two players alternate turns and make decisions from a finite (but possibly extensive) range of actions that determine a linear transformation to be carried out on the space. After these transformations, a winning condition for the first player is checked (such as whether the Newton polytope has only one vertex). The primary objective of the first player is to win the game in the fewest possible moves, while the second player's goal is to hinder the first player from winning by using adversarial actions. It should be noted that if the first player is not strategic enough, the game may continue indefinitely. In general, the relationship between games and resolutions can be summarized as follows: the state of the game corresponds to a certain Newton polytope of the singularity on an affine chart. The first player chooses the blow-up center, and the second player chooses an affine chart on the blow-up. The linear transformation encodes how the Newton polytope changes under the blow-up using the transition of this affine chart. On the math side, this work was motivated and subsequently guided by recent developments in the intersection theory of an important moduli space, the Hilbert scheme of points on manifolds <cit.>. 
Several classical problems in enumerative geometry and mathematical physics can be reformulated using so-called tautological integrals over Hilbert schemes, and recent results show that the integral formula is not unique: any resolution tree of a certain singularity encodes a formula, which is a rational sum over the leaves of the tree. Hence the formula's complexity highly depends on the size of the resolution tree, and finding optimal trees using reinforcement learning is crucial in the analysis of the formula, which leads to new insights into these enumerative geometry questions. In particular, our random hitting host (described in <ref>) has provided a formula for a classical problem in topological enumerative geometry, see <ref> for details. On the machine learning side, the work is inspired by recent breakthrough results <cit.> in ML-assisted proofs in pure mathematical problems, and by the deep reinforcement learning techniques which has been a powerful tool to solve problems that can be phrased as (or close to) a Markov decision process <cit.>. With the success of AlphaZero (<cit.>), the deep reinforcement learning is further amplified by the power of planning using tree search when the rules of the environment is perfectly understood. By connecting with the problem of resolution of singularities, we hope to provide deep reinforcement learning an additional use case that has a broad impact in mathematics with its own special challenges. Deep reinforcement learning has recently shown potential in various mathematical domains where computation is a crucial aspect. One such instance is the application of deep reinforcement learning to Buchberger's algorithm <cit.>, which was proposed by Dylan Peifer, Michael Stillman, and Daniel Halpern-Leistner. Their approach employs reinforcement learning agents for S-pair selection, which is a process involving a two-player game, similar to the Hironaka game we discussed. This article, while heavily focusing on the mathematical motivations and the mathematics-reinforcement learning translations, aims to also show some preliminary experiments suggesting the feasibility of applying deep reinforcement learning in the family of singularity resolution problems. § A FUNDAMENTAL VERSION OF THE GAME All of the games discussed in this paper can be regarded as variations of one fundamental, simple version. The two players play asymmetric roles, and for convenience we call Player 1 the “host", and Player 2 the “agent". The rules of this basic version are the following: A state is represented by a finite set of lattice points S⊂ℤ_+^n={(x_1,…, x_n):x_i>0 integer for 1≤ i ≤ n}. * the host chooses a subset I⊂{1,2,⋯, n} such that |I|≥ 2, and * the agent chooses a number i∈ I. The selected pair (I,i) deteremines the following linear change of variables: T_I,i: x_j ↦x_j, if i≠ j ∑_k∈ I x_k, if i=j , After S is transformed into T_I,i(S) ⊂_+^n, we remove all points sitting in the interior of the Newton polygon spanned by T_I,i(S). That is, the new state is S'=T_I,i(S) ∖ N(T_I,i(S)) where N(T_I,i(S)) = {(x_1,⋯,x_n)∈ T_I,i(S): ∃ (x_1',⋯,x_n')∈ S' such that x_i≤ x'_i, ∀ 1≤ i ≤ n}. A state is terminal if it consists of one single point. In this case, the game will not continue and the host is the winner. The host can also have an incentive to terminate the game in the fewest possible steps, and the resulting solutions can have significant mathematical implications. 
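These rules translate directly into a few lines of code. The following is a minimal sketch (ours, not the authors' implementation; all names are illustrative): it applies T_{I,i}, keeps only the coordinate-wise minimal points (our reading of the interior-point removal), optionally applies the translation step discussed in the next paragraph, and plays random host and agent moves from the initial state {(2,0,0), (0,3,0), (0,0,5)} that reappears below as the E8 example.

```python
# Sketch of the basic game mechanics: points are tuples in Z_+^n (coordinates
# indexed from 0 here); after the transform we keep only the coordinate-wise
# minimal points, and optionally subtract the coordinate-wise minima.
from itertools import combinations
import random

def transform(points, I, i):
    """Apply T_{I,i}: the i-th coordinate becomes the sum over k in I."""
    return {tuple(sum(p[k] for k in I) if j == i else p[j]
                  for j in range(len(p))) for p in points}

def prune(points):
    """Keep points that are not coordinate-wise dominated by another point."""
    return {p for p in points
            if not any(q != p and all(q[j] <= p[j] for j in range(len(p)))
                       for q in points)}

def shift(points):
    """Optional translation step: subtract the coordinate-wise minima."""
    n = len(next(iter(points)))
    mins = [min(p[j] for p in points) for j in range(n)]
    return {tuple(p[j] - mins[j] for j in range(n)) for p in points}

def step(points, I, i, translate=True):
    new = prune(transform(points, I, i))
    return shift(new) if translate else new

if __name__ == "__main__":
    random.seed(0)
    state = {(2, 0, 0), (0, 3, 0), (0, 0, 5)}
    turns = 0
    while len(state) > 1 and turns < 50:      # random play need not terminate
        n = len(next(iter(state)))
        I = random.choice([c for r in range(2, n + 1)
                           for c in combinations(range(n), r)])  # random host
        i = random.choice(I)                                     # random agent
        state = step(state, I, i)
        turns += 1
    print(turns, state)
```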
A host with a winning strategy against all possible agents corresponds to a resolution of the singularity associated with the initial state of the game. An example can be seen in this video: https://www.youtube.com/watch?v=s2LYtd_UPY8. To connect with our context, the E8 singularity, for example, corresponds to games with the initial state {(2,0,0), (0,3,0), (0,0,5)}. The game does not necessarily terminate in a finite number of steps, but the existence of a winning strategy is implied by Hironaka's Fields Medal result <cit.>. However, how the winning strategy is executed, and the quality of the strategy (e.g., the minimal number of game steps against the smartest agents), still concern many directions in algebraic geometry. Although the original Hironaka game does not contain this step, we can reduce the length of the game (and hence of the blow-up process) by a simple additional step when we form the new state S' from S. We call this the translation step: for a set of points Z ⊂ℤ_+^n let z_i^min=min_(z_1,…, z_n)∈ Z z_i denote the minimal ith coordinate in Z, and z^min=(z_1^min,…, z_n^min). Then the shifted set is Z^sh={z-z^min: z∈ Z}, and the modified rule is that the new state in the Hironaka game is S'=(T_I,i(S) ∖ N(T_I,i(S)))^sh. Geometrically, this corresponds to removing exceptional divisors and keeping only the strict transform of the singularity. In Figure <ref> the strict transform is the black (nonsingular) blown-up curve. We make two observations on the rules. * As one might have already noticed from the rules, the removal of interior points of the Newton polytope is in fact not necessary. One may simply carry on without removing any points, and only draw the Newton polytope to declare the terminal state when all but one point lie in the interior. This observation helps keep our discussion concise when we define the corresponding Markov Decision Process later. * In this particular version of the game, we can freely scale the points in ℤ^n by a common scalar. This allows for a few equivalent implementations, including bounding the game states inside a unit sphere. § REINFORCEMENT LEARNING Reinforcement learning (RL) has recently shown remarkable success in a variety of applications, from playing games to robotic control and natural language processing. This, in principle, also applies to our mathematical problem, which can be phrased as a Markov Decision Process (MDP). We focus on two popular RL algorithms, Monte Carlo tree search (MCTS) and Deep Q-Networks (DQN). MCTS is a planning algorithm that searches the tree of possible actions to find the best sequence of actions leading to the optimal solution. DQN, on the other hand, is a value-based algorithm that uses a deep neural network to approximate the action-value function and update the policy. In addition to MCTS and DQN, Proximal Policy Optimization (PPO) is a popular on-policy RL algorithm that has shown promising results in several applications. However, in our study, we only used MCTS and DQN for solving the optimization problem. Some examples of the influential applications in the field of RL include [1] for DQN, [2] for MCTS, and [3] for PPO. While our experimental results are not able to deterministically solve all singularities of a specific dimension, they demonstrate the feasibility of using MCTS and DQN for solving Hironaka-style games and similar optimization problems. 
We observe improvements in the policy during training, which suggests that further refinement and exploration of RL techniques could lead to better performance. We will highlights the challenges and opportunities of applying RL techniques to mathematical optimization problems and provides a valuable contribution to the field of RL. We hope that our work will inspire further research in this direction and pave the way for more successful applications of RL in solving optimization problems. §.§ The asymmetric objectives Just like most RL setups, we have a set of game states 𝔖 (in the basic Hironaka game: 𝔖 = (^n)^k where k is the number of points in a given initial state). The rules of the games can be phrased as MDPs (section <ref>), and the two players can have their rewards functions to optimize (will be detailed in section <ref>). But a subtle point in the translation between mathematical setup and the RL environment is the following. A resolution of a singularity corresponds to a fixed and deterministic policy of the first player on all states reachable from a given initial state S. In particular, a full policy of the first player is determined by its reactions upon all possible actions from the second player. This suggests that the second player is only an adversarial challenger that is allowed to freely explore/retry and find the best way to delay the game from ending. So, the overall goal is slightly different than optimizing a single pair of policies: we view a fixed resolution of singularity as a “host" of the game against all possible adversarial agents. They have asymmetric objectives: * An adversarial agent needs to delay the games of one host for as long as possible. * But the best host needs to be able to terminate in finite steps against all agents, or even minimize the amount of steps against the best adversarial agent. This is also why in section <ref>, we call the first player the “host", and the second player the “agent". §.§ The Markov Decision Processes §.§.§ The training setups of the 2-player problem More formally, the basic Hironaka game can be converted into Markov Decision Process (MDP) if one of the players is fixed. As mentioned before, the process this article adopts goes into two steps: * Fix a host to search for the best adversarial agent; * Fix one or more adversarial agent with top performances and search for a better host policy. And the training will be continuously iterating through these two processes. We also note that there are two different ways to set up this two-player problem. * One may unify the host's and the agent's observation space, action space and reward space by leaving placeholders in their definitions of states and actions. It will turn into a symmetric game that can be improved by self-play. * One may fix a pool of host policies and adversarial agents, simulate the playoffs and select the elites using evolution methods. The first view is especially useful when incorporating planning (e.g. using MCTS) into the RL setup. Whether to improve by self-play will influence implementation details, and it is not yet clear which one is better. The second view is rather interesting from research perspectives. It requires a formulation of rewards that aggregates across the whole pool of fixed agents. One can only apply the group-evaluation idea only as an evaluation metric (Elo rating, etc.), or go with full-scale evolution experiments in search for strategies that are universally good. These are very involved directions. 
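As a rough illustration of the two-step iteration described earlier in this subsection (and not the actual training code used for our experiments), the alternation can be organized as follows; the two train_* callables are placeholders for whatever RL update is plugged in (DQN, PPO, or MCTS-guided self-play), and all names are ours:

def alternate_training(host, agents, train_agent_against, train_host_against, n_rounds=10):
    # Iterate the two-step scheme: (1) fix the host and improve each adversarial
    # agent against it; (2) fix the agents and improve the host against all of them.
    for _ in range(n_rounds):
        agents = [train_agent_against(host, agent) for agent in agents]   # step 1
        host = train_host_against(agents, host)                           # step 2
    return host, agents

# Smoke test with no-op trainers (real trainers would update network weights).
host, agents = alternate_training(
    host="host-policy", agents=["agent-1", "agent-2"],
    train_agent_against=lambda host, agent: agent,
    train_host_against=lambda agents, host: host,
)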
We have made some preliminary experiments of maintaining dynamic pools of DQN agents and select players using MAP-Elites, but the results are not yet enough for a systematic presentation. We leave them for future research. For this article, we do not go into details about these two alternatives. We focus on the iterations of the 2-step process fixing agents and hosts in turn. §.§.§ Searching for the best adversarial agent The first process is to fix a host in search for the best adversarial agent policy. We define the MDP as follows. At a time step t, let S_t ⊂(Z^n)^k be the current state of the game. For a fixed host policy, a choice of coordinates I_t⊂{1,2,⋯,n} will be made along with a given game state. The agent’s observation s_t = (S_t, I_t) is such a pair consisting of the game state and the host's choice. Strictly speaking, the action space of the agent is exactly the finite set I_t. But to fit the definition of MDP, we can simply define the action space A to be the full {1,⋯,n} and impose a probability 0 for actions outside I_t. The reward R_a_t(s_t, s_t+1) for a given action a_t∈ A can be simply defined as 0 if the game state S_t+1 is not terminal, and -1 if S_t+1 is terminal. As a common practice, we look for a policy π which determines the actions by a_t=π(s_t), optimizing the following objective: 𝔼[∑_t=0^∞γ^t R_a_t(s_t, s_t+1)], where γ is a discount factor between 0 and 1, 𝔼 is the mathematical expectation over all transitions. By having this discounted sum, it also encourages the agent to delay the end the game (if at all) as much as possible. Since we are working on a deterministic MDP with deterministic policies, the 𝔼 becomes redundant for a fixed initial state and the host policy. Thus, following the definitions, we are performing a search algorithm over agent actions that optimizes the discounted sum of rewards. §.§.§ Searching for the best host Once a strong agent policy is found, we in turn fix the agent policy, and look for a good host policy to counter that. The MDP is very similar except that the state s_t consists of S_t only, and the action space a_t ranges over all subsets I⊂{1,⋯,n} with more than 1 element. The reward R'_a_t(s_t, s_t+1) is 0 if s_t+1 does not terminate, and 1 if s_t+1 terminates. A potential risk of this iterative approach is that the evolution of host-agent pair might get stuck in loops countering each other without achieving high performances over all counter-parties. Therefore, at least theoretically, the optimizing objective for host player 𝔼_π[∑_t=0^∞γ^t R'_a_t(s_t, s_t+1)] must average over all agent policies π. In practice, evaluating Equation (<ref>) is often impractical. What we do is to simply fix a collection of agent policies 𝒫, and average over them: 1|𝒫|∑_π∈𝒫[∑_t=0^∞γ^t R_a_t(s_t, s_t+1)]. Although a host is trained against a limited number of agents, generalizability is observed and will be demonstrated later in Figure <ref>. §.§ A simple metric We introduce a metric which is helpful in measuring the performance of the host and agent during evaluations. The metric is based on the idea that for a fixed amount of time steps, a smart host should be able to play more games while a smart agent should be able to play fewer games. On the host side this means more low-turn solutions while on the agent side this means that more games are elongated in terms of steps, induced by the competent adversary. 
Given a bounded subset V⊂(Z^n)^k of initial states and a fixed host-agent pair, a process of continuous game-play is run for a fixed number of steps m, with an immediate restart after terminal state by uniformly sampling another initial state from V. Denote such an m-step game sequence by G_m. The metric is a simple ratio between the number of games played and the number of steps taken: ρ_V,m = 𝔼_G_m(ρ_G_m(m))=𝔼_G_m(g_G_m(m))/m where 𝔼_G_m is the mathematical expectation over all game sequences G_m and g(m) is the number of different games during these m steps. If a limit ρ_V=lim_m→∞𝔼_G_m(ρ_G_m(m)) exists, it could be a good measure for the pair of host and agent: If ρ_V is high, the host policy is uniformly stronger than the agent, vice versa. Our tests suggest the existence of the limit, but for a formal proof significantly more formality linking the proofs of the Hironaka theorem to our MDP formulation is needed. Our experiments show that * empirically ρ_V, m is observed to have a tendency of convergence after sampling enough games and let m grow; * in practice, we fix a uniform and sufficiently large m and only use ρ_V, m as the metric due to efficiency. For the rest of this paper, for each experiment we fix a sufficiently large integer N, and we sample initial states from the set V={(x_1, ⋯, x_n)  |  1≤ x_i≤ N, ∀ 1≤ i≤ n}^k. For notation convenience, we later drop the notion V in ρ_V, m, ρ_V and only denote them by ρ_m and ρ. Although it is very useful in evaluation and selection of policies, there are a couple limitations of this metric: * There are subtle differences between ρ_V and our mathematical goal: look for the host policy that consistently generates blow-ups that require as few steps as possible in all charts (i.e., against all agent policies). * The limit is not proven to exist, and estimating it requires many sampling from random games. Nevertheless, by using this empirical metric, we are able to select strong policies. For example, the one showcased in Appendix <ref> is selected among a sequence of checkpoints using its empirical ρ value against two very basic hard-coded agent benchmark (choose-first agent and choose-last agent). §.§ Agent benchmarks and generalizability Back to (<ref>), since we sample a set of agents, the simplest scenario is when they are fixed policies throughout the training. We have the following simple benchmarks. §.§.§ Constant agents and random agents There are two obvious choices to hard-code agent strategies: the constant policy and the random policy. For the constant policy, we focus on the special case where the agent always choose the first available coordinate from the host's choice I⊂{1,⋯,n}, ordered according to the coordinate labels. We call it the choose-first agent for short. Choose-first agent is in fact one of the strongest hard-coded policy we have in terms of the empirical metric ρ introduced in section <ref>. This obviously generalizes to all agents who always choose a fixed action such as “choose-last". §.§.§ Generalizability on different agents Since in practice, we sample a limited number of agents in the objective (<ref>), it is important to ask whether the training against one agent can be generalized to another. We observe that hosts trained against constant policies can already generalize its performance to other policies. An example is shown in Figure <ref>. The host is a 2-layer residual network with a 256 hidden dimension trained against the fixed choose-first agent using double DQN (<cit.>). 
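The evaluation curves discussed next report the empirical ratio ρ_m introduced above; a minimal sketch of how such an estimate (and the hard-coded benchmark agents) can be implemented is the following, where the game_step/is_terminal helpers are assumed to behave like the sketch in Section 2 and all names are our own:

import random

def estimate_rho(sample_initial_state, host_policy, agent_policy,
                 game_step, is_terminal, m=1000):
    # Empirical rho_{V,m}: play for m moves, restarting from a freshly sampled
    # initial state whenever a game terminates; return (#games finished) / m.
    games_finished = 0
    state = sample_initial_state()
    for _ in range(m):
        I = host_policy(state)        # a subset of coordinates with |I| >= 2
        i = agent_policy(state, I)    # a coordinate chosen from I
        state = game_step(state, I, i)
        if is_terminal(state):
            games_finished += 1
            state = sample_initial_state()
    return games_finished / m

def choose_first_agent(state, I):
    # Hard-coded benchmark: always pick the smallest coordinate label offered.
    return min(I)

def random_agent(state, I):
    return random.choice(sorted(I))

A constant "choose-all" host can likewise be written as lambda state: set(range(n)).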
The plot includes the evaluations of the same host network against the choose-first agent and the random agent throughout its training process. The y-axis are approximated by ρ_n with n=1000 and use log scales. The x-axis starts at 200-step for warm-ups. We observe the following: * The host vs choose-first curve in Figure <ref> shows an expected improvement throughout training, as the choose-first agent is directly used in the roll-outs as the fixed adversarial agent. * The host vs random curve in Figure <ref> shows a correlated (though not fully in sync) trajectory of improvements against random agent policy against whom the host policy has never played. . §.§ Host benchmarks Other than a learned host network, there are the following host strategies coming from or inspired by the mathematical literatures. §.§.§ The Zeillinger host We start with the simplest policy that is guaranteed to terminate in finite steps due to Zeillinger <cit.>. Let N be the Newton polyhedron of the finite set of points S∈ℤ^n. Zeillinger proved that the host wins the polyhedra game if he uses the following strategy: * If N is an orthant, the game is already won. * If N is not an orthant, choose I={k,l}⊂{1,… n} with 1 ≤ k <l ≤ n such that w_k, is a minimal and w_l is a maximal component of a characteristic vector (w_1,…, w_n) of N. The definition of the characteristic vector is a bit technical, but can be found in <cit.>. The main point is that the Zeillinger host always picks a pair of coordinates. Although it is a winning strategy, the length of the game against any agent–and hence the size of the resolution tree–is large. Zeillinger's strategy limits each choice to a finite number of pairs (k,l) any of which ends the game in finite steps. But since the choices are not unique, there might be a number of choices at each step resulting in different depths of the trees. §.§.§ The Spivakovsky host The first known strategy with guaranteed finite termination predates Zeillinger and was given by Spivakovsky <cit.>. The Spivakovsky host works with coordinate subsets I ⊂{1,…, n} which are hitting sets: we call I⊂{1,…, n} a hitting set if at least one coordinate in I is nonzero for all vertices of the Newton polygon N. We leave the exact technical details of how I is selected for <cit.>, but we point out two key features: * At each step the Spivakovsky host picks a hitting set of the Newton polygon N. * The Spivakovsky host works with big hitting sets, and hence it is relatively slow, the game is long. §.§.§ The Random Hitting host Our hand-made calculations with Thom polynomials for a slightly modified game (the Thom game) suggest that for an optimal (i.e minimal) resolution tree the host should pick a hitting set at each step, and the size of this should be small. However, deterministic selection of a minimal hitting set, such as constant selection policy, does not work in general: one can stuck in an infinite loop of transformations. Hence we introduced the following Random Hitting host: * The host picks a minimial hitting set at each step * The choice is random among the hitting sets of minimal size. Based on a large number of examples, the Random Hitting host performs the best. A future direction of research will be to find the best performing host who works with minimal hitting sets, using a reinforcement learning model. §.§ Planning with tree search Our strongest performer is obtained through AlphaZero-style RL training with MCTS planning (see for example, <cit.>). 
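Before turning to the tree-search training, we note that the Random Hitting host described above admits a short direct sketch: enumerate the hitting sets of minimal size (treating the current state points as the vertices of the Newton polygon, and respecting the |I| ≥ 2 rule of the game) and pick one uniformly at random. The following illustration is ours and not a reference implementation:

import random
from itertools import combinations

def is_hitting_set(I, points):
    # I hits the state if every point has at least one nonzero coordinate in I.
    return all(any(p[i] != 0 for i in I) for p in points)

def random_hitting_host(points):
    # Choose uniformly among the hitting sets of minimal size (but of size >= 2).
    n = len(points[0])
    for size in range(2, n + 1):
        candidates = [set(I) for I in combinations(range(n), size)
                      if is_hitting_set(I, points)]
        if candidates:
            return random.choice(candidates)
    return set(range(n))   # fallback for degenerate states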
The idea of mixing deep learning and MCTS planning is to use the policy network as a heuristic to guide the Monte Carlo tree search in order to determine the action distribution of a state. The Monte Carlo tree search in turn provides an improved policy comparing to the vanilla result from the policy network. Using the MCTS-improved policies, the subsequent self-plays are collected and used to improve the policy network in a supervised manner. The key point is that this feedback loop achieves policy improvements in an overall unsupervised way. But this is in practice difficult to stabilize, and it requires efforts to experiment the model architectures and tune the hyper-parameters. We approximate the ρ metric mentioned in section <ref> and summarize in Table <ref>. The MCTS host is trained for surface singularities (states coming from ℤ^3) with a maximum of 20 points. The host is trained against an agent network that also improves using MCTS. We perform the training back-and-forth by first fixing the agent policy, then fixing the host policy, and then back fixing the agent policy, etc. For more training details, we refer to Appendix <ref>. In Table <ref>, * “MCTS" is our policy guided by the trained network and perform MCTS for 100 steps. * “Choose-all" is the constant host policy that picks the full set of coordinate in every single move. * “Zeillinger" uses the strategy in section <ref>. It is theoretically proven to be a winning strategy against all possible adversarial agents. In the Appendix <ref>, we demonstrate this MCTS host policy on a few mathematical meaningful examples (du Val singularities). There are two note-worthy features about this host policy: * most of the host policies (including those with high ρ trained from direct applications of DQN, PPO, etc.) are almost never capable of ending the game in finite steps against all possible agent policy. This problem occurs even for initial states with as few as 2-3 points. But our MCTS host is demonstrated to end all the initial states corresponding to du Val singularities in finite steps against all possible agents. * Although Zeillinger host always guarantees a resolution, it always blows up codimension-2 strata and in a lot of cases almost never a minimal resolution. But in Appendix <ref>, we show that our host is able to provide minimal or close-to-minimal resolutions for some du Val singularities. This shows the feasibility of applying deep RL to finding useful host policies corresponding to valid or even minimal resolutions of singularities. We only tried the most basic implementations and there are still a wide range of possible improvements. We are still exploring it in an ongoing research. § APPLICATIONS AND CONCLUSIONS While the existence of resolutions of singularities is proved (in characteristic zero), the development of effective algorithms for resolving singularities remains a challenge. Constructing explicit resolutions can be computationally demanding and often involves deep geometric and combinatorial techniques. Developing efficient algorithms that work in general settings is an ongoing research area. Our work offers evidence for the relevance of an ML-approach towards two main challenges: Minimal resolutions: Given that multiple resolutions of a singular variety may exist, a natural question is whether there exists a notion of "minimal resolution." Minimal resolutions capture the essential geometric and arithmetic properties of the singularity and provide a canonical representation. 
Understanding the existence and uniqueness of minimal resolutions is an active research topic. Classification of singularities: While there are some well-understood classes of singularities, such as ordinary double points or rational singularities, a complete classification of singularities is still an open problem. Developing a comprehensive classification scheme by extracting patterns of resolution trees using ML would contribute to a deeper understanding of singularities and their resolutions. §.§ Topology of maps with an outlook This work was originally motivated by the first authors' pure math paper <cit.> on the topology of maps, hence in this final section we collect applications which are closer to the authors' expertise. Global singularity theory is a classical subject in geometry which classifies singularities of maps f:^n →^m, and describes topological reasons for their appearance. One of its central questions is to determine the (cohomological) locus where f has a given type of singularity. This (cohomology) locus is called the Thom polynomial of the singularity, named after René Thom, who introduced and studied them in the 1950's. In <cit.> a formula for Thom polynomials of Morin singularities was developed, which can be reduced to toric geometry, in particular, to a sum of rational expressions over leaves of the blow-up tree obtained by a variant of the Hironaka game (which we call the Thom game) on a special singularity. The formula has the form Tp_k^n,m=Res_𝐳=∞(∑_L ∈𝒯_k R_L(𝐳)) ∏_i=1^k c_f(1/z_i)d𝐳 where: 𝐳=(z_1,…, z_k) are the residue variables, and the iterated residue is the coefficient of (z_1 … z_k)^-1 after we expand the rational expression on the contour z_1 ≪…≪ z_k; 𝒯_k is the set of leaves of an (arbitrary) blow-up tree of a certain Thom ideal I_k; R_L is a rational expression assigned to the label L; c_f(1/z_i) stands for a generating function of cohomology classes for the map f:^n →^m (these are called Chern classes). Using a modified resolution game, the Thom game (see Appendix), we managed to construct a blow-up tree 𝒯_k in the theorem for k≤ 7. The complexity of the formula is determined by the complexity of the Thom tree, but unfortunately, these resolution trees are quickly becoming oversized as k increases, and finding small resolution trees is crucial in understanding the structure and symmetries of the formula. Our formula for k=7 is a new result: we only knew Thom polynomials before up to k=6. We believe our approach has big potential in tackling other classical questions in enumerative geometry, such as -0.2em * The Chern positivity conjecture of Thom polynomials. Rimányi <cit.> conjectured that the Thom polynomials expressed in the Chern classes of f have nonnegative integer coefficients. This conjecture remained hopeless and intact since its formulation. * Counting plane curves with given set of singularities. * Counting maps between manifolds with given set of singularities * Determining (cohomological) locus of maps where the map has given singularities. * Conjectures on integrals in mathematical physics (Segre-Verlinde duality) icml2023 § APPENDIX: A BRIEF MATHEMATICAL BACKGROUND In 1964 Hironaka proved that it was possible to resolve singularities of varieties over fields of characteristic 0 by repeatedly blowing up along non-singular subvarieties, using a very complicated argument by induction on the dimension. 
Over the last 60 years several other proofs were discovered, with reduced complexity, including Bierstone & Milman <cit.>, Encinas & Villamayor <cit.>, Wlodarczyk <cit.>, McQuillen <cit.> and Abramovich & Tempkin & Wlodarchyk <cit.>. A resolution can be described by a series of blowing-ups, and these elementary operations can be arranged into a blow-up graph, which is a rooted tree labelled by clusters of variables. This tree is not unique; its size and complexity highly depend on some choices. Our knowledge about this complexity is very limited. To study the complexity of resolution trees, and to find optimal resolutions using reinforcement learning is the ultimate goal of what we propose in this paper. §.§ What is a resolution of singularity An affine algebraic variety X ={(x_1,…, x_n): f_1(x_1,…, x_n)=… =f_k(x_1,…, x_n)=0}⊂𝔸^n is the common zero locus of polynomial equations. Affine varieties play central role in mathematics, physics and biology. Affine varieties cut out by one polynomial equation are called affine hypersurfaces. E.g X={(x_1,…, x_n):f(x_1,…, x_n)=0} §.§ Singularities We can think of varieties as "shapes in affine spaces", and at a generic point x ∈ X the variety locally is 𝔸^r for some r, which we call the dimension of X. However, there are special, ill-behaved points, where the local geometry of X is less patent. The affine variety X is singular at a point a ∈ X if the Jacobian matrix Jac(X,a)=(∂ f_i/∂ x_j)(a) at a is of rank smaller than n-dim(X). The set of singular points of X is called the singular locus of X. §.§ Blow-up: turning singularities into smooth points Resolution of singularities is a classical central problem in geometry. By resolution we mean that we substitute the original, possibly singular X with a nonsingular Y with a proper birational map f:Y → X such that f is an isomorphism over some open dense subset of X. The celebrated Hironaka theorem <cit.> from 1964 asserts that such resolution exists for all X, and it can be constructed as a series of elementary operations, called blowing up. Blowing up or blow-up is a type of geometric transformation which replaces a subspace of a given space with all the tangent directions pointing out of that subspace. For example, the blow-up of a point in a plane replaces the point with the projectivized tangent space at that point, and this gives a resolution of the nodal curve y^2-x^2(x+1)=0. Over the field of real numbers, a picture can be illustrated in Figure <ref>. More precisely, Hironaka proved that the resolution of singularities can be achieved by a sequence of blowups Y=X_n → X_n-1→…→ X_0=X if the characteristic of the base field is zero. This beautiful and fundamental work was recognized with a Fields medal in 1970. Villamayor, as well as Bierstone and Milman independently, have described the algorithmic nature of the process of resolving singularities in characteristic zero. Hironaka's theorem has been proven through de Jong's innovative ideas, leading to simple proofs by Abramovich and de Jong, as well as by Bogomolov and Pantvev. The most recent significant development in devising a straightforward resolution algorithm has been made by Abramovich, Tempkin, and Vlodarczyk, as well as by McQuillen, who proposed a simple stacky presentation through the use of weighted blow-ups. 
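For concreteness, the singularity test and the blow-up example above are easy to reproduce with a computer algebra system. The following sympy sketch (ours, purely illustrative) checks that the nodal curve y^2 - x^2(x+1) = 0 is singular exactly at the origin, and that the chart y = t x of the blow-up of the origin produces a smooth strict transform:

import sympy as sp

x, y, t = sp.symbols('x y t')
f = y**2 - x**2*(x + 1)

# Jacobian criterion: the curve is singular where f and both partial derivatives vanish.
print(sp.solve([f, sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True))   # [{x: 0, y: 0}]

# Blow up the origin in the chart y = t*x; the factor x**2 is the exceptional divisor.
g = sp.expand(f.subs(y, t*x))        # = x**2 * (t**2 - x - 1)
strict = sp.cancel(g / x**2)         # strict transform: t**2 - x - 1
print(strict)

# The strict transform is smooth: its x-derivative is the constant -1,
# so the gradient never vanishes on the curve.
print(sp.diff(strict, x), sp.diff(strict, t))   # -1, 2*t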
§ SOLUTIONS TO THE RESOLUTION PROBLEM In the literature, there appear several proofs for Hironaka's celebrated theorem on the resolution of singularities of varieties of arbitrary dimension defined over fields of characteristic zero. These proofs associate invariants to singularities, and show that certain type of blow-ups improve the invariant. We can interpret resolution as a game between two players. Player A attempts to improve the singularities. Player B is some malevolent adversary who tries to keep the singularities alive as long as possible. The first player chooses the centres of the blowups, the second provides new order functions after each blowup. The formulation of the resolution as a game goes back to Hironaka himself. He introduced the polyhedra game where Player A has a winning strategy, which provide resolution of hypersurface singularities. He formulated a "hard" polyhedra game, where a winning strategy for Player A would imply the resolution theorem in full generality, but such winning strategy does not necessarily exist. Later Hauser defined a game which provided a new proof of the Hironaka theorem. The most basic version of the game is defined in section <ref>. Here we list some (but not all) known variations with their key features. We introduce the Thom game, which provides formulas for Thom polynomials and integrals over Hilbert scheme of points as explained in section <ref>. §.§ Hauser game This version of the Hironaka game was suggested by Hauser <cit.>. A simple winning strategy was given by Zeillinger <cit.>, which gives a resolution process for hypersurfaces singularities. The rules: * states: A finite set of points S ⊂ℕ^n, such that S is the set of vertices of the positive convex hull Δ={S+ℝ^n_+}. * move: The host chooses a subset I⊂{1,2,⋯, n} such that |I|≥ 2. The agent chooses a number i∈ I. * state change: Given the pair (I, i) chosen by the host and agent, for x=(x_1,⋯,x_n)∈ℤ^n we define T_I,i(x)=(x_1',…, x_n') where x_j' = x_j, if i≠ j ∑_k∈ I x_k, if i=j , The new state S' is formed by the vertices of the Newton polyhedron of Δ'={T_I,i(x):x∈ S}. * terminal states: a state S is terminal if it consists of one single point. In short, the host wants to reduce the size of S as quickly as possible, but the agent wants to keep the size of S large. §.§ Hironaka's polyhedra game This is the original Hironaka game from 1970, see <cit.>. A winning strategy for the host was given by Mark Spivakovsky in 1980 <cit.> which proved the resolution theorem for hypersurfaces. The rules: * states: A finite set of rational points S ⊂ℚ^n, such that ∑_i=1^n x_i>1 for all (x_1,…, x_n)∈ S, and S is the set of vertices of the positive convex hull Δ={S+ℝ^n_+}. * move: The host chooses a subset I⊂{1,2,⋯, n} such that |I|≥ 2 and ∑_i∈ Ix_i≥ 1 for all (x_1,…, x_n)∈ S. The agent chooses a number i∈ I. * state change: Given the pair (I,i) chosen by the host and agent, for x=(x_1,⋯,x_n)∈ℤ^n we define T_I,i(x)=(x_1',…, x_n') where x_j' = x_j, if i≠ j ∑_k∈ I x_k -1, if i=j , The new state S' is formed by the vertices of the Newton polyhedron of Δ'={T_I,j(x):x∈ S}. * terminal states: a state S is terminal if it consists a point (x_1,…, x_n) such that ∑_i=1^n x_i ≤ 1. §.§ Hard polyhedra game The hard polyhedra game was proposed by Hironaka in 1978 <cit.> Hironaka has proved that a winning strategy for the host of this game would imply the local uniformization theorem for an algebraic variety over an algebraically closed field of any characteristic. 
However, a famous result of Mark Spivakovsky <cit.> showed that the host does not always have a winning strategy The rules: * states: A finite set of rational points S ⊂ℚ^n, such that ∑_i=1^n x_i>1 for all (x_1,…, x_n)∈ S, the denominators are bounded by some fix N, and S is the set of vertices of the positive convex hull Δ={S+ℝ^n_+}. * move: The host chooses a subset I⊂{1,2,⋯, n} such that |I|≥ 2 and ∑_i∈ Ix_i≥ 1 for all (x_1,…, x_n)∈ S. The agent chooses some element i∈ S and modifies the Newton polygon Δ to a set Δ^* by the following procedure: first, the agent selects a finite number of points y=(y_1,…, y_n), all of whose coordinates are rational numbers with denominators bounded by N as above, and for each of which there exists an x = (x_1, …, x_n)∈Δ which satisfy some basic relations. Δ^* is then taken to be the positive convex hull of Δ∪{selected points}. * state change: Given the pair (I,i) chosen by the host, for x=(x_1,⋯,x_n)∈ℤ^n we define T_I,i(x)=(x_1',…, x_n') where x_j' = x_j, if i≠ j ∑_k∈ I x_k -1, if i=j , The new state S' is formed by the vertices of the Newton polyhedron of Δ'={T_I,j(x):x∈ S}. * terminal states: a state S is terminal if it consists a point (x_1,…, x_n) such that ∑_i=1^n x_i ≤ 1. §.§ The Stratify game In 2012 Hauser and Schicho <cit.> introduced a combinatorial game, called Stratify. It exhibits the axiomatic and logical structure of the existing proofs for the resolution of singularities of algebraic varieties in characteristic zero. The resolution is typically built on a sequence of blowups in smooth centres which are chosen as the smallest stratum of a suitable stratification of the variety. The choice of the stratification and the proof of termination of the resolution procedure are both established by induction on the ambient dimension. §.§ Thom game This is a modified version of the Hironaka game which we developed in the present paper to find optimal solutions for Thom polynomials and integrals over the Hilbert scheme of points as explained in section <ref>. The Thom game is a weighted version of the Hironaka game. It has a winning strategy, and every run of the game provides a blow-up tree, which encodes a formula for Thom polynomials of singularities, answering long-standing question in enumerative geometry. The rules: * states: A pair (S,w), where: S is a finite set of points S ⊂ℕ^n, such that S is the set of vertices of the positive convex hull Δ={S+ℝ^n_+}; w=(w_1,…, w_n)∈ℕ^n is a weight vector associating a nonnegative integer weight to all coordinates. * move: The host chooses a subset I⊂{1,2,⋯, n} such that |I|≥ 2 and ∑_i∈ Ix_i≥ 1 for all (x_1,…, x_n)∈ S. The agent chooses an i∈ I such that w_i is minimal in {w_j: j∈ I}. * state change: Given the pair (I,i) chosen by the host and agent, for x=(x_1,⋯,x_n)∈ℤ^n we define T_I,i(x)=(x_1',…, x_n') where x_j' = x_j, if i≠ j ∑_k∈ I x_k, if i=j , The new state S' is formed by the vertices of the Newton polyhedron of Δ'={T_I,i(x):x∈ S}, shifted by a positive integer multiple of (-1,…, -1) such that S' still sits in the positive quadrant, but any further shift will move it out. The new weight vector is w'_j=w_j, if j=i or j∉ I w_j-w_i if j ∈ I∖{i}, * terminal states: a state S is terminal if it consists of one single point. §.§ The Abramovich-Tempkin-Wlodarczyk game In 2020 Abramovich, Tempkin and Wlodarczyk <cit.> introduced a new resolution algorithm, based on weighted blow-ups. Quillen <cit.> independently concluded similar results. 
Their resolution process uses intrinstic invariants of singularities which improves after each blow-up, resulting in a significantly simpler proof of the Hironaka theorem. In their original version, there is no choice in the blowing-up process, and our calculations with the Thom ideals indicate that the ATW resolution can be far from being optimal in terms of the number of leaves of the blowing-up tree. However we are working on transforming the ATW algorithm into a game with a view towards an ML approach. § A HOST TRAINED WITH MCTS The host in section <ref> is trained using a custom implementation of AlphaZero, with host and agent being different neural network and trained back-and-forth using MCTS planning taking turn fixing the counter-party. We evaluate the host against the choose-first agent and the choose-last agent (the benchmark agents who always pick the first/last action) and cherry-pick the checkpoint who has the best ρ score against both of them. Both host and agent networks use the simplest fully-connected neural network with ReLU as their activation functions. The detailed spec is in Table <ref>. The rule of the game uses the basic Hironaka game (section <ref>) with one extra transformation at the beginning of each turn: translating all points altogether, so that all coordinates remain non-negative while for each coordinate, at least one point touches the coordinate plane (the coordinate being 0). In the resolution of hypersurface singularities, this corresponds to removing exceptional divisors and only looking at the strict transforms. Although our policy does not outperform a guaranteed winning strategy in terms of ρ (see Table <ref>), it is getting close. Note that ρ^-1 only measures the overall average steps to end a game, which does not reveal other information such as whether it is guaranteed winning or whether the resolution is minimal. In the following subsections, we fix our host policy, and demonstrate a few complete state-action trees against all agent choices with initial states corresponding to du Val singularities on surfaces. §.§ Conventions Let us first explain some conventions and notation changes. In this section, we make a minor change of notation: * the label of coordinates will be 0-based instead of 1-based (see the notation of section <ref>). Concretely, host action I⊂{0, 1, ⋯, n-1}. We would also like to repeat and highlight the additional translation rule during state transition: * after the linear transformation, we post-compose with a translation, so that all coordinates remain non-negative while for each coordinate, there exists at least one point which touches the coordinate plane (the coordinate being 0) Recall that the during the state transition, the host first chooses a subset I⊂{0, ⋯, n-1} and agent chooses a number from I. Although the state transition consists of 1) host action, 2) agent action, we note that * it is only necessary to draw the agent actions as edges, because all possible agent choices will recover the subset I. Therefore, in the full trees we will demonstrate, the edges only correspond to agent choice, and they will be labeled by their corresponding coordinate. §.§ A2 surface singularity We first show the tree of A2 singularity where the host easily found the minimal resolution. We use this simple example to explain in details about how to parse our pictures. We consider affine surfaces defined in ℂ^3, and name the three coordinates x,y,z. 
The initial state (the root) consists of three points: (2,0,0), (0,2,0), (0,0,3) corresponding to the hypersurface x^2 + y^2 + z^3 = 0. There are 3 edges coming out of the root and labelled as 0, 1, 2. This implies that our policy chose the action of I={0,1,2} (according to the notation of <ref>) which corresponds to the blow-up center x=y=z=0. The agent now has 3 charts (actions) to pick from. For example, after taking action 0, according to the rule of the basic Hironaka game plus our additional translation rule, the three points undergo the following operations: * becoming (2,0,0), (2,2,0), (2,0,3) after adding the first and the second coordinate to the third coordinate; * removing (2,2,0) and (2,0,3) as they are in the interior of the Newton polytope. * translating (2,0,0) to (0,0,0) according to our extra rule of translation explained in the beginning of Appendix <ref>. Since we have only one point left, the game terminates at (0,0,0) after the agent took the action 0. The only way to continue the game is for the agent to take the action 2. After this action, the three points first become (2,0,2), (0,2,2), (0,0,3). None of the points are interior points, therefore all of them survive and get translated to (2,0,0), (0,2,0), (0,0,1) along the vector (0,0,-2) so that the first two points have the third coordinate being 0. The next host action is also I={0,1,2}. Note that the A2 singularity is already smoothed after this blow-up, while the game can still continue after the agent takes action 0 or 1. This is simply because the terminating condition of the basic Hironaka game is merely a sufficient condition of smoothness. For example, the state (1,0,0), (0,0,1) corresponds to the hyperplane x + z = 0 which is already smooth. One can easily modify the terminating rule to include this case (e.g. the sum of all coordinates being 1 for at least one point), but we do not do so for consistency. As a result, this A2 resolution according to the host policy is the minimal resolution. For readers not having algebraic geometry background, we encourage them to still continue with the D4 singularity where we prepared a detailed chart-by-chart analysis demonstrating how to parse the blow-up information purely mathematically. §.§ D4 surface singularity We start with the equation x^2 + y^2z + z^3 = 0 which defines a rational double point of type D4. The host demonstrates a slightly different blow-up path that ended up with the same D4 Dynkin diagram. The mapping between our full policy tree and the RL environment is already explained in the A2 example. So, we use this example to demonstrate a chart-by-chart calculation for readers not coming from algebraic geometry background and still looking to verify the geometry on the mathematical side. §.§ A more detailed chart-by-chart analysis Now we are looking at the D4 resolution (Figure <ref>). Recall that the game may not end even when the chart is smooth. We mark the ealiest smooth states in blue. Throwing away all the sub-trees coming after the blue nodes, we see that it takes two blow-ups to resolve this D4 singularity. §.§.§ Step 1, host move The host chose the coordinates I = {0, 2} as the first step, which corresponds to blowing up the line defined by x=z=0. The resulting surface is no longer an affine surface, and to look at the whole picture, one must try to observe through two different charts (think about atlas, charts in manifold theory): * (u,v,w) through the change of variables u=x, v=y, uw=z. * (u',v',w') through u'w'=x, v'=y, w'=z. 
Choosing any chart corresponds to an agent action. And the change of variables corresponds to the linear transformations of the game states when looking at the exponents. This step is particularly surprising for me at a first glance, as the usual approach is to blow up x=y=z=0 (e.g., the first chart would be u=x, uv=y, uw=z, etc). But turns out it doesn't hurt the result. A smart agent should choose the second chart, as the origin of the first chart is a already smooth point. Let us check this statement by hand: Plugging in u=x, v=y, uw=z, we obtain u^2+uv^2w+u^3w^3=u(u+v^2w+u^2w^3)=0. The equation now defines two surfaces: * u=0 which corresponds to the exceptional divisor of the blowup of 𝔸^3. It is the "shadow" coming from the modification of the outer space 𝔸^3, and spans outside the surface. * u+v^2w+u^2w^3=0 which corresponds to the strict transform of the original surface. It is the real modification of the original surface, and it is what we care. One can apply the Jacobian criterion to verify that this is a smooth surface. From the first chart, we can see that the exceptional curves consist of two lines: they are defined by u+v^2w+u^2w^3=0, u=0. By plugging u=0 in, the system of equation becomes v^2w=0, u=0, or equivalently, the line u=v=0 unions the line u=w=0. §.§.§ Step 1, agent move The agent chose the second chart. Now by plugging in u'w'=x, v'=y, w'=z, we obtain u'^2w'^2+v'^2w'+w'^3=w'(u'^2w'+v'^2+w'^2)=0. Again, * w'=0 is the exceptional divisor of the blowup on the ambient space 𝔸^3. * u'^2w'+v'^2+w'^2=0 is a singular surface. It glues together with u+v^2w+u^2w^3=0 and we are just seeing the two parts of the same surface. (As a side note, the two exceptional divisors from different charts u=0 and w'=0 are not mutually exclusive. They glue together to form one single quasi-projective variety. We are also looking at two parts of the same exceptional divisor. As a result, the exceptional line u=v=0 and w'=v'=0 are in fact charts of the same ℙ^1.) An easy application of Jacobian criterion tells us that the origin (0,0,0) on u'^2w'+v'^2+w'^2=0 still needs to be resolved. §.§.§ Step 2, host move and agent move Now we rinse and repeat from the new equation u'^2w'+v'^2+w'^2=0. The host chose all coordinates this time, which corresponds to blowing up at u'=v'=w'=0. With the analysis above as well as the help from the blowup tree, we see that only one chart is interesting (agent's choice of coordinate 0). Altogether, they correspond to the change of variable: u”=u', u”v”=v', u”w”=w'. By plugging in, we see u”^3w”+u”^2v”^2+u”^2w”^2=u”^2(u”w”+v”^2+w”^2)=0. Ignoring the exceptional divisor u”=0 (for now), we move on to the next singular surface u”w”+v”^2+w”^2=0. The next step can be easily carried out by imitating our previous procedures, and we leave it as an exercise for the interested readers. Now, if we backtrace all the steps and keep track of the exceptional curves, passing to its dual graph, we will see the famous Dynkin diagram D4. §.§ A few more surface singularities In addition, we include the full action trees of A3 and D5 as follows. §.§.§ A3 The host policy tree on A3 is very similar to A2. But the difference is that the first blow-up created two different components in the exceptional locus. From the equation, the agent's choice at coordinate 2 corresponds to plugging in x=uz, y=vz in x^2+y^2+z^4=0. The exceptional locus becomes the intersection of z=0 and u^2+v^2+z^2=0 and it consists of two complex lines intersecting at the origin. 
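The chart computations in this and the previous subsections are straightforward to verify symbolically; the following sympy sketch (again ours, for checking only) reproduces the two Step-1 charts of the D4 example and the A3 chart above:

import sympy as sp

x, y, z, u, v, w = sp.symbols('x y z u v w')

d4 = x**2 + y**2*z + z**3

# D4, step 1, first chart: u = x, v = y, u*w = z.
print(sp.factor(sp.expand(d4.subs({x: u, y: v, z: u*w}))))
# factors as u*(u + v**2*w + u**2*w**3); the exceptional divisor is u = 0

# D4, step 1, second chart: u*w = x, v = y, w = z.
print(sp.factor(sp.expand(d4.subs({x: u*w, y: v, z: w}))))
# factors as w*(u**2*w + v**2 + w**2); the strict transform is still singular at the origin

# A3: x**2 + y**2 + z**4, blown up at the origin in the chart x = u*z, y = v*z.
a3 = x**2 + y**2 + z**4
print(sp.factor(sp.expand(a3.subs({x: u*z, y: v*z}))))
# factors as z**2*(u**2 + v**2 + z**2), matching the exceptional locus described above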
§.§.§ D5
The full action tree of our policy on D5 is shown in Figure <ref>. Note that it is not exactly a minimal resolution, but it is very close. The host's suboptimal choice is marked in red. Had the host chosen I={0,1,2} for that state transition, it would have resolved the singularity in one fewer step and recovered the minimal D5 resolution.
§.§.§ E8
Finally, we include the full action tree of the E8 singularity in Figure <ref>. Our host policy was able to terminate the game within 9 steps against the strongest adversarial agent.
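For readers who wish to reproduce such game traces, a small driver in the spirit of the sketch from Section 2 can be used; it assumes the game_step helper defined there, adds the translation step used in this appendix, and the two lambda policies in the example are trivial stand-ins (a constant choose-all host and a choose-last agent), not the trained networks:

def shift(points):
    # Translation step: move the set so that every coordinate touches zero.
    mins = [min(p[j] for p in points) for j in range(len(points[0]))]
    return [tuple(p[j] - mins[j] for j in range(len(p))) for p in points]

def play(points, host_policy, agent_policy, game_step, max_steps=50):
    # Play one game and return the number of steps taken, or None if it did
    # not terminate within max_steps.
    for step_count in range(1, max_steps + 1):
        I = host_policy(points)
        i = agent_policy(points, I)
        points = shift(game_step(points, I, i))
        if len(points) == 1:
            return step_count
    return None

e8 = [(2, 0, 0), (0, 3, 0), (0, 0, 5)]
print(play(e8, host_policy=lambda pts: {0, 1, 2},
           agent_policy=lambda pts, I: max(I),
           game_step=game_step))   # game_step as sketched in Section 2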
http://arxiv.org/abs/2307.03341v1
20230707005357
A model for molecular hydrogen-dependent star formation in simulations of galaxy evolution
[ "Ezequiel Lozano", "Cecilia Scannapieco", "Sebastian E. Nuza" ]
astro-ph.GA
[ "astro-ph.GA" ]
§ INTRODUCTION
The star formation rate (SFR) is a key characteristic of galaxies. In the context of the standard cosmological model, the SFR is determined by a combination of various processes that take place over the course of a galaxy's lifetime, such as gas cooling, star formation, chemical enrichment, and feedback from supernovae and galactic nuclei. These processes are influenced by factors like mergers, interactions, and mass accretion, which affect the amount and properties of the gas from which stars form. The density of a gas cloud is believed to be the most important factor in determining its star formation rate, although the details of this process are not yet fully understood. Observationally, the total gas density is found to be correlated with the star formation rate <cit.>, and this correlation is even stronger when considering the molecular gas <cit.>. As the formation of dark matter halos and galaxies is highly non-linear, numerical simulations have become the preferred tool to investigate how galaxies form and evolve from early times up to the present. This type of simulation naturally includes mergers/interactions and continuous gas accretion, processes that may induce changes in the SFR. However, there are still significant uncertainties in the modelling of the evolution of the baryonic component, since the physical processes that affect baryons (such as star formation, feedback, and chemical enrichment) take place at scales that are too small to be resolved directly. As a result, these processes are introduced using sub-grid physics, which involves a number of adjustable parameters that are not always independent of one another or constrained observationally. This can lead to inconsistencies in the predictions of different models <cit.>. Because of its importance in galaxy formation, it is critical for simulations to accurately describe the star formation process at the scales that can be resolved, as well as the associated feedback effects. In this work, we present a new model of star formation that takes into account the chemical composition and the relative abundance of atomic and molecular gas phases. The model is grafted onto the cosmological, magnetohydrodynamical code Arepo <cit.>. The remainder of this work is organized as follows: in Sec. <ref>, we describe our new model; in Sec. <ref>, we discuss the results for simulations of an isolated galaxy; and in Sec. <ref>, we present our conclusions.
§ THE STAR FORMATION MODEL Our star formation model is designed to track the time evolution of the molecular and atomic phases of hydrogen in a gas cloud <cit.>, and be applied to numerical simulations of the formation of galaxies. By following the evolution of these two phases separately, we can construct different prescriptions for a star formation law linked to the amount of molecular and atomic gas in different proportions. In this way, we improve the traditional star formation law where the star formation rate is only a function of the total gas density and does not depend explicitly on the molecular or atomic fractions <cit.>. The evolution of the neutral gas and stellar components in a gas cell is obtained by solving a system of coupled equations for the atomic (a_f), molecular (m_f), and (newly formed) star (s_f) fractions, namely: ȧ_f(t)= -a_f(t) τ_C^ -1 + (η + R) ψ(t) , ṁ_f(t)= a_f(t) τ_C^ -1 - (η + 1) ψ(t) , ṡ_f(t)= (1 - R) ψ(t) . The exchange of mass between phases is driven by dissociation of molecular hydrogen – with an efficiency per unit of star formation rate η – and condensation of atomic hydrogen – catalyzed by dust grains <cit.> and regulated by the time parameter τ_C. For the star formation rate we use the following parametrization: ψ(t) = m_w m_f(t) + a_w a_f(t)/τ_S , where m_w and a_w are the relative weights of the atomic and molecular fractions (which can vary for different models) and τ_S is a typical time-scale. The mass return from stars to interstellar gas due to supernovae is characterized by the parameter R, and assumed to be in the atomic phase. This assumption is based on the instantaneous recombination hypothesis, which states that the ionized hydrogen transforms into atomic hydrogen in a timescale much shorter than the integration time. This allows us to accurately model the mass return and its effects on the interstellar gas. Most of the input parameters of the model are constants that are well constrained empirically or theoretically, that we take from the literature. In contrast, τ_C depends on the properties of the gas, particularly the gas metallicity (∝ 1/Z) and density (∝ 1/ρ), and these dependencies are considered in our model following <cit.> and <cit.>. Furthermore, η depends on the metallicity of the newly formed stars – assumed to be inherited from the gas – and the integration time step; Fig. <ref> shows the relation between η and Z for a time-step of 1 Myr. § RESULTS We run several simulations using an idealized initial condition for a Milky Way-mass halo with a virial mass of 10^12 M_⊙, producing a galaxy with spiral morphology. We assumed different values for the input parameters which link the star formation to the atomic/molecular component (a_w and m_w) and run the simulations for 2 Gyr. In this work, we focus on the simulation assuming that star formation is only linked to the molecular fraction (m_w=1, a_w=0). Fig. <ref> shows the projected total and molecular gas densities, in a face-on view, for our simulation at the final snapshot. The gas is well-settled into a rotationally-supported structure reminiscent of a spiral galaxy, with the usual complexity of the hydrodynamical gas evolution. As expected, the molecular gas does not follow exactly the distribution of the total gas, as its formation depends not only on the gas density but also on the gas metallicity. 
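To illustrate how the system of equations of the previous section behaves for a single gas cell, the following Python sketch integrates the three fractions with scipy; all parameter values below (τ_C, τ_S, η, R and the weights) are placeholder numbers chosen purely for illustration, not the calibrated values used in our simulations:

from scipy.integrate import solve_ivp

# Placeholder parameters (illustration only; in the model tau_C depends on the
# gas metallicity and density, and eta on the stellar metallicity).
tau_C = 100.0     # condensation time-scale [Myr]
tau_S = 2000.0    # star formation time-scale [Myr]
eta   = 500.0     # H2 dissociation efficiency per unit of star formation
R     = 0.3       # mass fraction returned by stars (to the atomic phase)
m_w, a_w = 1.0, 0.0   # star formation linked to the molecular fraction only

def rhs(t, y):
    a, m, s = y
    psi = (m_w * m + a_w * a) / tau_S          # star formation rate
    da = -a / tau_C + (eta + R) * psi
    dm =  a / tau_C - (eta + 1.0) * psi
    ds = (1.0 - R) * psi
    return [da, dm, ds]

# Start from purely atomic gas and evolve for 2 Gyr.
sol = solve_ivp(rhs, (0.0, 2000.0), [1.0, 0.0, 0.0])
a_f, m_f, s_f = sol.y[:, -1]
print(f"atomic={a_f:.3f}  molecular={m_f:.3f}  stellar={s_f:.3f}  sum={a_f+m_f+s_f:.3f}")
# The sum stays at 1 by construction, since the right-hand sides add up to zero.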
It is important to note that it is the molecular hydrogen that, in our model, is actively participating in the process of forming new stars, implying that the stellar spatial distribution will be determined by the location of molecular clouds. Fig. <ref> compares the results of our simulation (solid line in blue) and a reference model (dashed line in red) in which the star formation rate is solely determined by the total gas density as in standard implementations. The upper two panels of this figure show the evolution of the integrated stellar mass and SFR. The most important effect of linking the SFR to the molecular fraction is a delay in the onset of star formation of around 200 Myr with respect to the reference model. This delay is a consequence of the evolution of the gas in the molecular phase, which needs some time to be created and is regulated by the level of enrichment of the gas, determined, in turn, by star formation. After this first period, when enough stars have formed and the interstellar gas is enriched, the SFR reaches higher values compared to the reference model, producing a significant rise in the total stellar mass formed. At the end of the simulation, our model produced a higher total stellar mass even though the stars started to form later compared to the reference model. The bottom panel of Fig. <ref> displays the oxygen abundance profile at the end of the simulations. Although the two profiles are similar in the outer regions, our model has a more pronounced peak at the center of the disc, similar to observed profiles (similar results are obtained for other elements). The differences follow the variations in the distribution of molecular and total gas that determine the shape and levels of the metallicity profiles. § CONCLUSIONS We presented a new model to describe the atomic and molecular fractions within a gas cloud, which is used to implement a star formation rate coupled to these fractions in arbitrary proportions. Our model has been specifically designed to be used in conjunction with simulations of galaxy formation in a cosmological context. In order to test the effectiveness of our model, we applied it to simulations of isolated halos with a mass comparable to that of the Milky Way, and compared our findings to a reference model where the star formation rate is determined solely by the total gas density. Our results indicate that the dependence of the star formation rate on the amount of molecular gas delays the onset of star formation, and could affect the metallicity profiles in disc galaxies. The authors acknowledge support provided by UBACyT 20020170100129BA baaa
http://arxiv.org/abs/2307.00806v1
20230703074256
Compositions of Knots Using Alexander Polynomial
[ "G Infant Gabriel", "Dr N Uma" ]
math.GT
[ "math.GT", "math.AT", "57K10, 57K12, 57K14, 57K45" ]
Knot theory is the mathematical study of knots. In this paper, we study the composition of two knots. Knot theory belongs to the mathematical field of topology, where topological concepts such as topological spaces, homeomorphisms, and homology are considered. We review the basics of knot theory, with special focus on the composition of knots and on knot determinants computed via Alexander polynomials, and we introduce techniques to generalize the computation to compositions of knots, in order to describe how knot determinants behave when two knots are composed.
Keywords: Topology, Knot theory, Homeomorphisms, Reidemeister Moves, Alexander Polynomials and Composition of Knots.
§ INTRODUCTION
Knot theory is a new kind of applicable mathematics. Edward E. David, Jr. <cit.> remarked on the utility of mathematics conceived symbolically. The history of knot theory starts in 19th-century physics, with the work of Gauss on computing linking numbers in a system of linked circular wires. D. Silver <cit.> has also studied the early history of knots and the origins of the word topology. In 1867, the vortex model of the atom by W. T. Thompson (Lord Kelvin) <cit.> was presented. The American mathematician J. W. Alexander <cit.> was the first to suggest that knot theory is extremely important in the study of 3-dimensional topology, a point further underlined by the German mathematician H. Seifert. Later, K. Murasugi <cit.> studied the relationship between algebraic geometry and knot theory. Knot theory was first promoted by the physicist William Thomson (Baron Kelvin), who hypothesized that atoms of different elements can be described by different knots. Though Thomson's theory was later proved incorrect, his work inspired Peter Tait, who developed many concepts that are used today in applications of knot theory to biology, chemistry and physics. Colin Adams <cit.> noted that knot theory has been used in the modelling of DNA and the effects of enzymes, and in statistical mechanics when examining the interactions between particles in a system. By using theoretical models, scientists and mathematicians can make these concepts concrete enough to manipulate and work with. In this paper, we study knots that are embedded in S^3. A knot can be projected onto a plane (or simply drawn on paper). These projections are called knot diagrams (Figure 1). To avoid ambiguity, the following restrictions are considered:
* At each crossing, one string segment passes over and the other passes under (this is usually indicated by drawing a gap in the bottom segment).
* Each crossing involves exactly two segments of the string.
* The segments must cross transversely.
* At each crossing there is always one overstrand and one understrand.
* An arc is a piece of the knot that passes from one undercrossing to another with only overcrossings in between (an unbroken line).
The following diagrams are projections of knots.
§ PRELIMINARIES
Basic definitions and concepts are presented in this section <cit.>. A knot K is the image of an embedding h:S^1→ S^3. If K⊆ S^3 is a knot, then it is homeomorphic to the unit circle S^1; the knot itself is a smooth embedding of the unit circle in S^3. A knot consists of only one component; a link, on the other hand, is a finite union of disjoint knots.
<cit.> Knots K_1 and K_2 in S^3 are equivalent if there exist a homeomorphism h:S^3→ S^3 such that h(K_1)=h(K_2) <cit.> If H is an isotopy between ambient spaces H:S^3× I ×→ S^3 , then H is an ambient isotopy. The knot can be deformed to any expected manner. The arcs can be bent and moved through space without passing through one another (knot can be shrunk or grown). Also, it is not permitted to pull the knot so tight that it unknots itself by disappearing into a point. <cit.> * R1 allows to remove (or introduce) a twist in a diagram. The result is that the knot will have one fewer (or one more) crossing. * R2 lets separate two strings that lie on top of each other or vice versa. This will add or remove two crossings. * R3 allows moving a strand from one side of a crossing to the other. This also works if the strand is moved above the other two strands. R3 does not affect the number of crossings in the current projection. If a deformation of the diagram uses R2 and R3, referred as Regular Isotopy (or Planar Isotopy).The regular isotopy is an equivalence relation for the knot diagrams and is not defined for the knot embedding. The Reidemeister moves describes the procedures performed on diagrams <cit.> The crossing number, c(K) of a knot K, is the minimum number of crossings, that occur in any diagram of K. <cit.> A knot is oriented if it has an orientation assigned and refered by arrows. If a knot K has a given orientation, -K refers as reverse orientation. A knot is called invertible if K and -K are equivalent. All knots in Figure 2 are invertible. <cit.> The writhe of an oriented knot, w(K), is the sum of the crossings with signs as shown in Figure 4. <cit.> A knot K is called alternating if its diagram, the undercrossing and overcrossings alternate around K. <cit.> The knot sum or connected sum K_1 # K_2 is formed by placing two knots side by side, removing a small arc from each knot and then joining the knots together with two new arcs. <cit.> A knot is called composite if it can be written as the sum of K_1 and K_2 , neither of which is the unknot and prime if it is not composite. Knot tables only show prime knots, see Figure 2. § COMPOSITION OF KNOTS Given two projections of knots, a new knot obtained by removing a small arc from each knot projection and then connecting the four endpoints by two new arcs as in Figure 5. The resulting knot is the composition of the two knots. If we denote the two knots by the symbols K_4_1 and K_3_1 , then their composition is denoted by K_4_1 # K_3_1 . The two projections should not overlap, and the two arcs are chosen to remove the outside of each projection to avoid any crossings i.e., they do not cross either the original knot projections or each other (Figure 6). A knot a composite knot if it can be expressed as the composition of two knots, neither of which is the trivial knot. The knots that makeup the composite knot is called factor knots. The composition of a knot K with the unknot, the result is again K. (Figure 7). If a knot is not the composition of any two nontrivial knots, we call it a prime knot. An orientation is defined by choosing a direction to travel around the knot. The orientation on K_3_1 matches the orientation on K_5_2 in K_3_1#K_5_2 , resulting in an orientation for K_3_1#K_5_2 , or the orientation on K_3_1 and K_5_2 do not match up in K_3_1#K_5_2 .If the orientations do match up in all the compositions of the two knots then it will yield the same composite knot. 
§ THE ALEXANDER POLYNOMIAL OF COMPOSITION

In 1928, J.W. Alexander <cit.> introduced a polynomial invariant of knots, now called the Alexander polynomial, computed as the determinant of a matrix whose entries are read off from a knot diagram. Alexander determined the entries of the matrix from the crossings and arcs of the diagram. The Alexander polynomial does not depend on the indexing of the crossings and arcs, nor does it depend on which row and column are eliminated from the crossing/arc matrix. The skein relation discovered by John Conway in 1969 provides another way to compute the Alexander polynomial. The Alexander polynomial is invariant up to multiplication by ± t^N, where N is some integer.

Numerical example: the composition K_3_1#K_1_1. Applying Alexander's construction to the composite knot K_3_1#K_1_1, we obtain
M_K_3_1#K_1_1=[ 1-t 0 -1 t; t 1-t 0 -1; 0 -1 t-t^2 0; -1 t 0 1-t; ]
The Alexander matrix A_K is obtained from the matrix M by deleting row n and column n, so that
A_K_3_1#K_1_1=[ 1-t 0 -1; t 1-t 0; 0 -1 t-t^2; ]
The Alexander polynomial Δ_K(t) of a knot K is the determinant of its Alexander matrix, which here gives
Δ_K_3_1#K_1_1 =2t-3t^2+3t^3-t^4
Finally, the resulting determinant is multiplied by t^-1 to normalize the Alexander polynomial so that it has a positive constant term. Thus 2-3t+3t^2-t^3 is the Alexander polynomial of the composition K_3_1# K_1_1.

We observe that the Alexander polynomial for the composition of K_3_1 and K_1_1 is 2-3t+3t^2-t^3. Similarly, the composition of K_3_1 and K_2_1 gives 3-t^-1-5t+5t^2-t^3.
* For K_4_1 composed with K_1_1 and K_2_1 we find 1-3t+4t^2-2t^3 and 2+t-8t^2+8t^3-4t^4+t^5, respectively.
* For K_5_1 composed with K_1_1 and K_2_1 we find 2-5t+2t^2-7t^3+5t^4-t^5 and 3-t^-1-9t+12t^2-14t^3+10t^4+2t^5.
* For K_5_2 composed with K_1_1 and K_2_1 we find 2-t^-1-5t+7t^2-6t^3+2t^4 and 1-t^-1-2t+7t^2-10t^3+6t^4-2t^5.
* For K_6_1, K_6_2 and K_6_3 composed with K_1_1 we find 4-10t+4t^2+7t^3-8t^4, 2-2t-3t^2+3t^3+2t^4-2t^5 and 1-3t+2t^2+2t^3-2t^4, respectively.

S.No   Composition of two knots   Alexander polynomial of the composition
1      K_3_1#K_1_1                2-3t+3t^2-t^3
2      K_3_1#K_2_1                3-t^-1-5t+5t^2-t^3
3      K_4_1#K_1_1                1-3t+4t^2-2t^3
4      K_4_1#K_2_1                2+t-8t^2+8t^3-4t^4+t^5
5      K_5_1#K_1_1                2-5t+2t^2-7t^3+5t^4-t^5
6      K_5_1#K_2_1                3-t^-1-9t+12t^2-14t^3+10t^4+2t^5
7      K_5_2#K_1_1                2-t^-1-5t+7t^2-6t^3+2t^4
8      K_5_2#K_2_1                1-t^-1-2t+7t^2-10t^3+6t^4-2t^5
9      K_6_1#K_1_1                4-10t+4t^2+7t^3-8t^4
10     K_6_2#K_1_1                2-2t-3t^2+3t^3+2t^4-2t^5
11     K_6_3#K_1_1                1-3t+2t^2+2t^3-2t^4

§ CONCLUSION

Knots have always been part of everyday life and have recently found mathematical uses; compositions of knots can, for example, be visualized in models of DNA replication. In this paper we have given a brief account of the composition of knots. The algebraic side of knot theory is important because it allows comparisons between knot compositions, and we have applied the Alexander polynomial to obtain distinct polynomials for the compositions K_3_1#K_1_1, K_3_1#K_2_1, and so on. In future work we will extend these computations to further compositions such as K_6_2#K_2_1 and K_7_1#K_1_1.

J.W. Alexander, Topological invariants of knots and links, Transactions of the American Mathematical Society 30 (1928), 275-306.
Colin Adams, An Elementary Introduction to the Theory of Knots, W.H. Freeman, New York, 1994.
E.E. David, Jr., Renewing U.S. mathematics: An agenda to begin the second century, Notices of the A.M.S. 35 (1988), 1119-1123.
Mari Ahlquist, On Knots and DNA, Department of Mathematics, Linköping University, LiTH-MAT-EX–2017/17–SE, December 2017.
K. Murasugi, Knot Theory and Its Applications, Birkhäuser, Boston, 1993.
Richard H. Crowell and Ralph H. Fox, Introduction to Knot Theory, Springer-Verlag, 1963.
D. Silver, Knot theory's odd origins, American Scientist 94 (2006), 158-165.
W.T. Thomson, Mathematical and Physical Papers, Vol. III, Cambridge University Press, 1890.
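As a cross-check of the determinant computation described in the Alexander polynomial section above, the following short script reproduces the quoted result for K_3_1#K_1_1. It is only an illustrative sketch (it assumes the Python library sympy and simply re-evaluates the matrix given in the text; it is not part of the original derivation).

# Sketch: reproduce the Alexander determinant for the composition K_3_1 # K_1_1
# from the crossing/arc matrix quoted in the text (requires sympy).
import sympy as sp

t = sp.symbols('t')

# Crossing/arc matrix M for K_3_1 # K_1_1 as given above.
M = sp.Matrix([
    [1 - t,  0,     -1,        t    ],
    [t,      1 - t,  0,       -1    ],
    [0,     -1,      t - t**2, 0    ],
    [-1,     t,      0,        1 - t],
])

# Alexander matrix: delete the last row and the last column.
A = M[:-1, :-1]

delta = sp.expand(A.det())           # 2*t - 3*t**2 + 3*t**3 - t**4
normalized = sp.expand(delta / t)    # 2 - 3*t + 3*t**2 - t**3

print(delta)
print(normalized)

The same few lines can be pointed at the crossing/arc matrices of the other compositions to reproduce the remaining entries of the table.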
http://arxiv.org/abs/2307.03098v2
20230706161418
Lepton-pair scattering with an off-shell and an on-shell photon at two loops in massless QED
[ "Simon Badger", "Jakub Kryś", "Ryan Moodie", "Simone Zoia" ]
hep-ph
[ "hep-ph", "hep-th" ]
Lepton-pair scattering with an off-shell and an on-shell photon at two loops in massless QED
Simon Badger, Jakub Kryś, Ryan Moodie, Simone Zoia
August 1, 2023
=========================================================

Acronyms used throughout:
DE: differential equation
HVP: hadronic vacuum polarisation
IBP: integration-by-parts
IR: infrared
ISP: irreducible scalar product
ISR: initial state radiation
MI: master integral
MPL: multiple polylogarithm
N3LO: next-to-next-to-next-to-leading order
NLO: next-to-leading order
NNLO: next-to-next-to-leading order
QCD: quantum chromodynamics
QED: quantum electrodynamics
RVV: real-double-virtual
SM: Standard Model
UV: ultraviolet
VV: double-virtual

§ INTRODUCTION

The MUonE experiment <cit.> will measure the hadronic running of the electromagnetic coupling α using low-energy elastic electron-muon scattering, eμ→ eμ. This will enable a new and precise determination of the HVP contribution a_μ^HVP <cit.> to the muon anomalous magnetic moment a_μ. This is required in light of the recent tensions between experimental <cit.>, SM data-driven <cit.>, and lattice QCD <cit.> results for a_μ. Increasing the precision of the theoretical predictions for eμ→ eμ scattering is a high priority for the planned MUonE experiment <cit.>. The recent completion of full NNLO QED corrections <cit.> indicates that N3LO corrections in differential distributions are required to meet MUonE's precision goal of 10 parts per million. Electron-line corrections, meaning corrections to the subprocess with the muon line stripped off (e→ e γ^*), are the dominant corrections <cit.>, and a collaborative project was started to perform their fixed-order calculation at N3LO <cit.>. With the triple-virtual corrections now available <cit.>, the main missing ingredient is the RVV matrix element (e→ e γγ^*) at two loops. While these contributions could be extracted from amplitudes in the literature <cit.>, our direct computation provides the massless RVV contribution in a complete and compact form.

Another application of the 0→ℓℓ̅γγ^* amplitudes is in electron-positron annihilation experiments <cit.>. They are required for initial-state corrections in predictions of the ratio of hadron-to-muon production in e^+e^- collisions, which is an important input for existing SM predictions of a^HVP_μ <cit.>. The two-loop amplitudes contribute to RVV corrections to e^+e^-→γ^* in direct scan measurements, while radiative return measurements concern corrections to e^+e^-→γγ^* <cit.>. In the latter configuration, the e^+e^- beams collide at a fixed centre-of-mass energy of a few GeV and the on-shell photon originates from ISR. The energy lost to the ISR photon is used to effectively scan over the energies of the decay of the off-shell photon. A differential cross section of, for example, γ^*→hadrons with respect to the centre-of-mass energy of the decay, dσ/ds, can be extracted from measurements of the differential cross section with respect to the energy of the ISR photon, dσ/dE_γ. State-of-the-art predictions for these measurements are currently at NLO <cit.>. We provide the two-loop e^+e^-→γγ^* amplitudes required for the VV corrections at NNLO, although the bottleneck remains in the hadronic decay.

Our amplitudes are calculated in the approximation of massless leptons. In the NNLO massive eμ→ eμ cross section calculation <cit.>, the authors obtain photonic corrections (those with no closed fermion loops) using a small-mass expansion <cit.> applied to the two-loop amplitudes with massless electrons for the VV corrections.
This approximation relies on the electron mass being much smaller than any other scale, which is valid in the bulk of phase space. Further splitting the photonic corrections, they take the subset of electron-line corrections and find that the relative difference to the true massive NNLO differential cross section is generally around 10^-3α^2, where α is the fine-structure constant, which is negligible compared to the 10^-5 precision goal. The approximation breaks down in soft and collinear regions, where they treat the amplitudes using IR factorisation <cit.>, and is not used for contributions including closed fermion loops <cit.>. Our amplitudes can be used analogously for the RVV corrections at N3LO. Our computation uses the modern technology developed for QCD amplitudes with many scales. The high-multiplicity amplitude frontier in massless QCD lies with two-loop five-particle processes, with leading-colour <cit.> and full-colour <cit.> results in a form ready for phenomenological application becoming available over the past few years. Recently, the first single-external-mass calculations are also appearing <cit.>. These computations have made extensive use of finite-field arithmetic to sidestep large intermediate expressions. This technology has had a considerable impact for solutions of systems of IBP identities <cit.> but also applies more widely to scattering amplitude computations <cit.>. Motivated by the improved algorithms, we choose to implement a complete finite-field based reduction for the 2→ 2 processes with an off-shell leg. Since the kinematics are relatively simple in comparison with other high-multiplicity configurations, this technology is not essential. It does, however, provide an opportunity to review the new techniques for readers who are not familiar with them. A key ingredient for computing the scattering amplitudes are analytic expressions for the required Feynman integrals. Complete analytic results up to two loops are already available in the literature <cit.>. Expansions of these integrals up to higher orders in the dimensional regularisation parameter ϵ have also been reconsidered recently <cit.>, in view of their usage for N3LO corrections to 2→ 2 processes in QCD <cit.>. The state of the art for integrals with this kinematic configuration has reached three loops <cit.>. We revisit the computation of the one- and two-loop integrals following the approach of Gehrmann:2018yef,Chicherin:2020oor,Badger:2021nhg,Chicherin:2021dyp,Abreu:2023rco based on the construction of a basis of independent special functions, which gives a unique and uniform representation of all the required Feynman integrals up to transcendental weight four. This enables a more efficient computation of the amplitudes using the modern workflow based on finite-field arithmetic, and leads to more compact expressions. We give explicit expressions for the basis functions in terms of MPL which can be evaluated in an efficient and stable way throughout the physical phase space. We compute all crossings of all massless one- and two-loop four-particle Feynman integrals with an external off-shell leg, so that our results for the integrals may be of use for any scattering process with these kinematics. Our paper is organised as follows. In <ref>, we describe our decomposition of the helicity amplitudes and detail how we express the off-shell currents. In <ref>, we discuss our computation of analytic amplitudes by numerical evaluations over finite fields. 
In <ref>, we present the computation of the Feynman integrals in terms of a basis of special functions. We draw our conclusions in <ref>. We provide useful technical details in appendices. We define the relevant families of Feynman integrals in <ref>. In <ref>, we discuss in detail how we handle permutations of the integral families in the IBP reduction. In <ref>, we describe our rational parametrisation of the kinematics. <Ref> is devoted to the UV renormalisation and IR factorisation which determine the pole structure of the amplitudes. In <ref> we discuss the analytic continuation of the special functions to the physical kinematic regions. § STRUCTURE OF THE AMPLITUDE We calculate the one- and two-loop QED corrections to the process 0 →ℓ(p_1,h_1) + ℓ̅(p_2,h_2) + γ(p_3,h_3) + γ^*(p_4) , which we call 0→ℓℓ̅γγ^* for short. Here, ℓ denotes an on-shell massless lepton and γ (γ^*) an on-shell (off-shell) photon, while h_i and p_i are the helicity and momentum of the ith particle. We take the external momenta p_i to be all outgoing. They satisfy the following momentum-conservation and on-shell conditions: ∑_i=1^4 p_i^μ = 0 , p_i^2 = 0 ∀ i=1,2,3 . The single-off-shell four-particle phase space is described by three independent scalar invariants, which we choose as s⃗{s_12, s_23, s_4} , where s_i… j (p_i+…+p_j)^2. We use dimensional regularisation in the 't Hooft-Veltman scheme <cit.>, with D=4-2 spacetime dimensions (where is the dimensional regulator) and four-dimensional external momenta. Because of the off-shell photon in the process, the helicity amplitudes ^μ(1_ℓ,2_ℓ̅,3_γ,4_γ^*) are actually off-shell currents carrying a free Lorentz index. We consider the perturbative QED expansion of the helicity amplitudes, ^μ(1_ℓ,2_ℓ̅,3_γ,4_γ^*) = g_e^2 ∑_L≥ 0( n_α/4π)^L L(1_ℓ,2_ℓ̅,3_γ,4_γ^*) , with prefactor n_=ı (4π)^e^-γ_E, electromagnetic coupling g_e, and α=g_e^2/(4π). We truncate the expansion at L=2 loops. We set the renormalisation scale μ_R to one throughout the computation and restore the dependence on it in the final analytic result by dimensional analysis. For the bare amplitudes we have that L(μ_R) = μ_R^2 LL(μ_R=1) . There are two independent helicity configurations (h_1,h_2,h_3), which we take as {-+- , -++} . We derive the analytic expressions for these helicity amplitudes. We obtain the remaining helicity configurations, {+-+ , +–}, through parity transformation (see appendix C of Badger:2023mgf). We decompose the loop-level helicity amplitudes L into gauge-invariant subamplitudes Lij, where the subscript i counts the number of closed massless fermion loops and j the number of external photons attached to closed fermion loops. The non-zero contributions are 1 = 100 + 111 , 2 = 200 + ( 210 + 211 + 212) + ^2 221 , where denotes the number of charged lepton flavours running in the loops. Representative Feynman diagrams contributing to these subamplitudes are illustrated in <ref>. Amplitudes with a closed fermion loop attached to an odd number of photons vanish by Furry's theorem. We decompose the amplitude and subamplitude currents as L = ∑_k=1^4 a_k^(L) q_k^μ , Lij = ∑_k=1^4 a_i,j;k^(L) q_k^μ , using the following basis written with the spinor-helicity formalism: q_k^μ = p_k^μ ∀ k=1,2,3 , q_4^μ = ⟨ 2|p_3p_1σ^μ|2 ] - ⟨ 1|p_3p_2σ^μ|1 ]/2 s_12 . Readers not familiar with the spinor-helicity formalism may like to consult one of the many good reviews on the subject <cit.>. Note that q_4 is orthogonal to the momenta p_i by construction; one can in fact show that q_4^μ∝ε^μνρσq_1_νq_2_ρq_3_σ. 
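The orthogonality statement for q_4 can be illustrated numerically. The sketch below (plain numpy, with randomly generated massless momenta) builds a vector proportional to ε^μνρσ q_1ν q_2ρ q_3σ and checks that its Minkowski product with p_1, p_2 and p_3 vanishes; it relies only on the proportionality quoted above, so the normalisation and phase of the actual spinor-string definition of q_4 are not reproduced.

# Sketch: q4 ~ eps^{mu nu rho sigma} q1_nu q2_rho q3_sigma is orthogonal to p1, p2, p3.
# Momenta are random and massless; purely illustrative (requires numpy).
import itertools
import numpy as np

rng = np.random.default_rng(7)
g = np.diag([1.0, -1.0, -1.0, -1.0])        # mostly-minus Minkowski metric

def random_massless():
    n = rng.normal(size=3)
    return np.array([np.linalg.norm(n), *n])  # p^0 = |p|, so p^2 = 0

def levi_civita():
    eps = np.zeros((4, 4, 4, 4))
    for perm in itertools.permutations(range(4)):
        inversions = sum(
            1 for i in range(4) for j in range(i + 1, 4) if perm[i] > perm[j]
        )
        eps[perm] = (-1) ** inversions
    return eps

p1, p2, p3 = (random_massless() for _ in range(3))

# Contract the Levi-Civita symbol with the lowered momenta.
q4 = np.einsum('abcd,b,c,d->a', levi_civita(), g @ p1, g @ p2, g @ p3)

minkowski = lambda a, b: a @ g @ b
print([minkowski(q4, p) for p in (p1, p2, p3)])   # all ~ 0 up to rounding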
The subamplitude coefficients a_i,j;k^(L) can be related to the amplitude ones a_k^(L) through <ref>. The scattering amplitudes ℳ^(L) for fully on-shell processes (for instance, for 0 → e^-e^+ γμ^-μ^+) are obtained by contracting the amplitude currents L (for 0 → e^-e^+ γγ^*) with a suitable decay current 𝒱_μ (in this example, γ^*→μ^-μ^+), as ℳ^(L) 𝒜^(L)·𝒱 = ∑_k=1^4 a_k^(L) ( q_k·𝒱) . In this manner, the on-shell amplitudes ℳ^(L) are given by the scalar product between the vector of coefficients (a_1^(L), …, a_4^(L)), and that of decay-vector contractions (q_1·𝒱, …, q_4·𝒱 ). The coefficients a^(L)_k depend on the helicities of the three on-shell particles in <ref>, while the decay vector 𝒱_μ depends on the helicities of the particles the off-shell photon decays to. The helicity-summed interference between the L_1-loop and the L_2-loop matrix elements is then given by ℳ^(L_1,L_2) = 1/4∑_h⃗ℳ^(L_1)_h⃗^* ℳ^(L_2)_h⃗ , where the subscripts h⃗ indicates the helicities of all on-shell particles — that is, including the decay products of the off-shell photon — and the overall constant factor averages over the helicities of the incoming particles. The output of the computation described in <ref> is the set of four projections 𝒜^(L)_i,j· q_k for each helicity configuration listed in <ref>. From these, we determine the subamplitude coefficients a_i,j;k^(L) by inverting <ref>, as a_i,j ; k^(L) = ∑_m=1^4 (G^-1)_km(𝒜^(L)_i,j· q_m) , where G is the Gram matrix of the vectors q_i, that is, the matrix of entries G_ij q_i · q_j for i,j=1,…,4. At loop level, we write the subamplitude coefficients as a_i,j;k^(L) = ∑_w=-2L^4-2L∑_r c_r,w mon_r(F) ^w , where mon_r(F) are monomials of special functions F (see <ref>), and the coefficients c_r,w are rational functions of the kinematics. We drop the dependence on i, j, k, and L on the right-hand side of <ref> for compactness. We truncate the Laurent expansion around =0 to the orders required for computing NNLO predictions. We express the coefficients c_r,w as ℚ-linear combinations of a smaller set of linearly-independent coefficients (see <ref>). The analytic expressions of the latter are given explicitly in terms of momentum twistor variables (see <ref>). We simplify these expressions through a multivariate partial fraction decomposition using  <cit.>, and by collecting the common factors. In the ancillary files <cit.>, the directory amplitudes/ contains files describing the bare helicity subamplitude currents Lij by their coefficients a_i,j;k^(L) in the form of <ref>. The script current.m is a reference implementation of the numerical evaluation of the bare amplitude coefficients a_k^(L) in <ref>, including summation of subamplitudes in <ref>, treatment of dependent helicities, and renormalisation scale restoration in <ref>. The script evaluation.wl demonstrates the construction of the five-particle on-shell amplitudes in <ref> for the process 0→ e^- e^+ γμ^- μ^+, and their helicity-summation to obtain the squared matrix elements in <ref>. The results of the script are checked against a reference point included in reference_point.json. We perform the following checks of our amplitudes. Ward identity We verify the gauge invariance of the subamplitudes Lij by checking that they vanish on replacing the on-shell photon's polarisation vector with its momentum. One-loop crosscheck We successfully crosscheck our one-loop =0 helicity-summed matrix element contracted with the decay γ^*→μ^-μ^+ against the QED NLO electron-line corrections for eμ→ eμγ obtained with  <cit.>. 
Finite remainder We verify that the -poles of the bare amplitudes have the structure predicted by UV renormalisation and IR factorisation <cit.>. We then subtract the expected poles and define finite remainders at one and two loops as ℱ^(1)^μ = [ 1 - 3/2β_0/0] - Z^(1)0 , ℱ^(2)^μ = [ 2 - 5/2β_0/1 - (-15/8β_0^2/^2+3/4β_1/) 0] - Z^(2)0 - Z^(1)ℱ^(1)^μ , where the square brackets separate renormalisation of UV poles from subtraction of IR poles. We present the derivation of these formulae in <ref>. § SETUP OF THE CALCULATION In this section, we outline the workflow we used to calculate our amplitudes. Firstly, we generate all Feynman diagrams contributing to <ref> using  <cit.>. Each diagram is then replaced with the corresponding Feynman rules for vertices, propagators, and external states, leading to a collection of D-dimensional Feynman integrals. Next, we filter the integrals according to <ref> using a collection of and scripts <cit.>. Within each subamplitude Lij, we then collect the integrals according to their topology, by which we mean a unique set of denominators. For example, the diagrams in <ref> belong to different topologies, but those in <ref> belong to the same topology (under the assumption of massless lepton propagators). At this point, the subamplitudes are sums of Feynman integrals over distinct integral topologies, with the numerators given by linear combinations of monomials that depend on the loop as well as the external momenta. To work with the projected helicity subamplitudes ^(L)_i,j· q_k, we specify the polarisations of external particles according to <ref>, as well as the projector q_k^μ of the off-shell photon from <ref>. It is natural to express helicity-dependent objects using the spinor-helicity formalism. Then, the monomials of loop momenta contain the following scalar products and spinor strings: { k_i · k_j, k_i · p_j , ⟨ij|,⟩ ij, ⟨ i |k_i |j], ⟨ i |p_4 |j], ⟨i|k_i p_4|j⟩, [i| k_i p_4 |j] } . Their coefficients, on the other hand, are composed of the same type of objects, but do not contain any dependence on loop momenta k_i. We express these coefficients using the rational parametrisation of the kinematics discussed in <ref>. This marks the start of our finite-field sampling procedure <cit.>. The goal of this approach is to sidestep the algebraic complexity which typically plagues the intermediate stages of symbolic computations by evaluating numerically all rational coefficients. Using integers modulo some large prime number — which constitute a finite field — for the numerical evaluation allows us to avoid the loss of accuracy inherent to floating-point numbers, as well as the expensive arbitrary-precision arithmetic required by rational numbers. Manipulations needed to further process the rational coefficients are a completely separate problem from the calculation of the integrals or special functions that these coefficients multiply. In fact, they can be implemented as a series of rational transformations over finite fields. We stress that this is the methodology we follow at each step of the computation described below. In particular, we use the package  <cit.>, which is conveniently interfaced to . The analytic form of the coefficients is not known at any intermediate step. It is reconstructed from the finite-field samples only at the very end of the workflow. Firstly, we note that not all integral topologies are independent: some of them can be written as subtopologies of others. For this reason, we define the set of maximal topologies, i.e. 
topologies with the maximum number of propagators allowed for L-loop, n-particle diagrams. In figure <ref>, we present the maximal topologies for the process under consideration in an arbitrary ordering of the external momenta (we give their explicit definitions in <ref>). Several orderings of the external momenta are relevant for the amplitudes, and we treat them as distinct families. Next, we map all topologies present so far onto one of these maximal topologies. The loop momenta dependent objects of <ref> are then expressed through the nine inverse propagators and ISP associated with the chosen maximal topology. In this way, each subamplitude is now a sum of integrals compatible with IBP reduction <cit.>, while their coefficients depend purely on external kinematics. We generate the required IBP relations using  <cit.>. The resulting IBP system is then solved using the Laporta algorithm <cit.> with 's linear solver to yield the reduction of all the integrals present within our maximal topologies onto a much smaller subset of MI. We choose the MI such that they satisfy DE in canonical form <cit.> (see <ref>). We stress that the IBP reduction is also done numerically over finite fields, since the coefficients of the IBP relations are rational functions of external kinematics and the dimensional regulator . This is an important simplification, since analytic IBP reduction often proves to be the bottleneck of amplitude computations. For many amplitude applications, multiple permutations of the ordered topologies can appear. We outline a strategy to optimise the reduction in such situations in appendix <ref>. At this point, each projected helicity subamplitude ^(L)_i,j· q_k is written as a linear combination of MI multiplied by rational coefficients of and the kinematic variables. We now write the MI in terms of a basis of special functions up to the required order in (see <ref>). Finally, we Laurent expand the amplitude around =0, the deepest pole being 1/^2 L at L loops. The only task left is to reconstruct the rational coefficients of the special-function monomials from their samples over finite fields. In general, this might be a daunting challenge and its complexity stems from two separate factors. The workflow described so far is a series of rational operations chained together within a so-called dataflow graph <cit.>. As such, we essentially have a black-box algorithm which takes numerical values of the kinematic variables as input, and returns the corresponding numerical values of the rational coefficients of the special-function monomials. The first factor is that several sample points are necessary to infer the analytic expression of these coefficients from their values in the finite fields. The required number is correlated with the polynomial degrees of the rational functions viewed as ratios of polynomials: the higher the degree, the more sample points are required. The second factor affecting the reconstruction complexity is the time it takes to obtain the values of the coefficients at each sample point. The more complicated the dataflow graph is, i.e. the more operations are chained together and the more difficult each operation is, the longer it will take to run the black-box algorithm. The most expensive operation in this regard is the evaluation of the solution to the IBP system. The total reconstruction time can thus be estimated as: reconstruction time≈ (number of sample points) × (evaluation time per point) . 
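To give a flavour of the finite-field techniques underlying this workflow, the snippet below shows a minimal univariate building block: reconstructing a rational number from its image modulo a prime via the extended Euclidean algorithm (Wang's algorithm). This is only a sketch in plain Python; the packages used in our computation implement far more sophisticated multivariate generalisations of this idea.

# Sketch: reconstruct a rational a/b from its image u = a * b^(-1) mod p
# using the extended Euclidean algorithm (plain Python, no dependencies).
from math import gcd, isqrt

def rational_reconstruct(u, p):
    """Return (a, b) with a/b == u (mod p) and |a|, b <= sqrt(p/2), or None."""
    bound = isqrt(p // 2)
    r0, r1 = p, u % p
    t0, t1 = 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 == 0 or abs(t1) > bound or gcd(r1, abs(t1)) != 1:
        return None          # reconstruction failed: repeat with more/larger primes
    return (r1, t1) if t1 > 0 else (-r1, -t1)

p = 2147483647               # a large prime
a, b = -22, 7                # the "unknown" rational coefficient
u = (a * pow(b, -1, p)) % p  # its image in the finite field GF(p)
print(rational_reconstruct(u, p))   # (-22, 7)

Evaluating the rational coefficients at integer phase-space points modulo such primes, and undoing the map only at the very end, is what keeps the intermediate algebra under control.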
We emphasise that the sample evaluations can be run in parallel. For a detailed discussion of various strategies to improve the reconstruction time, see section 4 of Badger:2021imn and section 3.5 of Badger:2022ncb. Here, we give a brief overview of the tools that proved sufficient for this work. First, we look for ℚ-linear relations among the rational coefficients of each helicity subamplitude. This typically requires few sample points with respect to the full reconstruction. We then solve these linear relations to express all coefficients in terms of a minimal subset of independent ones. Only the latter need to be reconstructed. Choosing them so that they have the lowest degrees often leads to a decrease in the complexity of the reconstruction. The second strategy we employ is to match the rational coefficients with factorised ansätze informed by the singularity structure of Feynman integrals. The singularities of Feynman integrals can in fact be read off from the DE they satisfy. For each coefficient we then write an ansatz made of the following factors: {⟨ij|,⟩ ij , ⟨i|p_4|j] , s_ij - s_k4 , s_i4 - s_4 , s_4 } , for all i, j, k = 1, 2, 3 such that i≠ j ≠ k. This list includes denominator factors of the DE satisfied by the MI (listed by <ref>), as well as spinor structures aimed at capturing the phase information of helicity amplitudes. We then determine the exponents of the ansätze by comparing them to the coefficients reconstructed on a univariate slice of the kinematic variables <cit.>, which are very cheap to obtain. We find that with this ansatz it is possible to determine all denominator factors — which indeed are linked to the singularity structure of the amplitude — and sometimes also some numerator factors. As a result, the undetermined functions yet to be reconstructed are of lower degree and require fewer sample points. We reconstruct the analytic form of the remaining rational functions using 's built-in multivariate functional reconstruction algorithm. Finally, we note that, for more computationally demanding processes, further ansatz-based techniques — for instance based on partial fraction decompositions or informed by the singularity structure of the amplitudes — may be used to optimise the functional reconstruction; see, for example, Badger:2021imn, Badger:2021ega, Badger:2022ncb,DeLaurentis:2022otd,Badger:2023mgf,Abreu:2023bdp,Liu:2023cgs. § COMPUTATION OF THE MI The MI for the relevant integral families were computed analytically in gehrmann:2000zt,gehrmann:2001ck,Gehrmann:2023etk (see also Gehrmann:2002zr for a thorough discussion of the analytic continuation). We revisit this computation to obtain expressions for the MI which are better suited for the amplitude-computation workflow discussed in <ref>. To this end, we compute the MI for all permutations of the external legs in terms of a basis of special functions, following the approach of Gehrmann:2018yef,Chicherin:2020oor,Badger:2021nhg,Chicherin:2021dyp,Abreu:2023rco. In other words, we express all the Feynman integrals contributing to the amplitudes in terms of a set of special functions which are (algebraically) independent. Having such a unified and unique representation for all permutations of the integral families allows for simplifications and cancellations among different permutations of the Feynman integrals. This leads to a simpler expression of the amplitudes and to a more efficient functional reconstruction in the finite-field setup presented in <ref>. 
We emphasise that our results cover all MI required for computing any two-loop four-particle amplitude with a single external off-shell leg, and not just the ones required for the amplitudes presented in this work. We discuss the construction of the basis in <ref>, and how we express it in terms of MPL in <ref>. Finally, we give some details about the numerical evaluation and the checks we performed in <ref>. §.§ Construction of the special function basis We follow the strategy presented in Abreu:2023rco. The starting point are the DE satisfied by the MI for each family <cit.>. Let τ label an integral family, e.g. the double-box in <ref> for an arbitrary permutation of the external massless momenta. We choose a basis of pure MI g⃗_τ, that is, a basis which satisfies DE in the canonical form <cit.> g⃗_τ(s⃗; ) = ( ∑_i=1^7A^(τ)_i log W_i(s⃗) ) ·g⃗_τ(s⃗; ) . Here, is the total differential, f s_12 ∂_s_12 f + s_23 ∂_s_23 f + s_4 ∂_s_4 f, A^(τ)_i are constant n_τ× n_τ matrices, with n_τ the number of MI of the family τ, and 4 W_1 = s_12 , W_2 = s_23 , W_3 = s_12 + s_23 , W_4 = s_12 - s_4 , W_5 = s_23 - s_4 , W_6 = s_12 + s_23 - s_4 , W_7 = s_4 are called letters. We find such canonical bases by a mixture of methods: the package  <cit.>, the analysis of results in the literature for related integral families (massless two-loop five-point planar integrals <cit.> and two-loop four-point integrals with two massive external legs <cit.>), and a set of heuristic rules (see e.g. Dlapa:2022nct). We normalise the MI such that their expansion around =0 starts from order ^0, g⃗_τ(s⃗ ; ) = ∑_w ≥ 0^w g⃗^(w)_τ(s⃗) . For the purpose of computing two-loop scattering amplitudes up to their finite part (i.e., up to order ^0), it suffices to restrict our attention to w ≤ 4. Since the MI satisfy canonical DE eq:canonicalDEs, the -order of the MI coefficients g⃗^(w)_τ(s⃗) equals their transcendental weight <cit.>. We compute the derivatives of the MI using  <cit.> and  <cit.>. We do so only for the integral families with the ordering of the external momenta shown in <ref>, and obtain those for all other orderings of the external massless legs by permutation. We provide the definition of the pure MI and the corresponding DE for all one- and two-loop four-point one-mass families in <ref> in the folder pure_mi_bases/ of our ancillary files <cit.>. In order to solve the DE eq:canonicalDEs we need boundary values, i.e., values of all MI up to order ^4 at a phase-space point. Due to the simplicity — by today's standards — of the integrals under consideration, an arbitrary (non-singular) phase-space point would do. Nonetheless, we make a more refined choice following some of the criteria of Chicherin:2020oor,Chicherin:2021dyp. We choose the following point in the s_12 channel (see <ref>), s⃗_0 = ( 2, -1/2, 1 ) , motivated by two principles: that it is symmetric under the permutations which preserve the s_12 channel (i.e., swapping p_1 ↔ p_2), and that it contains few distinct prime factors. The first condition reduces the number of permuted integral families we need to evaluate in order to obtain the boundary values. The second condition reduces the number of independent transcendental constants appearing in the boundary values, which simplifies the construction of the basis of special functions. The order-^0 boundary values g⃗_τ^(0) are rational constants. 
We obtain them up to their overall normalisation by solving the `first-entry conditions' <cit.>, i.e., by requiring the absence of unphysical branch cuts in the solutions. We fix the overall normalisation and the higher-order boundary values g⃗_τ^(w)(s⃗_0) (for 1≤ w≤ 4) by evaluating all MI with  <cit.> (interfaced to  <cit.> and  <cit.>) at s⃗_0 with at least 60-digit precision. We anticipate from <ref> that, although we use floating-point boundary values, our results in terms of MPL are fully analytic. The canonical DE eq:canonicalDEs and the boundary values for all integral families are the input for the algorithm of Abreu:2023rco for constructing a basis of special functions. We refer to the original work for a thorough discussion. Out of all MI coefficients up to transcendental weight 4, the algorithm selects a subset, denoted F {F^(w)_i(s⃗)}, which satisfy two constraints. First, they are algebraically independent, that is, there are no polynomial functional relations among them. Second, the MI coefficients of all families (including all permutations of the external massless legs) up to transcendental weight 4 are expressed as polynomials in the {F^(w)_i(s⃗)} and the zeta values ζ_2 = π^2/6 and ζ_3. For example, an arbitrary weight-2 MI coefficient g^(2)(s⃗) has the general form g^(2)(s⃗) = ∑_i=1^4 c_i F_i^(2)(s⃗) + ∑_i ≤ j =1^3 d_ij F_i^(1)(s⃗) F_j^(1)(s⃗) + e ζ_2 , with c_i, d_ij, e ∈ℚ. This special subset of MI coefficients, {F^(w)_i(s⃗)}, constitutes our special function basis. We give the number of functions in the basis in <ref>. Note that there is freedom in the choice of which MI coefficients make up the basis. We make use of this freedom to choose as many basis-elements as possible from the one-loop family, then complement them with coefficients from the planar two-loop families, and finally complete them with coefficients from the non-planar two-loop families. In this way no two-loop MI coefficients appear in the one-loop amplitudes, and no non-planar two-loop MI coefficients appear in those amplitudes where only planar diagrams contribute (as is often the case in the leading colour approximation of QCD). The folder mi2func/ of our ancillary files <cit.> contains the expression of all MI coefficients (for all one- and two-loop integral families in all permutations of the external massless legs) up to weight 4 in terms of our special function basis. This result enables the efficient amplitude-computation strategy based on finite-field arithmetic discussed in <ref>. However, at this stage the basis functions {F^(w)_i} are expressed in terms of Chen iterated integrals <cit.> and numerical boundary values g⃗^(w)(s⃗_0). This representation is excellent for investigating the analytic properties of Feynman integrals and amplitudes, but it is not readily suitable for an efficient numerical evaluation. In the next section we discuss how we construct a representation of the function basis in terms of MPL and zeta values, which is well suited for an efficient and stable numerical evaluation. §.§ Expression in terms of MPL In this section we construct a representation of our function basis in terms of MPL. The weight-n MPL of indices {a_1,…,a_n} and argument x is defined recursively as G(a_1,a_2,…,a_n; x) ∫_0^x t/t - a_1 G(a_2,…,a_n; t) , a_n ≠ 0 , for a_n ≠ 0, starting with G(;x) = 1. Trailing zeros, i.e., zeros in the right-most indices, are allowed through the definition G(0,…,0_k; x) 1/k!log^k(x) . We refer to Vollinga:2004sn for a thorough discussion. 
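The recursive definition above can also be evaluated numerically as it stands. The sketch below (using the mpmath library; slow but transparent, and valid only when no index lies strictly inside the integration path) checks two standard special cases, G(a;x) = log(1-x/a) and G(0,1;x) = -Li_2(x); the dedicated libraries referred to later in the text are of course used for the actual evaluations.

# Sketch: evaluate G(a_1,...,a_n; x) by recursive numerical quadrature (requires mpmath).
# Valid when no index lies strictly between 0 and x (otherwise a contour prescription is needed).
import mpmath as mp

def G(indices, x):
    if not indices:
        return mp.mpf(1)
    if all(a == 0 for a in indices):                 # trailing-zero definition
        return mp.log(x) ** len(indices) / mp.factorial(len(indices))
    a1, rest = indices[0], indices[1:]
    return mp.quad(lambda t: G(rest, t) / (t - a1), [0, x])

# Weight 1: G(a; x) = log(1 - x/a)
print(G((3,), mp.mpf(1)), mp.log(1 - mp.mpf(1) / 3))

# Weight 2: G(0, 1; x) = -Li_2(x)
print(G((0, 1), mp.mpf('0.5')), -mp.polylog(2, mp.mpf('0.5')))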
Since the letters in <ref> are rational and linear in all variables, we can solve the canonical DE in <ref> algorithmically in terms of MPL. Order by order in , the solution is given by g⃗^(w)_τ(s⃗) = ∑_i=1^7 A_i^(τ)·∫_γlog(W_i(s⃗=γ) ) g⃗^(w-1)_τ(s⃗=γ) + b⃗^(w)_τ , starting from the constant weight-0 boundary values g⃗^(0)_τ determined in the previous subsection. Here, γ is a path connecting an arbitrary base-point s⃗_base to the end-point s⃗. The weight-w constants b⃗^(w)_τ are given by the values of the integrals at the base-point, b⃗^(w)_τ = g⃗^(w)_τ(s⃗_base). For s⃗_base we may use the boundary point s⃗_0 in <ref>, so that the constants b⃗^(w)_τ coincide with the boundary values determined numerically in the previous section. We follow a different approach, which allows us to trade all numerical constants in the expressions for zeta values. We find it convenient to change variables from (s_12,s_23,s_4) to (z_1,z_2,s_4), with z_1 = s_12/s_4 , z_2 = s_23/s_4 . This way, there is only one dimensionful variable, s_4, the dependence on which is fixed as an overall factor by dimensional analysis. We then integrate the canonical DE as in <ref> along the following piece-wise path in the (z_1,z_2,s_4) space: (0,0,0) γ_1⟶ (z_1, 0, 0) γ_2⟶ (z_1, z_2, 0) γ_3⟶ (z_1, z_2, s_4) . Since the Feynman integrals are divergent at the chosen base-point, the latter is understood in a regularised sense (we refer to section 4 of Abreu:2022mfk for a thorough discussion). Choosing (0,0,0) as base-point has the important benefit of removing spurious transcendental numbers that would pollute the solution were we to choose a base-point where the integrals are finite. As we will see below, only zeta values appear. Roughly speaking, we define regularised, finite values b⃗^(w)_τReg g⃗^(w)_τ(s⃗_base) by introducing a regulator and formally setting to 0 the (divergent) logarithms of the regulator. Since the integrals are finite at a generic end-point s⃗, the divergences at the base-point must cancel out with divergences arising in the integration. We can thus drop all these divergences. Provided that we do it consistently between the integration and the base-point values b⃗^(w)_τ, this leads to a finite and unique result. In practice, we fix the finite base-point values b⃗^(w)_τ by matching the solution g⃗^(w)_τ(s⃗) evaluated at the boundary point s⃗_0 against the boundary values discussed in the previous subsection. We therefore keep the b⃗^(w)_τ as symbols and integrate the canonical DE as in <ref> along the path in <ref> up to weight 4. We parameterise each piece of the path in <ref> linearly. For instance, γ_2(t) = ( z_1, t , 0 ), with t ∈ [0,z_2]. * The γ_1 integration leads to MPL with indices in {0,1} and argument z_1. * The γ_2 integration leads to MPL with indices in {0,1, 1-z_1, -z_1} and argument z_2. * The γ_3 integration leads to powers of log(-s_4), fixed by dimensional analysis. Once we have obtained expressions for all MI in terms of MPL and symbolic constants b⃗^(w)_τ, we equate them to the numerical boundary values at s⃗_0, and solve for the b⃗^(w)_τ. We use  <cit.> to evaluate the MPL numerically. Finally, we use the algorithm <cit.> to express the ensuing values of b⃗^(w)_τ in terms of ζ_2 and ζ_3. As a result, we obtain a fully analytic representation of all MI — and thus of our special function basis {F^(w)_i} — in terms of MPL and zeta values, up to weight 4. Contrary to the functions in the basis {F^(w)_i}, the MPL in their representation satisfy functional relations. 
We make use of this freedom to optimise our expressions in view of their numerical evaluation by reducing the number of distinct MPL that need to be evaluated. First, we use the shuffle algebra of MPL to push all trailing zeros into logarithms through <ref> <cit.>. Next, we employ the scaling relation G(a_1, …, a_n; x) = G(a_1/x, …, a_n/x; 1) , which holds for x, a_n ≠ 0. As a result, all MPL have argument 1 and indices l_0 = 0 , l_1 = s_4/s_12 , l_2 = s_4/s_23 , l_3 = s_4-s_12/s_23 , l_4 = - s_12/s_23 . Finally, we decompose the MPL to Lyndon words <cit.> using  <cit.>; we refer to the latter work for a thorough explanation, and give here only a simple example. This procedure requires that we choose a symbolic ordering of the MPL indices. We choose l_0 ≺ l_1 ≺ l_2 ≺ l_3 ≺ l_4, meaning that l_1 is greater than l_0, and so on. Consider the MPL G(l_1, l_0; 1), whose indices are not sorted according to the ordering above, since l_1 ≻ l_0. We can use the shuffle algebra of MPL to rewrite it in terms of MPL whose indices are sorted according to the chosen ordering, as G(l_1, l_0; 1) = G(l_0;1) G(l_1;1) - G(l_0, l_1; 1) . Doing this consistently throughout all expressions reduces the number of higher-weight MPL in favour of products of lower-weight ones, which are cheaper to evaluate numerically. To maximise the impact in this sense, we tested all possible orderings of the indices and selected the one — given above — which minimises the number of weight-4 MPL. The resulting representation of the function basis contains 4 weight-1, 6 weight-2, 19 weight-3, and 25 weight-4 MPL, as well as 3 logarithms: log(s_12/s_4) , log(s_23/s_4) , log(-s_4) . We write the latter in terms of logarithms rather than MPL as they play an important role in the factorisation of the IR divergences in the scattering amplitudes (see <ref> for the IR structure of the amplitudes we compute here). We stress that log(-s_4) is the only function of a dimensionful argument in our representation of the function basis. We provide in the folder mi2func/ of our ancillary files <cit.> the expression of the basis functions {F^(w)_i} in terms of MPL, logarithms, ζ_2 and ζ_3. It is important to stress that the MPL are multi-valued functions. For unit argument, there is a pole on the integration contour whenever one of the indices lies between 0 and 1. In this case the contour must be deformed in the complex plane, either above or below the pole, leading to different branches. Our MPL are thus well-defined only in the kinematic region where all MPL indices in <ref> are either less than 0 or greater than 1, and s_4 < 0 for the argument of all logarithms in <ref> to be positive. We discuss how to analytically continue the MPL and the logarithms in <ref> to the kinematic regions of interest in <ref>. §.§ Performance and validation We validated our results for the MI of all families by crosschecking them against values obtained with  <cit.> at several random points in all the physical kinematic regions discussed in <ref>. Furthermore, we find agreement with the results of Gehrmann:2023etk. We employ  <cit.> to evaluate the MPL. Our results allow for an efficient and stable evaluation of the MI, and are thus ready for immediate deployment in phenomenology. Indeed, the amplitudes we computed in this work have already been implemented in  <cit.> to provide the RVV electron-line corrections e μ→ e μ scattering. 
The evaluation is efficient, running at ≈ 130 events per second in the bulk of the phase space <cit.> using  <cit.> for the evaluation of the MPL. § CONCLUSIONS In this article, we calculated analytically the two-loop QED helicity amplitudes for the process 0→ℓℓ̅γγ^* in terms of a basis of MPL that are suitable for fast and stable numerical evaluation. We employed modern finite-field evaluation techniques to reconstruct the amplitudes directly in terms of the special function basis, sidestepping the symbolic computation in all intermediate stages. As a by-product we have recomputed all two-loop master integrals for four-point functions with an off-shell leg up to transcendental weight four, and provide all the necessary ingredients needed to use them in amplitude computations with the same kinematics. We hope these new results will now open the path to N3LO predictions that can be used for the future MUonE experiment. We thank Yannick Ulrich for providing the one-loop crosschecks, correspondence on the implementation of these amplitudes, and other useful discussions. We further thank Heribertus Bayu Hartanto and Tiziano Peraro for collaboration on the codebase. SZ wishes to thank Dmitry Chicherin and Vasily Sotnikov for many useful discussions. This project received funding from the European Union's Horizon 2020 research and innovation programme High precision multi-jet dynamics at the LHC (grant agreement number 772099). § DEFINITION OF THE FEYNMAN INTEGRAL FAMILIES For each two-loop integral family τ corresponding to one of the maximal topologies shown in <ref>, the Feynman integrals have the form j^τ(a_1, …, a_9) = e^2 γ_E∫d^4-2 k_1/iπ^2-d^4-2 k_2/iπ^2-1/D_τ,1^a_1… D_τ,9^a_9 . The sets {D_τ,1, …, D_τ,9} contain seven (inverse) propagators and two ISP (a_8, a_9 ≤ 0). For the maximal topologies under consideration, they are given by:[We use a naming convention analogous to that of Abreu:2020jxa.] * penta-triangle, mzz configuration: {k_1^2,(k_1+p_1+p_2+p_3)^2,(k_1+p_2+p_3)^2,(k_1+p_3)^2,k_2^2,(k_2-p_3)^2, (k_1+k_2)^2,(k_2-p_1-p_2-p_3)^2,(k_2-p_2-p_3)^2} , * penta-triangle, zmz configuration: {k_1^2,(k_1-p_1)^2,(k_1+p_2+p_3)^2,(k_1+p_3)^2,k_2^2,(k_2-p_3)^2,(k_1+k_2)^2, (k_2+p_1)^2,(k_2-p_2-p_3)^2 } , * penta-triangle, zzz configuration: {k_1^2,(k_1-p_1)^2,(k_1-p_1-p_2)^2,(k_1-p_1-p_2-p_3)^2,k_2^2,(k_2+p_1+p_2+p_3)^2, (k_1+k_2)^2,(k_2+p_1)^2,(k_2+p_1+p_2)^2} , * planar double-box: {k_1^2,(k_1-p_1)^2,(k_1-p_1-p_2)^2,k_2^2,(k_2+p_1+p_2+p_3)^2,(k_2+p_1+p_2)^2, (k_1+k_2)^2,(k_1-p_1-p_2-p_3)^2,(k_2+p_1)^2} , * crossed double-box, mz configuration: {k_1^2,(k_1+p_1+p_2+p_3)^2,(k_1+p_2+p_3)^2,k_2^2,(k_2-p_2)^2,(k_1+k_2)^2, (k_1+k_2+p_3)^2,(k_1+p_3)^2,(k_2-p_1-p_2-p_3)^2} , * crossed double-box, zz configuration: {k_1^2,(k_1-p_1)^2,(k_1-p_1-p_2)^2,k_2^2,(k_2-p_3)^2,(k_1+k_2)^2, (k_1+k_2-p_1-p_2-p_3)^2,(k_1-p_1-p_2-p_3)^2,(k_2+p_1)^2} . We also use the one-loop (one-mass) box family, made of the following integrals: j^ box(a_1, a_2, a_3, a_4) = e^γ_E∫d^4-2 k/iπ^2-1/D_box, 1^a_1 D_box,2^a_2 D_box,3^a_3 D_box,4^a_4 , with the four inverse propagators D_box,i {k_1^2, (k_1-p_1)^2, (k_1-p_1-p_2)^2, (k_1-p_1-p_2-p_3)^2 } . Feynman's prescription for the imaginary parts of all propagators is implicit. These family definitions (strictly with the ordering of inverse propagators and ISP shown above) correspond to the integrals that build the canonical MI bases provided in the pure_mi_bases/ directory of our ancillary files <cit.>. 
In this notation, each represents a Feynman integral within a given integral family, while the numbers a_i refer to the powers of its propagators and ISP. § OPTIMISED IBP REDUCTION PROCEDURE FOR AMPLITUDES WITH MANY PERMUTED INTEGRAL FAMILIES An amplitude will in general have contributions from permutations of the ordered integral families shown in figure <ref>. To reduce the tensor integrals in the amplitude, IBP identities must be generated for all the permutations of these ordered families. This can lead to a very large IBP system. The performance of the reduction setup is extremely sensitive to the number of IBP identities required so, to minimise the memory consumption, we choose to generate IBP identities only for the ordered families. Next, we obtain the reduction for any permutation of these families by permuting the `ordered' reduction numerically over finite fields. The result is then given in terms of MI of each family permutation, but it is missing the symmetry relations that can be found between subsectors of different families. To express the final result in terms of a minimal set of MI, we find such relations from a separate computation. One may account for integral symmetries using automated tools such as  <cit.>. Since we use a pure basis of MI, the symmetry relations amongst them will have rational numbers as coefficients. This is because the presence of any kinematic invariant would spoil the purity of the canonical DE (see <ref>), and would mean that such a symmetry relation in fact involves non-pure integrals. Therefore, the computation of the missing symmetry relations can be performed with all kinematic invariants set to numeric values, which significantly lowers the complexity of this task. Finally, we note that even if symmetries amongst the MI were missed, a representation of the integrals in terms of a basis of special functions — as we construct in <ref> — would automatically incorporate the extra simplifications and so the same final result would be obtained. Nonetheless, in practice we do find it useful to include these symmetry relations, as they reduce the number of independent coefficients that have to be processed further. The procedure can be summarised as follows: * Generate (analytic) IBPs for the six ordered families. * Compute the mappings between permutations of the MI of the system above. * Take the tensor integrals in the amplitudes for each permutation of these families and solve the linear system over finite fields. * Apply the symmetry mappings between the MI of each family permutation to find the minimal set for the full system. Since there are a few additional bits of terminology, we can consider a concrete example to clarify everything. At one-loop, a four-point process with a single off-shell leg can be described by a single independent integral family which is simply the box topology (see <ref> for its explicit definition). Following the Laporta reduction algorithm leads to a basis of four MI, MI^ box = {j^ box(1,1,1,1), j^ box(1,0,1,0), j^ box(0,1,0,1), j^ box(1,0,0,1)} , which are the scalar box and scalar bubble integrals in channels s_12, s_23 and s_4 respectively. An amplitude will, in general, be written in terms of three permutations of this family. Let us denote these permutations as j^ box, 1234, j^ box, 2314, and j^ box, 3124, where j^ box, 1234 = j^ box as above and the additional superscript indices refer to the order of the external legs. Following our procedure we would load one set of IBP relations generated for j^ box. 
These identities can then be permuted numerically, for example as graphs, to reduce tensor integrals in each of the three permuted families. The result is now in terms of twelve MI: three boxes and nine bubbles. While the amplitude is already in a minimal basis of box integrals, there is clearly an over-complete set of bubbles. The independent bubbles are in the channels s_12, s_23, s_13, and s_4, so the five additional symmetry mappings are j^ box, 2314(1,0,1,0) = j^ box, 1234(0,1,0,1) , j^ box, 3124(1,0,1,0) = j^ box, 2314(0,1,0,1) , j^ box, 3124(0,1,0,1) = j^ box, 1234(1,0,1,0) , j^ box, 2314(1,0,0,1) = j^ box, 1234(1,0,0,1) , j^ box, 3124(1,0,0,1) = j^ box, 1234(1,0,0,1) . After applying these identities we arrive at the final result with seven MI which cover all permutations of the integral families. This approach would not lead to any significant performance enhancements in this simple example of course, but it can be particularly important when considering high-multiplicity examples where the number of permutations is high. § RATIONAL PARAMETRISATION OF THE KINEMATICS Since we are applying finite-field techniques to helicity amplitudes, we employ a rational parametrisation of the external kinematics using Hodges's momentum twistor formalism <cit.>. While this is not essential to combat the algebraic complexity for the kinematics considered here, it does provide a convenient parametrisation of the spinor products. The single-off-shell four-particle phase space p is obtained from a massless five-particle parametrisation q (defined in appendix A of Badger:2021imn with {x_2↔ x_4,x_3↔ x_5}) under p_i = q_i ∀ i=1,2,3, p_4=q_4+q_5 . The momentum twistor variables x_i for p are then related to the scalar invariants s⃗ through s_12 = x_1 , s_23 = x_1 x_2 , s_4 = x_1 x_3 . Momentum twistors allow us to express any spinor expression as a rational function in the variables x_i. In this representation the helicity scaling is however obscured, as we have fixed the spinor phases in order to achieve a parameterisation in terms of the minimal number of variables (see e.g. Badger:2016uuq). Therefore, we need to manually restore the phase information at the end of the computation. This can be achieved by multiplying the momentum twistor expression by an arbitrary factor Φ with the same helicity scaling as the helicity amplitude under consideration, divided by that factor written in terms of momentum twistor variables. For example, for the helicity configurations of <ref>, we can use the phase factors Φ(-++) = ⟨ 1 2 ⟩/⟨ 2 3 ⟩^2 , Φ(-+-) = [ 1 2 ]/[ 1 3 ]^2 , which in our momentum twistor parameterisation are given by Φ(-++) = x_1^2 , Φ(-+-) = - 1/x_1 (1 + x_2 - x_3)^2 . We refer to appendix C of Badger:2023mgf a thorough discussion of how to restore the phase information in a momentum twistor parameterisation. § RENORMALISATION AND IR STRUCTURE We renormalise the coupling constant by trading the bare coupling α_bare for the renormalised one α_R through α_bare = α_R(μ_R) Z_α(α_R(μ_R) ) μ_R^2 S_ , with S_=(4π)^-e^γ_E. The renormalisation factor Z_α in the MS scheme is <cit.> Z_α(α) = 1 - α/4 πβ_0/ - (α/4 π)^2 ( -β_0^2/^2 + 1/2β_1/) + 𝒪(α^3) . The β-function is defined from the renormalised coupling as α_ R(μ_R)/lnμ_R = [ -2 + β(α_ R(μ_R)) ] α_ R(μ_R) , and expanded as β( α ) = -2 α/4 π∑_k≥ 0β_k (α/4π)^k , with β_0 = -4/3 , β_1 = - 4 . The photon wavefunction renormalisation factor is Z_A=Z_α, which we include due to the external off-shell photon. 
The complete renormalisation procedure then is 𝒜^μ_renorm(α_R) = Z_A^1/2(α_R) 𝒜^μ_bare(α_bare) , where α_bare is expressed in terms of α_R through <ref>. The IR poles of the renormalised amplitude factorise as <cit.> 𝒜^μ_renorm(α_R) = Z(α_R) ℱ^μ(α_R) , so that Z(α_R) captures all IR poles and ℱ^μ is a finite remainder. We obtain the explicit two-loop expression of the IR factor Z(α_R) by choosing QED parameters (C_A=0, C_F=1, and T_F=1) in the non-abelian gauge-theory expressions of Becher:2009qa. We expand it as Z(α) = ∑_k≥0 Z^(L)(α/4π)^L . The coefficients Z^(L) are expressed in terms of the anomalous dimension Γ = γ^cuspln(-s_12/μ^2)+2γ^l+γ^A , and its derivative Γ^' ∂Γ/∂lnμ = -2γ^cusp . Here, γ^cusp is the cusp anomalous dimension, while γ^l and γ^A are the lepton's and the photon's collinear anomalous dimensions, respectively. We expand all anomalous dimensions y∈{Γ,γ^i} as y = α/4π∑_k≥0 y_k (α/4π)^k , with coefficients γ^l_0 = -3 , γ^l_1 = -3/2+2π^2-24 ζ_3+(130/27+2/3π^2) , γ^A_0 = -β_0 , γ^A_1 = -β_1 , γ^cusp_0 = 4 , γ^cusp_1 = -80/9 . Finally, the coefficients of the IR factor Z up to two loop are given by Z^(0) = 1 , Z^(1) = Γ_0^'/4^2+Γ_0/2 , Z^(2) = Z^(1)^2/2 -3β_0Γ_0^'/16^3+Γ_1^'-4β_0Γ_0/16^2+Γ_1/4 . Putting together the subtraction of UV and IR poles, and expanding the resulting finite remainder ℱ^μ(α_R) in α_R leads to the definitions in <ref>. § ANALYTIC CONTINUATION We analytically continue the MPL by adding a small positive (or negative) imaginary part to the MPL indices l_i in <ref> whenever they fall between 0 and 1. The imaginary part of each index prescribes how to deform the integration contour around the pole associated with it. We do similarly for the logarithms in <ref>. To this end, following Gehrmann:2002zr, we change variables from (s_12,s_23,s_4) to (s_12,s_23,s_13), with s_4 = s_12 + s_23 + s_13. We then add a small positive imaginary part to the latter variables, as s_12⟶ s_12 + i c_1 δ , s_23⟶ s_23 + i c_2 δ , s_13⟶ s_13 + i c_3 δ , where c_1, c_2 and c_3 are arbitrary positive constants, and δ is a positive infinitesimal. Finally, we check whether this substitution gives a positive or negative imaginary part to each MPL index l_i. This depends on the domain of the kinematic variables. We focus on three kinematic regions which are of phenomenological interest. The analytic continuation for any other region may be obtained similarly. Electron-line corrections to e^- μ^- → e^- μ^- γ. To define the domain of the kinematic variables relevant for this application, we embed the four-particle off-shell process of <ref> in the five-particle process e^- μ^- → e^- μ^- γ. We then determine the kinematic constraints for the five-particle process (see e.g. appendix A of Chicherin:2021dyp), and from them derive the constraints on the four-point off-shell kinematics. The result is 𝒫_eμ→eμγ{s⃗ s_12 < 0 s_23 < 0 0 < s_13 < -s_12 - s_23} . The MPL index l_4 = - s_12/s_23 is always negative in 𝒫_eμ→eμγ, hence no analytic continuation is required. The other three indices may instead fall between 0 and 1. Let us study l_1. Changing variables from s_4 to s_13 and adding imaginary parts as in <ref> gives l_1 = s_12 + s_13 + s_23/s_12 + iδ/s_12^2[ (c_2 + c_3) s_12 - c_1 (s_13 + s_23) ] + 𝒪(δ^2) . The imaginary part of l_1 may be either negative or positive in 𝒫_eμ→eμγ. However, it is strictly negative in the subregion of 𝒫_eμ→eμγ where 0<l_1<1. We therefore assign a negative imaginary part to l_1 whenever 0<l_1<1 in 𝒫_eμ→eμγ. 
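This sign assignment can be checked numerically. The sketch below (plain Python, with an arbitrary small δ and random positive constants c_i, purely as an illustration) samples the region 𝒫_eμ→eμγ, keeps the points where 0 < l_1 < 1, and confirms that the imaginary part generated by the prescription is always negative there.

# Sketch: verify Im(l1) < 0 whenever 0 < l1 < 1 in the e mu -> e mu gamma region,
# using s_ij -> s_ij + i c_i * delta with small delta > 0 (plain Python).
import random

random.seed(1)
delta = 1e-9
largest_im = float('-inf')

for _ in range(100_000):
    s12 = -random.uniform(0.01, 10.0)
    s23 = -random.uniform(0.01, 10.0)
    s13 = random.uniform(0.0, -s12 - s23)         # 0 < s13 < -s12 - s23
    c1, c2, c3 = (random.uniform(0.1, 10.0) for _ in range(3))

    s4 = s12 + s23 + s13
    l1 = s4 / s12                                  # l1 = s4 / s12
    if not (0.0 < l1 < 1.0):
        continue

    l1_deformed = (s4 + 1j * (c1 + c2 + c3) * delta) / (s12 + 1j * c1 * delta)
    largest_im = max(largest_im, l1_deformed.imag)

print(largest_im)    # largest imaginary part encountered: negative, as claimed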
The analysis of the other indices follows similarly, and is summarised in <ref>. The arguments of the three logarithms in <ref> are positive in 𝒫_eμ→eμγ. Corrections to e^- e^+ →γγ^*. The relevant domain of the kinematic variables in this case can be derived directly for the four-point kinematics, and is typically named the s_12 channel. It is given by 𝒫_ee̅→γγ^*{s⃗ s_23 < 0 s_13 < 0 s_12 > - s_23 - s_13} . The MPL indices l_2, l_3 and l_4 can never fall between 0 and 1 in 𝒫_ee̅→γγ^*, and hence require no analytic continuation. We instead need to add a positive imaginary part to l_1. In this region also the logarithms in <ref> need to be analytically continued. The argument of log(s_12/s_4) is positive in 𝒫_ee̅→γγ^*. By adding imaginary parts to the arguments of the other logarithms and studying them where the arguments are negative in 𝒫_ee̅→γγ^*, we determine that the analytic continuation is achieved through the following replacements: log(s_23/s_4) ⟶log(-s_23/s_4)+ iπ , log(-s_4) ⟶log(s_4)- iπ . Corrections to the decay γ^* → e^- e^+ γ. The relevant domain of the kinematic variables is 𝒫_γ^*→ e e̅γ{s⃗ s_12 > 0 s_23 > 0 s_13 > 0 } . All MPL indices l_i in <ref> are either l_i < 0 or l_i > 1, hence no analytic continuation is required. The same holds for the first two logarithms in <ref>, whose arguments are positive. The only function which needs to be analytically continued is log(-s_4). We achieve this by replacing log(-s_4) ⟶log(s_4) - iπ . The information about the imaginary parts of the MPL indices can be fed into the publicly available libraries for evaluating these functions numerically, such as  <cit.>,  <cit.>, and  <cit.>. This typically leads to longer evaluation times with respect to MPL which do not need analytic continuation. We find that this is not an issue for the planned applications of our results (see <ref>). Nonetheless, we note that a more performant evaluation may be achieved by tailoring the representation to the kinematic region of interest in such a way that no MPL require analytic continuation. We refer to Gehrmann:2002zr,Gehrmann:2023etk for a detailed discussion. JHEP
http://arxiv.org/abs/2307.02332v2
20230705144202
Co-creating a Transdisciplinary Map of Technology-mediated Harms, Risks and Vulnerabilities: Challenges, Ambivalences and Opportunities
[ "Andrés Domínguez Hernández", "Kopo M. Ramokapane", "Partha Das Chowdhury", "Ola Michalec", "Emily Johnstone", "Emily Godwin", "Alicia G Cork", "Awais Rashid" ]
cs.HC
[ "cs.HC", "cs.CY" ]
Mapping Technology-mediated Harms, Risks and Vulnerabilities]Co-creating a Transdisciplinary Map of Technology-mediated Harms, Risks and Vulnerabilities: Challenges, Ambivalences and Opportunities 0000-0001-7492-7923 University of Bristol Bristol UK andres.dominguez@bristol.ac.uk 0000-0001-8420-3929 University of Bristol Bristol UK marvin.ramokapane@bristol.ac.uk 0000-0002-5367-6659 University of Bristol Bristol UK partha.daschowdhury@bristol.ac.uk 0000-0003-3807-0197 University of Bristol Bristol UK ola.michalec@bristol.ac.uk 0009-0002-7509-5174 University of Bath Bath UK ekj27@bath.ac.uk 0009-0000-2189-7847 University of Bath Bath UK eg780@bath.ac.uk 0000-0003-2892-9615 University of Bath Bath UK ac974@bath.ac.uk 0000-0002-0109-1341 University of Bristol Bristol UK awais.rashid@bristol.ac.uk The phrase “online harms’’ has emerged in recent years out of a growing political willingness to address the ethical and social issues associated with the use of the Internet and digital technology at large. The broad landscape that surrounds online harms gathers a multitude of disciplinary, sectoral and organizational efforts while raising myriad challenges and opportunities for the crossing entrenched boundaries. In this paper we draw lessons from a journey of co-creating a transdisciplinary knowledge infrastructure within a large research initiative animated by the online harms agenda. We begin with a reflection of the implications of mapping, taxonomizing and constructing knowledge infrastructures and a brief review of how online harm and adjacent themes have been theorized and classified in the literature to date. Grounded on our own experience of co-creating a map of online harms, we then argue that the map—and the process of mapping—perform three mutually constitutive functions, acting simultaneously as method, medium and provocation. We draw lessons from how an open-ended approach to mapping, despite not guaranteeing consensus, can foster productive debate and collaboration in ethically and politically fraught areas of research. We end with a call for CSCW research to surface and engage with the multiple temporalities, social lives and political sensibilities of knowledge infrastructures. <ccs2012> <concept> <concept_id>10003120.10003130.10003233</concept_id> <concept_desc>Human-centered computing Collaborative and social computing systems and tools</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10002978.10003029</concept_id> <concept_desc>Security and privacy Human and societal aspects of security and privacy</concept_desc> <concept_significance>300</concept_significance> </concept> </ccs2012> [500]Human-centered computing Collaborative and social computing systems and tools [300]Security and privacy Human and societal aspects of security and privacy [ Awais Rashid ================ § INTRODUCTION Recent years have seen growing debate among governments, academia, and civil society around a host of safety and ethical issues associated with the ubiquity, scale, and speed afforded by digital technologies. Some of these pertain to the widespread use of online forums and social media platforms, including the rise of mis/disinformation of various kinds, the spread of hate speech and toxic content, cyberbullying, online harassment, and other types of abuse of vulnerable groups including children <cit.>. 
Others stem from people’s everyday interactions with a variety of digital infrastructures and data-driven services where ethical issues manifest in injustices caused by automated decision-making; misuse, extraction, and exploitation of people’s personal data; or the ever more pervasive forms of surveillance impinging on people’s freedoms <cit.>. Investigations in these areas are led by a wide diversity of researchers across cybersecurity, data science, computer science, criminology, psychology, media and communication studies, philosophy, human-computer interaction, science and technology studies, law, among others. Much of this work has aimed at understanding the negative impacts of digital technology in society as well as developing tools to detect, predict and mitigate harmful outcomes. In the last 5 years, there have been more concerted governmental efforts in Europe, such as the EU's proposed regulations on platforms<cit.> and artificial intelligence <cit.>, and the UK's “Online Safety” bill <cit.>, which signal a willingness to deal with the global scale challenges posed by big data and social media, reign in the power of large technology companies, and regulate the digital economy <cit.>. These efforts have influenced the funding of academic research directed at tackling the most pressing individual and social harms through more evidence and tools to inform legislation, law enforcement, oversight and regulation. At the same time, research funding institutions increasingly view crossing disciplinary boundaries as an imperative for dealing with the biggest challenges of contemporary digital societies. While this reflects consistently in research funding calls and agendas, inter/trans-disciplinary collaboration is known to be challenged by entrenched academic cultures, hierarchies of knowledge, and prevailing institutional and power structures <cit.>. These issues are particularly salient in the highly complex and emerging landscape of “online harms” which is open to a diversity of conceptual definitions, terminologies, disciplinary orientations and political agendas. Research in CSCW has held a longstanding interest in studying how different designs, visualizations, and modalities of knowledge infrastructures support knowledge exchange and scientific collaboration <cit.>. In this paper, we wish to build upon and contribute to this body of work by drawing on our experience of co-creating a collaborative tool aimed at mapping and visualizing the vast area of research around online harm. We report on our work as academics within the UK National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online – REPHRAIN (hereafter the Center),[https://www.rephrain.ac.uk/about/] which was founded with a remit around “protecting citizens online”. The Center funds a wide range of theoretical, empirical, and experimental projects from various areas of research and disciplines. While each of these projects has its own timelines and deliverables, the Center encourages collaboration through the funding of cross-cutting work and spaces for co-creation among a cohort of over 100 affiliated investigators and external partners. To that end, one of the core aims of the Center was to co-create a digital knowledge resource—known as the “map of online harms”—that would provide relevant, up-to-date material to the Center’s stakeholders in academia, policy, industry, and third-sector organizations. 
The goal of this paper is to analyze and draw lessons from the co-creation of a collaborative digital artifact within a highly complex and contentious and evolving area of research where diverse disciplines meet. In particular, we bring attention to how an open-ended, always in the making, approach to co-creation can be generative in different ways to collaborative endeavors where consensus might be difficult to achieve. We argue that the map as an artifact—and the process of mapping itself—perform three interlocked functions for scientific collaboration and knowledge exchange, simultaneously acting as method, as medium, and as provocation. In the first part of the paper, we look at the implications of the practices of classification, mapping, and taxonomizing in settings where different epistemic communities (seek to) coalesce. We then review how different disciplines have defined, theorized, categorized, and synthesized evidence around the broad arena of online harms. In the second part, we elaborate each of the three map functions grounding the analysis on our 18-month journey of co-creating an interactive map of online harms and developing a framework of classification and visualization in collaboration with the Center stakeholders. We discuss the challenges encountered along the way throughout a series of co-creative moments including scoping sessions, data curation, language negotiation, visualization, and maintenance. We conclude by discussing how an open-ended approach to knowledge mapping, despite not guaranteeing consensus, can foster debate and collaboration in ethically and politically fraught areas of research. § MAPPING PRACTICES AND THEIR IMPLICATIONS §.§ When the map is not the territory Studying the implications of classifying and visualizing knowledge has been a long-standing area of interest across Computer-Supported Cooperative Work (CSCW), Science and Technology Studies (STS), and Human Geography. Attuned to the power dynamics present in collaboration during knowledge production, these disciplines reveal the assumptions and agenda inscribed in maps or taxonomies <cit.>. Sociotechnical studies of classification and visualization also demonstrate how the mundane work of standardization lays the ground for the creation of knowledge infrastructures, which then influences how people working together operate in society—for example, a diagnostic manual helping medical professionals decide between two similar health conditions, or a national research impact framework encouraging researchers to publish within a particular discipline <cit.>. Historical and ethnographic studies of scientific production have demonstrated that the act of classifying is not a neutral process of reflecting a “natural order’’ of the world. Instead, classifications are necessarily shaped by the goals of those who create them<cit.>. Classifying is in essence an act of sorting out, highlighting the existence of certain things at the expense of others <cit.>. Efforts to classify, systematize, and accredit knowledge are characterized by their long history spanning multiple disciplines and professions. 
[The outputs range from the early examples of encyclopedias in Ancient Rome <cit.>, 18th Century Linnaeus’ taxonomy of species<cit.>, to Bodies of Knowledge in contemporary professions like IT or Civil Engineering <cit.>] Many present-day classifications have become so widely accepted that they rarely get questioned in public debates, be it the metric system, diagnostic criteria for health conditions or spelling conventions <cit.>. The more standards are associated with authorities and expert gate-keeping, the more they are prone to resistance that views them as “imperialist imposition of representation, coercion, silencing, and fragmentation’’ [p.413]<cit.>. A famous remark “the map is not the territory’’ by Alfred Korzybski <cit.> points to this complex relationship between reality and its representations. The social acceptance of maps, graphs, bibliometrics and other scientific visualizations typically rely on the authority (i.e. power) of credentialed scientists and universities to tell stories with data. However, these stories are always selective, partial, and imbued with assumptions and politics which can be contested <cit.>. A case in point is the Mercator cartographic projection which inflates continents near the poles at the expense of land masses near the equator. Another example is a bibliometric measure of the h-index which reduces the “impact’’ of a researcher based solely on the ratio between the quantity of publications and citations. In contending with the above critiques of classification and visualization, some researchers and practitioners have been interested in exploring how to democratize knowledge production through more inclusive and inter/trans-disciplinary collaborations <cit.>. Theoretically, a significant body of research has been motivated by the question “how do diverse actors create a common understanding without losing the identity and autonomy of their social worlds?’’ <cit.>. Here, the concept of “boundary objects’’ has been particularly useful for understanding the dynamics of collaboration. The term originates from the foundational work by Star and Griesmer <cit.>, and since has been commonly adopted across the CSCW literature, see e.g., <cit.> on scenarios in design; <cit.> on onboarding materials, or <cit.> on healthcare records. In short, boundary objects are keywords, documents and artifacts that allow diverse groups to work together without consensus <cit.>. The key features of boundary objects are their interpretive flexibility, diverse structures of information needs, and, finally, the dynamic between ill-structured and more tailored uses of the objects <cit.>. Importantly, boundary objects do not reflect “things out there’’, rather they derive from an intention to collaborate and achieve common goals. Wenger <cit.> outlines activities necessary for successful collaboration at the boundaries of expertise: a) Abstraction facilitating a dialogue between communities of practice; b) Multi-tasking: several activities or practices are possible with a single boundary object; c) Modularity: different parts of the boundary object can serve as a basis for dialogue between actors; d) Efforts towards standardization of the information contained in the object to render the information interpretable. Collaboration, however, does not always guarantee more democratic or inclusive outcomes. Issues like institutional inertia and a lack of capabilities to maintain networks over time trouble the attempts of creating knowledge across siloes <cit.>. 
Collaborative approaches in research (often called co-design, co-creation or participatory research) have been criticized for the lack of conceptual clarity, the tensions they create between the open-ended nature of creative work and the requirement to tailor research proposals at an early stage, time pressures, expectations of impact, tokenism and epistemic burden, or insufficient resourcing and experience from community stakeholders <cit.>. Yet despite their numerous challenges, studies also show a promising path and growing demand for research involving co-design and participatory approaches. This is particularly the case in addressing complex issues of technology ethics, harm and injustice <cit.>. For example, recent CSCW research on the participatory classification of online harassment <cit.> argued that fully addressing online harassment requires an ongoing integration of vulnerable users' needs into the design and moderation of digital platforms. Similarly, research on participatory threat modeling encouraged traditionally marginalized people to define their own cyber security threats and preferred defense measures <cit.>. Advances in participatory methodologies have also extended to visualization, where creative techniques have been used to facilitate and illustrate conversations centered around the lay users’ experiences of computers and insecurity <cit.>. One of the main achievements of this strand of work has been a critical return to the notions of positionality and expertise, i.e., questioning who gets to frame, work on or benefit from research and classification activities. Knowledge infrastructures, if created in a collaborative way, tend to prioritize open access, continuous editorial process, and experimentation with regard to visual communication <cit.>. Collaboration also opens opportunities for productive disagreement, as stakeholders are actively encouraged to deliberate over their opinions in a structured and facilitated format. Building on this agenda, the mapping process and products we describe here, can be best understood and advanced through the lens of collaborative knowledge infrastructures and co-design. In much the same way as boundary objects, our online harms map is intended to be a gathering point between different communities not only for hosting academic literature, gathering policy evidence and scanning the research landscape but also for encouraging multi-stakeholder collaboration and dialogue beyond the academy. In the next subsection, we review extant efforts to define, classify and taxonomize online harms within different academic communities. §.§ Theorizing, taxonomizing and sorting online harm There is a vast body of research concerned with individual and social harms linked with the use of the internet and digital technology at large. The phrase “online harms” has more recently been used in academic and policy literature as a shorthand, perhaps more so in Europe following the publication of the UK government’s Online Safety Bill. In this context,the Online Safety Bill defines “online harms" as “user-generated content or behavior that is illegal or could cause significant physical or psychological harm to a person" <cit.>. We note that while we use the phrase in this paper due to its increasingly common usage in some academic, policy, and practitioner communities, we do not endorse the above definition and in fact flag its conceptual limitations. 
For instance, said definition focuses on “user-generated content or behavior" in an exceptionalist way while under-defining the role of institutional actors as well as other collective or social harms (e.g., harms to democracy). But there exist several other idioms referring to cognate and overlapping issues, some examples are data harms, online abuse, or cyber threats, risks, and vulnerabilities. Further, several subfields have emerged or built upon previous research in response to ethical concerns of information technology which are themselves adjacent to questions of online harm; some of these include, inter alia, data ethics, computer ethics, AI ethics, and responsible innovation. While we do not review the literature here (see <cit.> for a systematic review), it is pertinent for our purposes to make some broad observations. Because of the complexity and multiplicity of these topics of research, numerous schematizations and taxonomies of online harms, risks, and vulnerabilities have been borne out of diverse disciplines. Depending on their specific aims, these efforts seek to advance conceptual understanding, systematize empirical evidence, develop interventions, or inform policy around online harm. As noted by Cork and colleagues' <cit.> recent review, taxonomies from computational and information science disciplines tend to be broadly concerned with detecting and mitigating harmful content or cyber threats through different data-driven techniques (e.g., <cit.>), whereas taxonomies developed from social policy or social science disciplines tend to be primarily concerned with how best to define, evidence and conceptualize different types of harms (e.g.,  <cit.>), or inform the legislation of privacy and internet related harms (e.g., <cit.>). Depending on their specific aims, online harm taxonomies offer different approaches to distinguish between the “types” of harm that exist. While technical taxonomies of online harm often focus on the specific factors which can lead to harm—such as technical vulnerabilities  <cit.>, perpetrator intentions <cit.>, or methods used to inflict harm <cit.>—social science taxonomies foreground broader social impacts or dimensions of harm e.g., <cit.>. For example, Livingstone et al.  <cit.> propose four general “motivations” of online harm—aggressive, sexual, value-based, and commercial harms, whereas O’Connell and Bryce <cit.> suggest five “themes” of harm—information, human interaction, health/body/spirit, sex education/recreation and communication, and activities harms. The notion of harms associated with digital technology has already received considerable attention within the CSCW scholarship, even if not explicitly under the rubric of online harm. For example, recent papers have applied frameworks from mental health research to discuss “digital self-harm” in the context of eating disorders as well as the correlation between harmful events offline and online <cit.>. CSCW research has also taken interest in harm reduction through the provision of safe spaces online, e.g. for queer communities intending to come out or for transgender people to explore their identity <cit.>. Another major theme of research is an exploration of online harassment experiences and the provision of moderation guidelines; with key contributions emphasizing the need to integrate vulnerable users into the co-design of recommendations and prototypes <cit.>. 
It is worth highlighting that CSCW has a long history of research defining, measuring, understanding, and tackling discrimination and abuse online without adopting the terminology of harm, see foundational papers on racism, justice, and bias <cit.>. All in all, the landscape of research on online harm is marked by a diversity of research agendas and a lack of common vocabularies and definitive boundaries. These complexities pose numerous challenges regarding collaboration, particularly among scholars who are committed to different research paradigms, goals and methodologies, and who may disagree on concepts or interventions. A salient example of an ongoing debate is the concern by privacy advocates that tackling Child and Sexual Abuse Materials (CSAM) by weakening provisions for end-to-end encryption could legitimize more surveillance by the State or technology companies <cit.>. These challenges and tensions were part and parcel of our own attempt at building a collaborative knowledge infrastructure intended to map the terrain of “online harms” at the confluence of some of the disciplines mentioned here. While a review of the literature was a key input to the process, the goal of the map was not to develop a comprehensive inventory of harms or a static taxonomy, but a usable, configurable, and maintainable knowledge infrastructure. § CO-CREATING A MAP OF ONLINE HARMS §.§ Conception and rationale The REPHRAIN Center is a major interdisciplinary community focusing on investigating, reducing and tackling online harm. The Center was funded by UK Research and Innovation in the context of a national policy agenda around online safety. It gathers over 100 internationally leading experts from academic institutions working across 37 diverse research projects and 23 founding industry, non-profit, government, law, regulation and international research Center partners. The Center works collaboratively across disciplines on a variety of issues around privacy, security, data sharing in the digital economy, content moderation and technology-mediated harm. In addition to funding individual research projects, the Center employs “core researchers” (ADH, KMR, PDC, OM) who work on cross-cutting issues pertaining to the online harms landscape aiming to facilitate transdisciplinary work between projects, conduct scoping and horizon scanning work, integrate responsible innovation, engage policymakers, and raise the profile of the Center to external stakeholders to boost its impact and visibility. Alongside a team of core researchers, author (AC) worked on a project on defining and quantifying the notion of “online harm”, while authors (EJ, EG) were employed as research assistants reviewing and cataloging the outputs of the Center at large. A key outcome of the Center—the “map of harms”—was envisioned at the outset as a living, interactive, resource to showcase ongoing research within the Center as well as identified research gaps, relevant literature, and useful research tools and materials linked to particular themes[A live version of the map can be found on https://www.rephrain.ac.uk/rephrain-map/]. The specific format and affordances of the map were not decided a priori. Unlike the research projects funded under the Center which had a defined deadline, methodology, disciplinary orientation, and resources, the map was loosely defined and managed by the core researchers who led the co-design process in an iterative and experimental fashion. 
Broadly, the map was conceived with the following long-term aims in mind: * to facilitate the communication of research findings and policy recommendations to different stakeholders within and outside academia; * to boost the profiles of the researchers affiliated with the Center; * to help scope the future funding agenda, as aligned with identified research gaps. As part of the Center's bid for funding, a preliminary list of harms (see Table <ref>) was developed drawing from two sources: Daniel Solove's taxonomy of privacy paper <cit.> and the UK government's online harms white paper <cit.>. This list informed the funding of projects and the agenda of work with external partners, and served as the starting point for discussions concerning the map. In the following, we provide an account of how the map was brought to life and what utility it offered to different actors. We argue that the map—and the process of mapping—perform three mutually constitutive functions, acting simultaneously as a method, medium, and provocation. We ground our analysis on an 18-months process of co-creating a map of harms in collaboration with around 75 investigators and partners associated with the Center (see Figure <ref>). We reflect on the use of different methods of data collection and curation, standardization, collective deliberation, prototyping, synthesis, and design that contributed to the construction of the map. The co-creation process was led by the team of core researchers from different academic backgrounds (computer science, human-computer interaction and social sciences) who organized the data collection and coordinated activities with the Center stakeholders. The iterations of the map were discussed and validated regularly at Center-wide events and with the Center's leadership in strategic meetings. Throughout this period, the core team conducted a series of workshops, one-on-one interviews, online surveys, public consultations, and design sessions in collaboration with various stakeholders including mainly academics affiliated with the Center, but also industry partners, policymakers, and members of the public. Given that the map is, to the best of our knowledge, the first of its kind, the process did not follow a predictable, linear trajectory but was instead informed by an iterative and trial and error approach. The following analysis is therefore not chronological nor intended to provide a template of best practice. We draw lessons from the activities performed in order to foreground the different dimensions of collaboration in this arena as well as the mutually shaping interplay between each of the three map functions. §.§ Map as method During the launch of the Center, the core team held a series of exploratory scoping sessions aimed at identifying various types of online harm and how the Center may address them. These sessions sought to elicit views of various communities as well as inform the design of subsequent activities and the questions to be explored in them. We invited participants from existing networks of academics who had converging interests with the Center's research and those involved in its conception to discuss what online harm is and what they would expect from a map of online harms. The scoping workshops (9 in total) were attended by 40 participants (21 academics and 19 representatives from academia, industry, law enforcement, safety tech developers and policymakers. 
The discussions focused mainly on what counts as privacy and other related harms; what approaches, tools and methods exist to mitigate these harms; the potential misuses or malfunction of technical interventions; and how such failures could be prevented while providing adequate protections. These workshops called for more evidence and discussions around the prevalence of online harm, which harms are emerging or are yet to be addressed in the literature, their impact on different individuals and communities, what are the approaches and tools to mitigate harms, and open research, technical and regulatory challenges. They also highlighted challenges around addressing or reducing online harm in various spaces, sectors, situations, and organizations. Lastly, discussions around the map raised questions around what the map should offer, what features are critical, what audiences should the map target, as well as how the map should look. [See <cit.> for a detailed report] Throughout the different stages of mapping, and particularly during the exploratory stages, positioning the map on the horizon helped not only to scope pertinent questions for its development, but also set the scene for deeper discussions about terminology, concepts, interventions, and research methods. We found that the concept of the map was useful as a dialogical tool that enabled researchers to link up different bodies of knowledge, access research from other disciplines and translate concepts from discipline-specific jargon. Interim sketches and depictions of the map were helpful to spark discussions about how can we best visualize a complex arena of research and what are the implications of such representations. In order to materialize the first iteration of the map, we organized data collection sessions targeted at individual projects within the Center. We asked project Principal investigators (PIs) and Co-investigators (Co-Is) to complete an online form detailing (1) what harm(s) were being addressed by their projects, (2) a brief definition/description of the harm, (3) a list of research gaps, challenges or questions in relation to the identified harm(s), (4) the current state of the art including peer-reviewed academic articles, policy documents, white papers and reports, and lastly, (5) the technical, conceptual or methodological tools (both internally developed or external) to study, understand and addressing such harms. These responses were later used during face-to-face meetings to prompt investigators to expand or clarify their responses and how they could be accommodated in the form of a map. These data gathering sessions crucially helped to refine the initial list of online harms in terms of the adequacy of the terminology used (for example revising “pornography” for “image-based harm”), and they revealed a need to use lay and concise descriptions as well as add, remove or merge harms in accordance with ongoing work within the Center and the state of the academic debate (see Figure <ref>). This process of expert consultation, albeit relatively slow, [Data gathering meetings were onerous for both core researchers and project investigators with only a few meetings conducted per month.] was key to help the core-researchers curate and organise data in areas outside of their specialties. In this sense the core-researchers deferred to the project investigators to provide authoritative content yet without foreclosing further modifications and inputs from other stakeholders. 
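As a purely illustrative sketch of how submissions of this kind could be held in a structured form (this is not the Center's actual implementation, and all field names and example values below are hypothetical), the five items requested in the online form map naturally onto a simple record type:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProjectSubmission:
    """One project's response to the data-collection form (illustrative only)."""
    project_id: str
    harms_addressed: List[str]            # e.g. ["image-based harm"]
    harm_descriptions: List[str]          # brief lay definitions
    research_gaps: List[str]              # open questions and challenges
    state_of_the_art: List[str] = field(default_factory=list)   # papers, reports, policy docs
    tools_and_resources: List[str] = field(default_factory=list)

submission = ProjectSubmission(
    project_id="example-project-01",
    harms_addressed=["image-based harm"],
    harm_descriptions=["non-consensual creation or sharing of intimate images"],
    research_gaps=["prevalence estimates across platforms"],
)
print(submission.harms_addressed)
```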
A parallel, and asynchronous, data curation process was conducted with the aim of further populating the map with relevant and up-to-date literature produced by those whose work was expected to feed into the map, but might not have yet been approached through face-to-face data collection meetings. To do so, publications by all Center-affiliated researchers were manually collected, filtered, theme-coded and mapped onto the evolving list of harms. First, EJ and EG manually screened the titles and abstracts of a total of 232 papers for whether they addressed one or more harms within our list [To ensure the review of publications was as broad as possible, papers which considered online harms from the Center’s initial list as well as harms which were not listed by the Center at the time of screening were included. Papers which did not address or specify harms related to online activities or platforms were excluded.]. Then, a closer reading and depuration of the remaining papers were done to sieve out those that did not refer to or address online harm or the topic in any form. This process led to 125 papers being included in the first iteration of the map (see Table <ref>. The literature curation exercise offered a useful overview of the diversity of expertise and disciplines across the Center including, e.g., technical approaches to harm mitigation, methods for measuring or gathering evidence, policy interventions, or social scientific approaches to understanding harm. These papers were theme-coded according to the harm(s) they addressed and five high-level positive categories (developed from further discussion sessions, we discuss these in section <ref> ): privacy, safety and well-being, reputation, financial security, freedom of speech, and fairness. Although these methods of bibliographic analysis evoke (and could well lead to) a formal systematization of knowledge or literature review, that was not the primary aim. Instead, this exercise sought to be continuous and directly functional to the map: to showcase the online harms work within the Center in an useful way to different stakeholders and in relation to the evolving classification affordances of the map. In aiming to improve the functionality and usability of the map, and responding to feedback from a community consultation (see section <ref>), the team decided to further classify papers according to their methodologies (e.g., case studies, focus groups, or interviews), the type of victim (e.g., children, teenagers, sex workers, women), the type of perpetrator (e.g., romance scammers, extremist groups, sex offenders), and the technology or platform being studied (e.g., artificial intelligence systems, virtual reality, social media platforms). While these new categories emerged from a limited set of papers and are therefore not exhaustive, the expectation was that more granular information would offer users more options to navigate the map or find interconnections (or lack of) between different papers, authors, harms, technologies and attributes. Similarly, while the goal was to construct a visual representation of literature on harm, curating our collaborators' input and theme-coding the various attributes of harm in different ways led us to devise methods of synchronous and asynchronous collaboration, synthesizing previously disperse bodies of knowledge, and conducting meta-analysis in ways that were unexpected, and yet now standardized thanks to the use of codes and tags. 
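To illustrate how such codes and tags could support the kind of filtering the map later exposes, the following minimal example (again our own sketch rather than the Center's code, with invented entries) tags curated papers by harm, positive category and facet, and queries them:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class MapEntry:
    """A curated paper with the theme codes used to place it on the map (illustrative)."""
    title: str
    harms: Set[str]                      # e.g. {"cyberbullying"}
    categories: Set[str]                 # positive categories / social goods
    facets: Dict[str, str] = field(default_factory=dict)   # methodology, victim, platform, ...

def filter_entries(entries: List[MapEntry], harm=None, category=None, **facets) -> List[MapEntry]:
    """Return entries matching a harm, a positive category, and/or facet values."""
    out = []
    for e in entries:
        if harm and harm not in e.harms:
            continue
        if category and category not in e.categories:
            continue
        if any(e.facets.get(k) != v for k, v in facets.items()):
            continue
        out.append(e)
    return out

entries = [
    MapEntry("Study A", {"cyberbullying"}, {"safety and well-being"},
             {"methodology": "interviews", "victim": "teenagers"}),
    MapEntry("Study B", {"doxxing"}, {"privacy", "reputation"},
             {"methodology": "case studies", "platform": "social media"}),
]

for e in filter_entries(entries, category="privacy"):
    print(e.title)
```

Keeping the facet keys free-form mirrors how the additional dimensions (methodology, victim, perpetrator, and technology or platform) were bolted onto the classification as it evolved.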
As we will show next, this aspect of knowledge categorization was a key input for shaping the content and structure of the information presented to users. §.§ Map as medium Translating our collaborative work into various inscriptions, diagrams, mind maps, schematics, and sketches  <cit.> was a necessary endeavor in envisioning and materializing the map. Partial and interim depictions were used not only as milestones of progress toward fulfilling the intended knowledge-sharing function of the map, but as useful in unexpected and practical ways: containing a body of knowledge and definitions that informed others' research and literature reviews, linkages between researchers and pointers to their academic profiles, curated lists of papers, and areas of harm where more attention is needed. With the data collected from projects, we developed a prototype of the map which included a visual representation of the Center's list of harms, each of which would be populated with information containing definitions; identified research gaps, challenges and questions; external tools, datasets and resources related to online harms; relevant literature including peer-reviewed articles, policy documents, white papers and reports; and an inventory of expected deliverables by the Center projects. This first prototype—known as v.0 (Figure <ref>)—aimed not only to offer a visual aid for further data collection sessions by showing placeholders where data is required, but crucially to prompt further discussions about the role of the map, how it should look, its intended users and its implications. The map prototype included 6 harms containing relevant information, definitions and resources, as well as placeholders (in the form of greyed-out circles) for harms where data were still needed. After the launch of the first v.0 prototype, a public consultation was conducted seeking feedback on different aspects of the map such as look and feel, content, technical features, and other open-ended suggestions. We disseminated a link to the map prototype and an online questionnaire through various communication channels including mailing lists of allied networks of academics and social media (Twitter and LinkedIn). This was the first time the map was shared publicly to external stakeholders from industry partners, third-sector and civil society organizations. We received 7 anonymous responses during the public consultation period between 27/Nov/2021 to 14/Dec/2021. The feedback and recommendations from the public consultation were analyzed in a project management platform, grouped and theme coded into the following: map structure, layout/look and feel, content and literature, definitions of harms, use cases and features, other modifications of the map and additional comments. Recommendations were also prioritized according to their feasibility to implement as “immediately,” “in short term,” “in a second map iteration,” “for later re-consideration” or “no action.” Some of the prioritized recommendations included the need for providing users with an explicit description of what the map aims to achieve; explanations about how the list of harms came about; and how the literature was selected for each harm. The public consultation highlighted important aspects about the usefulness, purpose and implications of the map. One of the core pieces of feedback was the need to be explicit about the purpose and target audience of the map. 
For example, it was suggested that while some academics may be interested in the theory pertaining to online harms, policymakers would be more concerned with actionable evidence, as well as understanding the practical implementation of harm reduction and mitigation strategies, while computer scientists may be interested in the technical challenges of combating or measuring online harm. In an attempt to address this issue, we proposed that by breaking papers down into their components of harm —victims, technologies, methodologies, platforms— users of the map could more easily access a very heterogeneous knowledge base. In this way, a modular and interactive approach could be the most practical, wherein the audience could personalize their experience of navigating the map and find relevant material (see section <ref>). Breaking online harms down into their component parts could also allow users and curators to identify gaps and additional insights in the research being undertaken, for example, areas where certain types of victims may be underrepresented in the research, or identify parallels in mitigation approaches across multidisciplinary perspectives. This learning and the potential actions being proposed were later discussed in co-design sessions involving the core team and a team of designers and web developers with the aim of creating a new iteration of the map. In these sessions, different forms of visualizing and filtering data were workshopped and trialed during a period of three months. Figures <ref>, <ref>, <ref> and <ref> show the initial public release and the various search functionalities of the map. The maintainability and “future proofing” of the map were also considered at this stage: the back-end of the system was designed such that new data and edits to the current data would be conducted through GitHub. §.§ Map as provocation While the map progressed incrementally through discussion and (at least provisional) consensus between collaborators, it was also frequently a source of contention and disagreement. For instance, a recurring point of debate during co-design workshops was due to disagreement among researchers over the appropriateness of the term “online harms” for some of the issues addressed by the Center. This debate eventually led to the proposal to broaden the scope from simply a “map of online harms” to a “map of technology-mediated harms, risks and vulnerabilities” in order to encompass issues that cannot strictly be conceptualized as harms in their own right but could nonetheless lead to, or be linked with, harm (e.g., surveillance or misinformation). Much of this debate was informed by ongoing research on harm within the Center by AC which found that harm and risk are both ill-defined in terms of causes and outcomes. Risk is often defined as a factor that may cause harm, but the risk is identified post-hoc—after harm has occurred. Risk is also thought of as the potential for something to happen —be that positive or negative. Harm is also subjective—both when experienced at an individual level and when thought of in relation to social values being harmed. By contrast, issues relating to abuse often involves a perpetrator and a level of intentionality which are not intrinsic features of risk and harm. Another conceptual challenge was to define “technology-mediated” or whether the idea of online harm stems from presumed causal relations between technology (or specific technology affordances, features, platforms, systems, business models) and harm. 
The modifier “online” was in this sense contentious in that it raised questions about the specific nature of harm being addressed by the map, the implication of a hierarchy of harms that could privilege online vs offline, and what harms might and might not be construed as pertinent in this academic program as a result. Several co-design workshops were organized where researchers and participants of all-hands meetings (quarterly center-wide events to showcase progress and discuss the strategic direction of the Center) were invited to provide feedback on the map and validate its iterations. These workshops aimed to explore different intuitive visual interfaces and ways to sort, categorize, visualize, cross-reference and represent information on the map. Inputs were sought from participants in the form of design sketches, recommendations to group/add/revise harms, and relevant factors which could help filter information and navigate the map. One of the key outcomes of these deliberations was the need to improve the visualization and grouping of harms, risks, and vulnerabilities such that they provide more useful information to users, and if possible, show the links between them. Rather than merely a matter of usability and aesthetics, linking and grouping items raised key issues about what knowledge claims are advanced by the map and what are their implications for different users. These issues were explored in an internal co-design workshop (among the authors of this paper) where we asked “what are the harms, risks, and vulnerabilities we are studying a threat to?” and “What are the interventions we design aiming to protect or guarantee?”. In asking and answering these questions we drew inspiration from the threat modeling approach widely used in cybersecurity, as well as the United Nations human rights list. The result of this exercise was a framework for categorizing harms, risks, and vulnerabilities into five high-level positive categories or social goods, namely: privacy, safety and well-being, reputation, financial security, freedom of speech, and fairness. We mapped each of the harms, risks and vulnerabilities to one or more of the five positive categories as shown in Table <ref>. The framework was then validated in a follow up co-design workshop (for the record of the workshop process, see Figures <ref>, <ref>, <ref> ) as part of the Center's all-hands meeting where we asked participants to validate the utility of the categories. For this workshop, we asked participants working in three different groups (of 4 to 5 people each) to classify the list of harms against the positive categories or social goods using our suggestions as examples but not limited to them (i.e., harms could be mapped onto more than one category). We explained that while the five categories were not meant to be exhaustive they were intended to subsume the themes addressed by Center projects. Participants were encouraged to add new categories, remove irrelevant ones or change the terminology if needed. Similarly, participants were not bound to the suggested list of harms and were free to add new harms or refine the terminology. Each group then presented their results and rationale to the other groups. Using the collected material from each of the tables the core team analyzed and consolidated the results with updated terminology and classifications. The need for new categories and/or terminology would be evaluated as needed if new research did not fit the existing ones. On Survivability and Maintenability. 
A key feature of the map, as envisioned in its conception, was its open-endedness and future relevancy. The ambition was to offer researchers the ability to update and refine the contents and structure of the map and ensure its survivability and maintainability beyond the lifespan of the Center, thereby prompting the need for a system for contributions and curation of new data entries. We discussed the design of such a system in all-hands meeting workshops with the Center researchers who raised questions of gate-keeping (who can contribute to the map and in what capacity?), frequency and types of contributions, technical requirements for updates and additions, and ongoing maintenance costs. These discussions led to the following conclusions: First, a streamlined system to populate the map should be used as a way to replace the manual and time-consuming process of requesting contributions from investigators via one-on-one meetings. Such a system should encourage contributions and cater to researchers with different technical skills and preferably rely on open-source software. Second, the system would require a data curation role to accept or reject contributions to populate and update the map, as well as contributor roles assigned to investigators within the Center and invited external contributors. Third, a workflow and other relevant documentation should be written to make the process transparent and guide decision-making and future updates to the map. Fourth, funds would need to be secured to ensure the map continues to be hosted, maintained and updated in the future. These goals remain open challenges at the time of writing of this article, not least due to ongoing debate among contributors with different technical skills about the choice of user interface for facilitating updates (e.g., wiki vs git), the policies to vet contributors of the map, and the availability of funds to maintain the project in the long term. § DISCUSSION: MORE THAN JUST A MAP The process of co-creating a map of online harms taught us valuable lessons about how knowledge representations emerge and how they get challenged or stabilized in transdisciplinary and inter-organizational collaborations. The purpose of the map was loosely defined from the outset with the expectation that all of the Center investigators and external contributors would help to shape and populate it. As we have shown, the map transcended its original scope and fixed temporality as a deliverable, not only serving as an open repository of knowledge about the range of research and the projects' outcomes, but allowing, throughout its construction, to uncover new insights and spark debate. The process of sourcing data and feedback from investigators opened up previously unforeseen challenges and opportunities, conceptual, methodological, and epistemic contentions or disagreements. It also highlighted competing views about the function and implications of the map. The process of mapping was by no means linear as interim findings, failures and moments of learning, importantly altered the initial goals and ambitions. Here, we have brought attention to three mutually constitutive functions performed by the map beyond its original aim as a deliverable. We demonstrated that our map of online harms simultaneously operates first as a  method of scientific collaboration, acting as a motive for dialogue between different communities and catalyzing modes of asynchronous cross-referencing of academic work. 
Second, it provides a  medium for knowledge representation and a repository that allows different stakeholders to sort out and find relevant information and a bird's eye view of multiple interconnections between harms, technologies, researchers, and disciplinary outputs. Finally, the map serves as a  provocation encouraging contention, dispute and disagreement, which in turn challenges the work of data curation in deciding the content, the form, the timing, the survivability, and the provenance of the knowledge that is represented in the map. These three functions necessarily inflect one another and render the map an always unfinished endeavor. In grappling with these facets of mapping an area of research that calls for sometimes urgent social and political action, the role of curators and facilitators are critical for dealing with the lack of consensus, recognizing provisional milestones, instigating and facilitating collaboration over time, taking pragmatic decisions and ensuring the maintainability of the map. By bringing attention to continuous iteration and feedback, we foreground the living aspect of creating a knowledge infrastructure. Instead of treating the map of harms as a static, one off taxonomy, this effort shows the value of knowledge that gets updated, expanded, or even challenged within an existing network of collaborators. Ultimately, knowledge infrastructures—especially if pertaining to transdisciplinary and contested issues—display a “rhizomatic” character, that is, with multiple points of exit and entry and connected in multiple and surprising ways <cit.>. Our experience of knowledge co-production reveals a networked, ambivalent and highly unpredictable process simultaneously opening space for the co-existence of plural and sometimes diverging views, and the evolution of ideas, counterposed with the need for pragmatic utility, closure and standardized categories. The emphasis on survivability evolved into design principles of modularity, customizability, transparency in the editorial policy, and concurrently an invitation to challenge the evolving knowledge base. Without it being the original purpose of the project, the emerging product evokes some of the affordances of knowledge management systems like Obsidian [https://obsidian.md/] or even Wikipedia [https://en.wikipedia.org/wiki/Main_Page], while still remaining restricted to a relatively small research community rather than aiming to be a universal taxonomy. Quite crucially, although our process of co-design has been generative in various ways, it is not without drawbacks. Despite the fact that the Center brings many experts together, the knowledge base is not intended to be exhaustive and indeed gives more visibility to emerging work, much of which from focused projects and early career researchers. So too, despite efforts to garner inputs from diverse stakeholders, the map was principally shaped by academic expertise and those knowledgeable of the Center but less so by non-academic groups and experts with lived experience of online harm. There are many practical reasons for this including the complexities with obtaining resources, ethical clearance and a strong case for involving external participants given the wide scope of the map in terms of domains of online harm. As a result the views from lay users and non-academic groups could only be indirectly represented by investigators engaging with such groups within their projects. 
The map is also not representative of all possible online harms but only those for which there is data and ongoing work in specific contexts and locations linked with the Center and allied collaborators elsewhere. Yet at the same time, there is an implied expectation of authoritativeness and generalized utility of the information it offers. These ambiguities and gaps are not easy to reconcile and might not be always transparent to users. As such, the map poses a challenge of communicating clearly the limitations and scope of the knowledge base without undermining its value to inform technology users and academics, the funding of further research, and policy and regulation. Another limitation is that the broad intended audience of the map risks addressing everyone and no one at the same time. The map might not meet the expectations of all its stakeholders, containing material that may only be useful to some, and that is biased towards the views, terminologies, mental models, and interface preferences of its curators and contributors. As much as the process of co-production has tried to be as open and inclusive without falling into the trap of endless debate, a challenge remains to enable the map to have useful entry points for various users and forms of expertise in the future. The success of the co-creation process is difficult to measure in this regard because there are no established benchmarks for evaluation and because the map can have intangible benefits as it is used and appropriated by different stakeholders in unexpected ways. This is an important question that calls for continual assessment in use. Going forward, the future of the map remains open and we envisage multiple possible applications of the resource. First, recalling Star's theorization of the cycle of life of boundary objects <cit.>, we anticipate that some of our stakeholders (e.g., our partners in civil service) would advocate for standardization and maintanance of (some of) the map content. For example, our knowledge base could be integrated into governmental documents and inform the work on around online safety and online harms regulation<cit.>. So too we hope that other non-academic users and contributors (e.g., civil rights organizations or victims of harm) can benefit from the open-ended format of the resource and input ideas from communities that are most vulnerable to online harms. Finally, the uncertainties associated with the research funding landscape put the long-term maintenance of the resource into question. Ultimately, this highlights the never finished and precarious nature of maintenance and curation, an issue deserving of care and appropriate funding in its own right <cit.>. § CONCLUSION In this paper, we reflect on the 18-month-long process of co-creating a knowledge infrastructure in the transdisciplinary context of online harms. We wish to bring to the fore the challenges of mapping an emerging body of work on highly contentious, unsettled, multifarious and pressing matters of concern. After an unstructured (and messy) co-design journey, we arrived at a malleable, collaborative and contestable resource that highlights several dimensions of technology-mediated harms, risks and vulnerabilities. Among other features, the map includes six desirable social goods, outstanding research challenges, signposting to foundational resources and researchers in each area, and modular filtering of resources. 
Our contribution exemplifies how CSCW research could broaden discussions about transdisciplinary and inter-organizational collaborations to include useful reflections about the politics, discomforts, failures, pressures, residual prototypes, and lessons arising along the way in such co-productive efforts. By highlighting the three interrelated functions of the map (method, medium, and provocation), we were able to show the opportunities and challenges associated with collaborations across social worlds, the negotiation of boundary objects and the ambiguities of establishing an unfinished yet variously useful knowledge infrastructure. This is an important call for CSCW to foreground and engage with the multiple temporalities, social lives and political sensibilities of knowledge infrastructures. This work was supported by REPHRAIN: The National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online, under UKRI grant: EP/V011189/1. We would like to thank all contributors to the map and Yvonne Rigby and Robert Schultz-Graham for their support throughout the research.
http://arxiv.org/abs/2307.02788v1
20230706055307
Resolving cosmic star formation histories of present-day bulges, disks, and spheroids with ProFuse
[ "Sabine Bellstedt", "Aaron S. G. Robotham", "Simon P. Driver", "Claudia del P. Lagos", "Luke J. M. Davies", "Robin H. W. Cook" ]
astro-ph.GA
[ "astro-ph.GA" ]
We present the first look at star formation histories of galaxy components using ProFuse, a new technique to model the 2D distribution of light across multiple wavelengths using simultaneous spectral and spatial fitting of purely imaging data. We present a number of methods to classify galaxies structurally/morphologically, showing the similarities and discrepancies between these schemes. Rather than identifying the best-performing scheme, we use the spread of classifications to quantify uncertainty in our results. We study the cosmic star formation history (CSFH), forensically derived using ProFuse with a sample of ∼7,000 galaxies from the Galaxy And Mass Assembly (GAMA) survey. Remarkably, the forensic CSFH recovered via both our method (ProFuse) and traditional SED fitting (ProSpect) are not only exactly consistent with each other over the past 8 Gyr, but also with the in-situ CSFH measured using ProSpect. Furthermore, we separate the CSFH by contributions from spheroids, bulges and disks. While the vast majority (70%) of present-day star formation takes place in the disk population, we show that 50% of the stars that formed at cosmic noon (8-12 Gyr ago) now reside in spheroids, and present-day bulges are composed of stars that were primarily formed in the very early Universe, with half their stars already formed ∼12 Gyr ago. galaxies: bulges – galaxies: elliptical and lenticular, cD – galaxies: evolution – galaxies: general – galaxies: luminosity function, mass function – galaxies: spiral – galaxies: star formation – galaxies: structure § INTRODUCTION The extraordinary diversity of the present-day galaxy population is marked by a wide variety of galaxy properties. One such property is the physical structure of galaxies, in terms of galaxies that are disk-like in structure, spheroid-like, or the large population that contains multiple structural components. The build-up of structure in the very early Universe (prior to cosmic noon) is extremely chaotic, with the infall of gas causing immense star formation in large clumps, and frequent galaxy mergers making the definition of galaxy structure difficult <cit.>. By cosmic noon, however, structure is sufficiently well defined to describe galaxies in the context of substructures such as bulges and disks. As an example, <cit.> presented visual classifications of galaxies out to z=1, beyond which the fraction of structurally chaotic galaxies increases. From this point, there is an array of physical mechanisms that can result in the transformation and growth of structure with time. Disks can grow a bulge with time, either via disk instabilities funnelling disk material into a central bulge concentration (a structure often referred to as a pseudobulge, ), or through mergers that add material straight into a bulge <cit.>. Such a merger-origin bulge is frequently referred to as a “classical" bulge. This two-phase mode is frequently implemented in semi-analytic models to grow bulges <cit.>. Spheroidal galaxies can be formed from either disk or two-component systems via larger mergers that destroy all existing structure <cit.>. Potentially, spheroidal systems could rebuild a disk through the accretion of gas, whose angular momentum creates star formation in a disk-like structure (predicted to occur in simulations by , and evidence for which is potentially seen in studies by ).
Finally, galaxies may experience no morphological change with time, with components instead growing through mechanisms like star formation (the case for disks), or via stellar mass build-up from mergers (usually for spheroidal-like structures, as is the case for the mass growth in brightest cluster galaxies since z∼1-2 ). These galaxy structures/morphologies have been strongly linked to the star formation properties of the galaxies themselves. This was originally inferred very simply through a strong link between galaxy colour and its shape <cit.>, and then through measurements of the star formation rates as well <cit.>. By analysing the galaxy components themselves rather than simply characterising the overall galaxy shape, it was observed that the bulge fraction was also linked to the overall star formation <cit.>. This led to significant discussion and debate as to the impact of structural components like bulges on the overall star formation in galaxies <cit.>. Understanding the structural evolution of galaxies in the context of their star formation is therefore needed to build a more consistent picture of galaxy evolution. Pinpointing not only which evolutionary pathways have occurred across cosmic time, but also in what relative fractions, is an ongoing challenge in the field of galaxy evolution <cit.>. Making progress on this question observationally is hindered by a number of factors, the major one being that we are limited to observing individual galaxies in only a single snapshot. Given this limitation, two separate approaches must be taken to actually infer the temporal evolution of structure. The first of these is to compare galaxy populations across different epochs, to see how they are changing. This approach is relied upon across the field of galaxy evolution to infer the evolution of most properties, including star formation rates <cit.>, metallicities <cit.>, velocity dispersion <cit.>, velocity profile shapes <cit.>, mass density profiles <cit.>, and a multitude of other properties, as well as the structure of galaxies <cit.>. There are significant challenges in inferring evolution from different properties though, originating from progenitor bias <cit.>, selection effects, and observational limits. The other approach in inferring temporal evolution is to study the forensic histories of nearby galaxies themselves. What evidence is there in individual galaxies of evolving properties with time? This two-pronged “in-situ" versus “forensic" approach has been well demonstrated in the pursuit of accurately constraining the cosmic star formation history (CSFH). The now-famous review by <cit.> brought together a suite of observational star formation rate density (SFRD) measurements across a wide redshift range to present a CSFH that showed the decline in star formation in the Universe since “cosmic noon" at z∼2. The exact nature of the CSFH prior to cosmic noon is still under debate, in particular because characterising the obscuration of star formation due to dust at high-z is challenging <cit.>. The alternate mechanism of deriving the CSFH comes from a forensic analysis of a volume-complete sample of low-redshift galaxies. By deriving the star formation histories (SFH) of all galaxies within a volume of the Universe, the CSFH as a whole can be inferred.
While limitations in stellar populations analysis (usually in the form of SED modelling) makes it challenging to accurately derive the CSFH in this manner <cit.>, improvements in SED modelling assumptions have made this approach more viable in recent years <cit.>. By applying this forensic-style approach not only to galaxies as a whole, but to their structural subcomponents, it is possible to study when the stars in different galaxy structures formed. Actually isolating the stellar populations of galaxy components has historically required the successful completion of multiple steps. The first of these, is identifying the structural components of a galaxy through an analysis of the galaxy light profiles — a process known as structural decomposition <cit.>. Galaxy light can either be modelled as a single component <cit.>, or with two or more components <cit.>. Multi-wavelength imaging has also been used to increase the quality of galaxy decompositions, as shown by <cit.>, using the galapagos-2 code. Delving even deeper, two-dimensional decompositions can be extracted across multiple wavelengths to generate an SED per component. In a separate step, this can then be modelled with an SED-fitting code to extract forensic properties like star formation histories. Works that have used this approach include for example <cit.> who applied this to 17,600 galaxies from the CANDELS survey, to produce catalogues of bulge and disk properties. Because of this multi-step approach, such studies are complex and intricate, with potentially limited room for scientific interpretation. With the recent development of ProFuse <cit.>, it is now possible to conduct image decomposition and SED fitting in a single, self-consistent and physically motivated step. This is quite distinct to similar approaches that have been developed recently to extract bulge and disk stellar populations from IFU data (see for a more detailed discussion of these other techniques). In this work, we aim to present the first analysis applying this technique to a volume-limited sample, extracting the cosmic star formation history forensically for bulges, disks, and spheroids directly. The data used in our analysis are described in Sec. <ref>, with our ProFuse analysis technique and method described in Sec. <ref>. We present a detailed discussion of how our structural classifications compare to other methods in Sec. <ref>, and the results from our analysis are presented in Sec. <ref>. We discuss implications of our results in Sec. <ref>, and summarise our conclusions in Sec. <ref>. The cosmology assumed throughout this paper is H_0 = 67.8 km s^-1 Mpc^-1, Ω_m = 0.308 and Ω_Λ = 0.692 <cit.>. § DATA This study utilises the wealth of data from the now-public Galaxy And Mass Assembly (GAMA) survey[<http://www.gama-survey.org/>] <cit.>, which is a galaxy redshift survey conducted on the Anglo Australian Telescope covering 250 square degrees, with a total of 303,542 redshifts <cit.>. We focus on a sample of 6,664 z<0.06 galaxies from the three equatorial regions of the GAMA survey (G09, G12, and G15) with high-quality redshift measurements, analysed in <cit.> and also <cit.>. We use multi-wavelength imaging from the GAMA DR4 <cit.>, as outlined by <cit.> in the u/g/r/i/Z/Y/J/H/Ks bands. In the optical bands, the imaging originates from the VST <cit.>, collected through the KiDS survey <cit.>, and in the infrared the imaging originated from VISTA <cit.>, collected through VIKING <cit.>. 
This imaging is all aligned using SWarp <cit.> to a common pixel scale of 0.339 arcseconds (matching the native pixel scale of the VISTA image). To conduct various completeness corrections related to the use of a volume-limited sample throughout this work, we make use of ProSpect-derived z_max values, which estimate the redshift to which any given galaxy is observable given the GAMA DR4 95% completeness limit of m_r < 19.65 <cit.>. These were derived by generating the best-fitting SED for each galaxy (as derived by the ProSpect fits from using the updated photometry presented by ), and then regenerating this SED at a range of redshifts, identifying the value at which the magnitude limit is reached. Given the unmasked area of 169.29 square degrees covered by the sample in this work, the z_max value can then be converted to a V_max, representing the maximum volume within which the galaxy would remain observable (and hence the fraction of the survey volume it effectively probes). § METHOD §.§ ProFuse modelling The tool that we use to conduct the simultaneous spectral and photometric decomposition of our sources is ProFuse <cit.>. This combines the SED-fitting capabilities of ProSpect <cit.>, the structural modelling capabilities of ProFit <cit.>, and the source-finding capabilities of ProFound <cit.>. §.§.§ Structural models Separate structural models have been applied to each galaxy, similar to those used by <cit.>. We generally use a Sérsic profile to model individual structural components in galaxies. The separate models are:
* BD: Bulge+Disk mode, with an exponential disk component with Sérsic n=1 and a circular de Vaucouleurs bulge with n=4;
* FS: Free Sérsic mode, featuring a single component with a free Sérsic index;
* PD: PSF bulge + Disk mode, where the disk component has Sérsic n=1, however the bulge is modelled by a point source that is convolved with the image PSF (which is different in each band);
* DD: Disk + Disk mode, where both disk components have Sérsic n=1, but the axial ratios and position angles of the disks are free. We expect that this is an infrequently preferred model, however it may be appropriate in cases with very prominent bars or colour gradients.
The DD run was not presented in <cit.>, however as we will show it is the best-selected model for only a small handful of sources. The implementation of a two-component model with a disk and a free-Sérsic bulge was explored, but with the average sizes of bulges (∼1 kpc) and the resolution of GAMA, the Sérsic index would be poorly constrained (given the quality of sky subtraction and accuracy of the PSF). This poor constraint would add sufficient degeneracy to the fit, and hence for this work we have deemed it more favourable to simply fix the Sérsic index of the bulge in the BD mode. §.§.§ SED-fitting models To constrain the relative brightness of the two-dimensional models across different wavelengths, an SED for each component is generated, in much the same way as traditional SED fitting. Therefore, free parameters associated with ProSpect are included in our modelling. The SED modelling implementation used for all four ProFuse configurations is identical, and mirrors the ProSpect modelling approach used by <cit.> and <cit.>. In short, a parametric star formation history is implemented, using the parametrisation, i.e. a skewed Normal star formation history with a truncation implemented in the early Universe (forcing the SFR to be 0 at the start of cosmic time).
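As a concrete illustration of this kind of parametrisation (this is a sketch only, not the actual ProSpect implementation, whose functional form and parameter names differ in detail), a skewed-Normal star formation history with an early-time truncation can be written as follows; the parameter names and the specific taper used here are our own assumptions:

```python
import numpy as np
from scipy.stats import skewnorm

def sfh_snorm_trunc(t_lb, mpeak=10.0, mperiod=2.0, mskew=-0.5,
                    t_start=13.4, trunc_width=0.5, norm=1.0):
    """Illustrative skewed-Normal SFH in lookback time (Gyr).

    mpeak/mperiod/mskew set the peak epoch, width and skewness of the
    skewed-Normal core; the logistic taper forces SFR -> 0 towards the
    start of cosmic time (t_start), mimicking the early-time truncation.
    """
    sfr = norm * skewnorm.pdf(t_lb, a=mskew, loc=mpeak, scale=mperiod)
    taper = 1.0 / (1.0 + np.exp((t_lb - t_start) / trunc_width))
    return sfr * taper

# Example: evaluate on a lookback-time grid spanning the age of the Universe
t_lb = np.linspace(0.0, 13.4, 200)
sfr = sfh_snorm_trunc(t_lb)
```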
Metallicity is allowed to evolve linearly (meaning that the metallicity growth is mapped directly to the growth in stellar mass, achieved using the parametrisation), with the final gas-phase metallicity modelled as a free parameter. §.§ Structural nomenclature It is essential to be explicitly clear about the nomenclature adopted in our work, as the usage of structural terms varies across the literature. The three terms that we will heavily use are disks, bulges, and spheroids. Disk terminology is consistent with the common use, describing any flattened, circular structure. Disks can either be one part of a two-component system, or galaxies can be purely disks. Bulge is used to describe the ellipsoidal structure at the centre of a two-component system. Spheroid, in our work, is used to describe the single-component structures that are ellipsoidal. Note that with this usage, bulges are not deemed to be a subset of spheroids (which is how the term is often used in the literature). In this sense, disks, bulges, and spheroids are not treated as overlapping categories, and are instead viewed as “eigenstructures" of galaxies. §.§ Model Selection Regardless of the manner in which a galaxy decomposition is made, an essential part of the process is determining which model best describes a galaxy's two-dimensional distribution of light. This can be the most challenging part of structural decomposition, as in many cases a single-component model can describe the light distribution of a galaxy just as well as a bulge plus disk model. What should then determine the choice? This choice has frequently been made using visual classifications or inspections <cit.>. While a completely numerical quantifier is desired (to remove subjective visual decisions), this approach has remained elusive <cit.>. Because reliable and purely numerical discriminators remain controversial, visual classifications have remained important in much morphology/structure-based work. As datasets are increasing in size dramatically though, requiring visual inspections of galaxies is becoming increasingly infeasible. An alternate approach increasingly being used is machine learning techniques, where effectively a computer is trained to quickly and efficiently replicate a visual classification on a very large scale. Many galaxy morphological classifications have been made in this manner <cit.>. At this stage these classifications have not yet readily been used for much in the way of follow-on science, although an exception to this is <cit.>, who used machine learning morphologies to study the evolution of lenticular galaxies with cosmic time. The benefit from a machine learning approach is usually simply that larger sample sizes can be processed. Machine learning morphologies will suffer the same potential biases that any visual classification scheme will, due to the inherently qualitative nature of the approach, and because nearly all machine learning classifications have been trained on visually classified data. For our work, we apply a numeric quantifier from our ProFuse outputs to quantitatively categorise our sample into bulges, disks, and spheroids. Acknowledging however that any classification scheme (whether numerical or visual) still bears some ambiguity, a key aim of this work is to demonstrate the inherent uncertainties that still accompany any attempt to classify galaxies into structural classes. We therefore also incorporate a number of different visual morphological classifications that have been conducted for this sample of galaxies.
These different classification schemes are outlined in the following subsections. §.§.§ ProFuse numerical model selection The best model selection is expanded from <cit.>. Using the parameters derived from each of the models, simple arguments are used to decide whether one model is clearly more physical (for example, if the B/T is very low, then there is motivation to assume that only a single component is required to produce a good fit to the galaxy). In cases where clear physical motivation is not found, the Deviance Information Criterion (DIC, which is smaller in the case of a better fit to the data) is used to determine preferred models. A galaxy is best-fitted by FS if it satisfies any of the following criteria:
1) spurious bulges are fit using BD for galaxies with disk-like profiles using FS: (Re_BD,bulge/Re_FS > 3) & (B/T_PD < 0.1) & (n_FS < 1.5);
2) negligibly tiny bulges are fit using PD for galaxies with disk-like profiles using FS: (B/T_PD < 0.01) & (n_FS < 1.5) & (n_FS ≥ 0.5);
3) all two-component models show very high bulge fractions: [(B/T_PD > 0.7) | (B/T_BD > 0.7)] & (Re_BD,bulge/Re_FS > 3);
4) the fitted disk in a BD system is negligibly small, indicating that only the bulge component fits the whole galaxy: Re_BD,disk = 1.
Then, of the remaining unclassified galaxies, PD is selected if 0.1 < Re_BD,bulge ≤ 1.1 (measured in pixels), ensuring that a PSF bulge is only used if the fitted bulge in BD mode is significant, but smaller than the PSF. Finally, all remaining galaxies are classified based on the preferred DIC for each of the models (where a lower value indicates a better fit). The criteria for this are as follows.
FS is selected if: log(DIC_BD/DIC_FS) > 0.02 & log(DIC_PD/DIC_FS) > 0.02.
BD is selected if: log(DIC_FS/DIC_BD) > 0.02 & log(DIC_PD/DIC_BD) > 0.0.
PD is selected if: log(DIC_FS/DIC_PD) > 0.03 & log(DIC_BD/DIC_PD) > 0.0.
Finally, of the remaining unclassified galaxies, DD is only selected for a small number of objects for which the model is significantly preferred: log(DIC_FS/DIC_DD) > 0.2 & log(DIC_BD/DIC_DD) > 0.2 & log(DIC_PD/DIC_DD) > 0.2.
The above criteria have been slightly modified from the similar implementation by <cit.>. The final DD criterion has been added (as the DD model had not yet been implemented for that work), and additionally further minor optimisation changes were made. The values that are different have been indicated using bold font. This approach results in a purely numerical mechanism of determining the structure of galaxies. In total, 5036 galaxies were selected as best modelled by FS (free Sérsic), 443 by BD (bulge and disk), 1168 by PD (PSF bulge and disk), and 17 by DD (two disks). For all sources deemed best-described by a single component with a free Sérsic index (FS), it is necessary to make a separate decision on the type of structure that this component is consistent with. Such galaxies with a very low n are pure disk systems, whereas those with the highest n are purely spheroidal. For the sake of the structural analysis in this paper, we use the following criteria to classify these structures. If n ≤ 1.5, the structure is a disk. If n > 2.5, the structure is a spheroid. If 1.5 < n ≤ 2.5, the structure is treated as ambiguous. A schematic implementation of the DIC tie-breaker and this Sérsic-index cut is sketched below.
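The following Python sketch re-expresses the logic of these final selection steps. It is schematic rather than the actual ProFuse implementation: the logarithms are assumed to be base-10, the function and variable names are ours, and the physically motivated pre-checks (spurious or negligible bulges, extreme bulge fractions, vanishing BD disks) are assumed to have been applied beforehand.

```python
import numpy as np

def classify_free_sersic(n):
    """Assign a structure type to a single-component (FS) fit from its
    Sersic index, using the cuts quoted in the text."""
    if n <= 1.5:
        return "disk"
    if n > 2.5:
        return "spheroid"
    return "ambiguous"

def dic_preferred_model(dic_fs, dic_bd, dic_pd, dic_dd):
    """Schematic DIC-based tie-breaker (lower DIC = better fit),
    following the log-ratio thresholds quoted above."""
    if np.log10(dic_bd / dic_fs) > 0.02 and np.log10(dic_pd / dic_fs) > 0.02:
        return "FS"
    if np.log10(dic_fs / dic_bd) > 0.02 and np.log10(dic_pd / dic_bd) > 0.0:
        return "BD"
    if np.log10(dic_fs / dic_pd) > 0.03 and np.log10(dic_bd / dic_pd) > 0.0:
        return "PD"
    if all(np.log10(d / dic_dd) > 0.2 for d in (dic_fs, dic_bd, dic_pd)):
        return "DD"
    return "unclassified"
```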
The ambiguous classification is simply included as an acknowledgement that the exact Sérsic cut used to separate disks and spheroids contributes to the classification uncertainty, and hence throughout the analysis in this work, the ambiguous sources will be used to contribute to the uncertainty measurements of each population. §.§.§ Visual morphological classifications A visual re-classification of the 6,664 galaxies of this work has been conducted (by SB) into structural classes that are equivalent to those defined by the automatic ProFuse classifications described in the previous section. The classes used in this classification are much broader and simpler than previously-used schemes, and are limited to three categories: disk, spheroid, and two-component galaxies. These have been called “classes" throughout the paper. These classes complement visual classifications that have been previously conducted on this sample by <cit.> (“Classes" hereafter), sorting the galaxies into their Hubble types. The Hubble classes used are ellipticals (), , , , , , and Little Blue Spheroid () classes. As part of GAMA DR4, <cit.> conducted an additional visual classification that was based on the visual structure of these galaxies, rather than the visual morphology. We refer to these classifications as the “4 Classes" hereafter. The DR4 classes separate galaxies into pure disks (), ellipticals (), compact sources (), and then two-component systems, either with a compact bulge (), or with a diffuse bulge (). Galaxies that are too difficult to classify due to irregular structure such as from mergers are labelled “hard" (), or “hard elliptical" (). These account for a tiny fraction of the sample, and as such are not significant in this analysis. When presenting results from visual classification methods throughout this paper, we use properties derived from our ProFuse modelling. When visual classes have described a galaxy as having a single-component fit, we assign the FS properties to the galaxy. When a two-component system has instead been selected, then we can either assign the BD or PD properties to the galaxy. Wherever relevant for the duration of this paper, we present results with both options, to indicate the plausible uncertainty originating from model selection. We highlight that this is the most pessimistic uncertainty, as a likelihood criterion (as used in the automatic ProFuse classes) can be used to isolate the better-fitting description to shrink this uncertainty. This same approach is taken when presenting the bulge and disk properties using the 4 classes. § COMPARING CLASSIFICATION SCHEMES The literature is littered with different methods of classifying galaxies into classes that describe their structural composition. For the sake of making comparisons between the results derived from such classifications, it is necessary for us to comment on their relative agreement, or disagreement. We show a comparison between our ProFuse model classifications and the Classes, 4 Classes, and Classes in Fig. <ref>. The bijective agreement (which combines the fractional occupation of every class along both the row and the column) between the ProFuse classes and each of the visual classifications is shown in Fig. <ref>. The darker the matrix element is coloured (as indicated by the colourbar), the better the agreement.
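For reference, a minimal sketch of how such an agreement matrix element could be computed is given below. The exact definition of the bijective agreement used in the figure is not reproduced here; this sketch combines the row and column fractional occupations as a geometric mean, which is one plausible reading of the description above, and the function and variable names are ours.

```python
import numpy as np

def bijective_agreement(labels_a, labels_b, class_a, class_b):
    """One plausible bijective agreement between class_a of scheme A and
    class_b of scheme B: the overlap expressed as a fraction of both the
    row and the column populations, combined as a geometric mean."""
    labels_a = np.asarray(labels_a)
    labels_b = np.asarray(labels_b)
    in_a = labels_a == class_a
    in_b = labels_b == class_b
    overlap = np.sum(in_a & in_b)
    if overlap == 0:
        return 0.0
    f_row = overlap / np.sum(in_a)   # fraction of scheme-A class also in scheme-B class
    f_col = overlap / np.sum(in_b)   # fraction of scheme-B class also in scheme-A class
    return float(np.sqrt(f_row * f_col))
```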
For each matrix class combination, we plot the log(M_*/M_⊙) – log(n) distribution of the subset, coloured by the g-i colour (the relevant subset of the left-hand panel). For each matrix entry, a dashed box indicates regions that we would reasonably expect to be populated if the classification schemes were working perfectly as intended. The fact that the majority of these regions are highly populated generally suggests that the classification schemes are working fairly consistently. The agreement between visual disks from DR4, and single components from ProFuse is shown to be high, with a bijective agreement value of 59.2%. The class that is most similar to pure-disk systems is (where there is no prominent bulge), and so it is reassuring to see that the bijective agreement between ProFuse Disk and galaxies is also high (with a value of 50.1%). It is notable that there is also a substantial overlap between pure disks and systems (which are regarded as having bulges, albeit not as prominent as lenticular galaxies), albeit smaller (9.5%). The overlap between spheroid and ellipticals in both relevant schemes is substantial (with bijective agreement values of 17.4% and 15.8% respectively), which is to be expected. Of interest though, is that our spheroidal population present a bimodality in both stellar mass and colour, which is contrasted to both visual elliptical classes that tend to only include high-mass, red galaxies. We find that when compared with the classes, the majority of these low-mass, blue spheroids are either labelled as (little blue spheroid) galaxies, or galaxies. This shows that our definition of spheroidal is a much broader class than simply the elliptical class. For this reason it is important not to use the “Elliptical" label for our spheroidal class. ProFuse-classified two-component systems tend to mostly overlap with either or galaxies (as should be expected), and while there is a slight preference for them to overlap with the classes (which would make sense given these are disk galaxies with a prominent bulge), there is a broad range of classes that also overlap with this class. Regions that are more darkly shaded, but that do not have a dashed outline, represent regions where the classifications start to diverge. The vast majority of such regions actually fall within the ProFuse category, which accurately reflects the ambiguous nature of these sources. In the visual classifications of this work, they most closely relate to the visual spheroidal sources, suggesting that the n cut used to separate disks and spheroids in the ProFuse classifiers is perhaps skewed slightly further toward disk than suggested by eye. These sources are most closely linked to sources in the DR4 scheme. This is not to be expected, and suggests that the ProFuse classifications either miss diffuse bulges, or the visual classification is unnecessarily associating a bulge to a single-component system. Because bulges in our current implementation of ProFuse are always modelled as circular (in both the BD and PD models), it is possible that we are missing pseudobulge-style structures that are inherently elongated (as the two-component model may not be preferred in this scenario). A random sample of galaxies in this category is shown in Fig. <ref>. Other categories that are perhaps surprisingly populated are the intersection between ProFuse two-component systems and each of the visual disks and spheroids. Random samples of each category are presented in Figs. <ref> and <ref> respectively. 
A second component in each image is only subtle, however it is distinctly arguable that they do exist, suggesting that classification fatigue has resulted in these being missed when classifications were conducted by eye. In this scenario, the automatic classification from ProFuse seems to be superior. The conclusion of this comparative analysis is that no single classification scheme appears to be objectively superior to the others, and there are benefits and drawbacks to each scheme. For this reason, this work does not aim to provide a new classification that is superior to previous ones, but rather to use differing (but complementary) classification schemes to bracket the range of true classifications, as a manner of estimating classification uncertainty. For the remainder of this paper, we will present results using each of the ProFuse, SB and DR4 classifications. Throughout this paper, when we present statistics relating to the visual classes, we show outputs from our ProFuse modelling. For all single-component visual classes ( , , , and / [Note that this category was not included in Figs. <ref> and <ref> simply because they are infrequently selected classes at our redshift range. ]) we use the FS ProFuse model, and for all two-component classes ( and ), we use the typical BD model. It is important to note that the "total" sample properties according to the two classes will therefore vary (even though the sample itself is the same). § RESULTS §.§ Galaxy Fits An example of full ProFuse outputs can be seen in figure 8 of <cit.>, which demonstrates how the SED and 2D fitting elements of this technique come together to produce an accurate model with a wealth of data for an individual galaxy and its structural components. As a brief demonstration of the galaxy fits achieved by our ProFuse implementation in this work, we show three-band (g/r/Z) thumbnails of the best-fitting model and corresponding residual for a handful of sources in Figs. <ref>-<ref> (with a further selection presented in Appendix <ref>). Residuals shown are the absolute residual, highlighting both the negative and positive features. For each galaxy, the image, model and residual thumbnails are identically scaled, to make all visible features directly comparable. Fig. <ref> shows a handful of examples for which the ProFuse model results in little residual flux, largely due to the lack of visual substructures within the galaxies (seen in the right-most column of the figure). Because these galaxies are smaller on the sky, the lower effective resolution likely smooths over any substructure in the galaxies, making them easier to model. Structure types that have not been explicitly modelled in this work include components such as bars and spiral arms. Galaxies with very strong bar features are shown in Fig. <ref>. Here, it is clear in the right column that there is a notable portion of the galaxy light that has not been well-modelled by ProFuse. In particular, the bar is clearly visible in the residual images, and the red colour is dramatically different from the blue spiral arms, hinting at very distinct stellar populations within these structures. While we find that the total SED of the galaxy is still well-approximated on average by ProFuse (and hence the global properties are not likely to be dramatically affected), the 2D model is non-optimal. It has been shown by <cit.> that a classical bulge plus disk decomposition of a barred galaxy can overestimate the spheroid components by a factor of 4-100.
Using the SDSS <cit.>, <cit.> estimate that 26% of galaxies with g<16 have bars, which is higher than the results that come from a visual inspection of our colour residuals, in which we identify bars in 9% of our sample in the same magnitude range. The resulting impact of neglecting bars in our fitting therefore has the potential to be quite significant, although the light fraction of a bar in a galaxy can vary dramatically, so the true impact is difficult to estimate. Interestingly, while the residual structure for the galaxies in Fig. <ref> seems to be similar, a mix of models (BD, FS and PD) was selected for the galaxies themselves. This highlights that the presence of a bar does not tend to influence the automatic classifications from ProFuse, as bars are badly modelled by all model configurations in general. To better model the galaxies in future, it will be necessary to incorporate a bar model. Interestingly, the residual features have a very strong colour signature. This indicates that an SED analysis of the residual features with ProSpect will already be possible with existing ProFuse outputs, allowing the stellar populations of features such as bars and spiral arms to be studied. While it is beyond the scope of this work to conduct such an analysis, we feel it is a natural future extension. Additional samples of galaxies are presented in the same manner in Appendix <ref>, to demonstrate some of the interesting galaxy phenomena that are identifiable from this modelling method. §.§ Stellar mass distributions The stellar mass distributions of the full sample, as well as the disk, bulge, and spheroid populations are shown in Fig. <ref>, for each of the three classification schemes used in this work. While the sample itself is identical in each of the schemes, the overall stellar mass distribution displays minor variation, thanks to the different combinations of ProFuse models that were selected to best describe each galaxy in each scheme. As mentioned in Sec. <ref>, either PD or BD configurations could be assigned to galaxies that were visually deemed to be two-component systems. Rather than attempting to assign a better model numerically to each individual galaxy, we conduct the analysis with both, which gives the most pessimistic estimate of the error originating from this modelling uncertainty. The uncertainty ranges in the bulge and two-component disk components in the middle and bottom panels of Fig. <ref> are the extremes bounded by the scenarios where either all two-component systems are described by PD, or all two-component systems are described by BD. The use of PD generally results in a lower-mass bulge (because of the limited size that a bulge can have), and conversely the use of BD can produce higher-mass bulges. This is what is responsible for the substantial uncertainty in the bulge mass distribution at the low- and high-mass extremes. From the automatic ProFuse classes, it seems that lower-mass bulges are generally best described by the PD configuration, whereas high-mass bulges are best described by BD. There are significant differences that demonstrate the fundamentally different approaches used to classify galaxies. The double-peaked distribution of disks separates relatively cleanly between the pure disk and the two-component disk systems. This is true for all three classification schemes, although to different degrees.
The 4 classes produce the highest low-mass peak in two-component disks, whereas the classes have the lowest number of low-mass two-component disks (with the ProFuse classes somewhere in the middle). This behaviour is mirrored in the bulge population, with the 4 classes having the most low-mass bulges, the classes the fewest, with the ProFuse classes somewhere in the middle. In general, it seems that the ProFuse classes produce a result that is the compromise between the different visual schemes at play in the 4 and classes. To compare the overall mass distribution of our sample with previous work, we present the total mass function in Fig. <ref>. This is compared against the recent mass function from <cit.>, and the previous GAMA mass function from <cit.>. We see that our mass function appears to be shifted slightly towards higher stellar masses than that of both <cit.> and <cit.>. This is to be expected, given that the ProFuse-derived stellar masses are on average 0.1 dex larger than that of ProSpect (which is where the stellar masses originated from for the analysis). This offset is shown in fig. 11 of <cit.>, with more detailed discussion than we repeat here. The mass function we derive when dividing by structure types is presented in Fig. <ref>, where the indicated uncertainty ranges convey the error produced through different classification approaches (compressing the information conveyed in the three panels of Fig. <ref>). Here, a completeness correction was conducted on a per-galaxy basis using the 1/V_max values for each galaxy computed using ProSpect. For ease of interpretation, the left panel of Fig. <ref> focuses on disk components, whereas the right panel focuses on the bulge and spheroid components. We include the Schechter fits to equivalent structure mass functions from <cit.> in coloured lines. The total disk mass functions are generally consistent, as is the two-component disk distribution. Equivalently, the spheroid distribution is also entirely consistent. Some differences exist when comparing the pure disk systems, where we note that our ProFuse pure disk mass function extends to much higher stellar masses than that of <cit.>. Similarly, our mass function for bulges is much less peaked, and contains a higher density of low-mass bulges than noted by <cit.>. We demonstrate the contribution of bulges, disks, and spheroids to the overall mass budget as a function of stellar mass in Fig. <ref>, alongside the corresponding curves from <cit.>. It is remarkable that our mass fractions are almost entirely consistent with those measured in the past using Hubble-type classifications alone. At high stellar masses (>10^10M_⊙), our values are indeed entirely consistent. It is just at lower stellar masses where we tend to assign more mass to both bulges and spheroids, and less mass to disks, than <cit.>. Note that objects are not explicitly considered in the curves of <cit.> (potentially accounting for the difference in the spheroidal populations). §.§ Cosmic star formation history Using our volume-limited sample, we derive the star formation rate density (SFRD) using the z<0.06 volume to represent the Universe. The SFRD is derived by summing the star formation histories of all the galaxies in the sample, normalised by the volume. 
To account for incompleteness at the lowest stellar masses (below 10^9 M_⊙), we scale the contribution of each galaxy to the SFRD by V/V_max,[Often referred to simply as a 1/V_max correction] which corrects for the portion of the studied volume over which the galaxy is undetectable within the selection limits. Here, V is the volume of the observational sample, and V_max is the lesser of the maximum volume within which the object is observable and the full volume of the sample. The resulting total SFRD of our sample derived using our ProFuse method is shown in Fig. <ref> as the thick black line. The standard compilation of data points[Measurements included in this compilation come from the following studies: <cit.>] by <cit.> are shown as faint grey data points, as well as the more recent observational measurements of the CSFH by <cit.>. The forensically-derived CSFH using only broadband SED fitting via ProSpect by <cit.>[We present here the SFRD presented in appendix B of <cit.> rather than the main body of the paper, to compare results using the equivalent metallicity evolution prescription.] is shown in Fig. <ref> as a dashed magenta line. While both this derivation and the in-situ measurement by <cit.> use the modelling power of ProSpect, the implemented methods of deriving the SFRD are entirely different. Although the same parametrisation for the SFH and metallicity is used within both ProSpect and ProFuse, the fact that the SFHs can effectively be multi-modal using ProFuse means that these CSFH derivations were by no means destined to look the same. Despite this, the results do agree for the most part, except for slight differences in the early Universe and small discrepancies at low redshift. This lowest redshift range has been shown in Fig. <ref> by the inset, highlighted in orange, so that discrepancies are clearer to see. At a lookback time of 1 Gyr, the ProSpect SFRD is larger than that derived by ProFuse by 13% (although the two are still entirely consistent within the uncertainty range introduced by different classification schemes). A possible explanation for the recent dip in the ProFuse CSFH is that star-forming clumps in galaxies (such as in their spiral arms) are not explicitly fitted. This is evident in the residuals shown in Fig. <ref>. If recent star formation is systematically under-fitted, because the structure is clumpy and not well-described by our model configurations, then we may be systematically underestimating the recent SFR in some cases. An additional contribution to the potential difference at recent times is the manner in which a completeness correction is conducted. A more simplistic correction was conducted by <cit.>, which may account for some differences in the lowest lookback time bins of the SFRD. That the resulting CSFH is still generally consistent despite the addition of extra flexibility speaks to the robustness of this result for this sample of galaxies. The only difference between the total CSFH in this work and that derived using ProSpect by <cit.> can be seen at lookback times of greater than 6 Gyr, where the added flexibility of multiple components in this work has resulted in more star formation being recovered. Through a comparison of SED-fitting outputs from GAMA, galaxies from the semi-analytic model Shark <cit.>, and SED-fitted outputs of the Shark galaxies, <cit.> conducted a very thorough analysis to determine the reliability of forensic properties derived from ProSpect.
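For illustration, the weighting scheme can be sketched as follows, assuming per-galaxy star formation histories tabulated on a common lookback-time grid and the z_max values described earlier. The sky-area factor cancels in the V/V_max ratio, so full-sky comoving volumes suffice there, while the survey volume itself follows from the 169.29 square degree footprint. This is a schematic outline rather than the actual analysis code, and the function names are ours.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.8, Om0=0.308)   # cosmology adopted in this paper

def vmax_weights(z_max, z_survey=0.06):
    """Per-galaxy V/V_max completeness weights, where z_max is the redshift
    out to which each galaxy stays above the survey magnitude limit."""
    v_survey = cosmo.comoving_volume(z_survey)
    v_max = np.minimum(cosmo.comoving_volume(np.asarray(z_max)), v_survey)
    return (v_survey / v_max).decompose().value

def csfh(sfh_grid, z_max, survey_volume_mpc3):
    """Forensic SFRD(t): sum the V/V_max-weighted SFHs (shape [n_gal, n_time],
    in M_sun/yr) and normalise by the survey volume in Mpc^3."""
    w = vmax_weights(z_max)
    return (w[:, None] * sfh_grid).sum(axis=0) / survey_volume_mpc3
```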
They determined that the evolution of galaxy properties could be accurately reproduced to lookback times of ∼6 Gyr, but that beyond this epoch, the results start to be dominated by the modelling assumptions. Given that the light from the oldest stellar populations is much fainter than from younger ones, it is unsurprising that the constraint from an SED-fitting tool is much lower here. For this reason, the deviation we see in the CSFH between the ProSpect-only and ProFuse derivations may well simply be the consequence of low constraints from these stellar populations. The measurements by <cit.> are also derived using ProSpect with a linear metallicity evolution model, consistent with the SED modelling approach in this paper. <cit.> also include AGN in their modelling as described by <cit.>, resulting in an improved estimation of galaxy properties, especially at high-z. Our forensic derivation is consistent with these observations within uncertainty in the past 8 Gyr, however we note that our forensic SFRD is perhaps underestimated in the 8-10 Gyr range, and then overestimated beyond 10 Gyr. The agreement between each of the ProSpect-derived approaches in measuring the SFRD is remarkable within the most recent 8 Gyr of lookback time. The SFRD with contributions by different components is presented in Fig. <ref>, with the total SFRD shown in black, and in colour the contributions toward the total SFRD by different galaxy structures, including disks (blue), bulges (red), and spheroids (orange). Values plotted in Fig. <ref> are presented in Table <ref>, with full data available in the supplementary material. The corresponding evolution in the stellar mass density inferred by our star formation history is shown in Fig. <ref>. The impact of the slight difference in total SFRD between ProSpect (from ) and ProFuse is a 16% factor increase in the total stellar mass density. Note that despite this increase, the present-day stellar mass density is still entirely consistent with the compilation data[This compilation includes SMD measurements by <cit.>] of <cit.>. Values plotted in Fig. <ref> are presented in Table <ref>, with full data available in the supplementary material. We discuss the results for each component individually in the following subsections, highlighting that all trends refer to the stars that are within each component at the present day, as opposed to the component that the stars may have been in at previous epochs. §.§.§ Cosmic star formation history of disks Present-day disks are host to the vast majority of stars formed within the last 8 Gyr. We further divide this contribution into that from pure disks, and disks in two-component systems (shown in Fig. <ref> and <ref> as dashed, and dashed-dotted blue lines respectively). While pure disk systems are responsible for more present-day star formation, this is only very recently true, due to the decline in star formation in two-component disk systems in the past 2 Gyr. The appearance of a recent downturn in the star formation rate density of two-component disks is unlikely to be an indication that bulges cause quenching, and likely simply the consequence of the fact that these disks are more massive, and quenched fractions are greater at higher stellar mass <cit.>. While disks do have some old stars, the fraction of stars from the early Universe (z>2) that resides in present-day disks is very small, at only ∼15%. 
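The half-mass epochs and early-formation fractions quoted in this section follow from straightforward bookkeeping on the component-wise SFRD. A minimal sketch of that bookkeeping is given below; it neglects the stellar mass returned to the ISM and so is illustrative only, with the function and variable names being our own.

```python
import numpy as np

def half_mass_lookback(t_lb_gyr, sfrd):
    """Lookback time by which half of the stellar mass traced by a
    component's SFRD (tabulated on t_lb_gyr, in Gyr) had formed."""
    order = np.argsort(t_lb_gyr)[::-1]              # integrate from early times forward
    t = np.asarray(t_lb_gyr)[order]
    dt = np.abs(np.gradient(t))                     # bin widths in Gyr
    m_cum = np.cumsum(np.asarray(sfrd)[order] * dt) # proportional to mass formed
    m_cum /= m_cum[-1]
    return float(np.interp(0.5, m_cum, t))
```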
§.§.§ Cosmic star formation history of spheroids The more recent contribution to the CSFH by spheroids is substantially higher than that of bulges. The epoch in which spheroid stars were dominantly formed was cosmic noon (around 9-13 Gyr ago), and while there has been a steady decline in the number of spheroid stars formed since a lookback time of ∼11 Gyr, spheroids are still responsible for ∼10-40% of present-day star formation. In fact, the shape of the spheroid SFRD with time very closely resembles that of the total CSFH. This suggests that the spheroid population is likely a mixed bag in terms of SFH properties. §.§.§ Cosmic star formation history of bulges Bulges are seen to be a very old population, with the vast majority of stellar mass in present-day bulges already formed by cosmic noon. In fact, half the stars currently in bulges had already been formed by a lookback time of 11.8 Gyr. These bulge stars are distinctly older than spheroid stars. To confirm that this is not the consequence of dust in highly inclined two-component systems, we compared the bulge CSFH for inclined and face-on systems (based on the axial ratio of the corresponding disk), seeing no bias based on galaxy inclination. This lends confidence that bulges truly are measurably older than spheroids. It is generally accepted that observationally-identified bulges encompass a range of different physical structures, including “classical" bulges, and also “pseudo-" bulges. These pseudobulges have different physical structure <cit.>, and are thought to be younger than their classical counterparts <cit.>. It is possible that we are missing such pseudobulges in our automatic ProFuse classes of structure, as we do not include a model configuration that has a non-circular bulge with a free Sérsic index. If pseudobulge-hosting galaxies were specified as two-component objects from the visual classifications, however, then a bulge complex would have been forcibly modelled, even if it did not produce the best fit to the imaging data. Therefore, we expect that the stellar populations of these sources would still be represented in the visually identified bulge samples. Any differences between the CSFH derived by the different classification schemes would then be apparent in the uncertainty range presented in Fig. <ref>. Even considering these uncertainties, bulges are still consistently substantially older than the disk population. A small subset of the bulges in our analysis are younger than their corresponding disks (8-21%, depending on the classification scheme[Using the ProFuse classifications, this value is 16%.]), and so to specifically isolate these bulges from the rest of the population, we show in Fig. <ref> the contribution to the bulge CSFH by “normal" older bulges in the red dashed-dotted line, and the contribution by the relatively younger bulges in the red dashed line. The bulges that are relatively younger than their disks, while generally older than the disk population, actually more closely resemble the spheroid stellar populations than the typical bulges. A more detailed analysis of this population, and the potential implications for galaxy formation theories, are left for a future work in which this can be explored in greater depth. Our results indicate that either young pseudobulges are a very subdominant fraction of the bulge population (as measured by using SDSS), or pseudobulge structures have similar stellar population properties to classical bulges.
Further investigation is required to discriminate between these scenarios from our modelling. Note as well that our bulge population is quite possibly contaminated by bars (as discussed in Sec. <ref>), which is capable of biasing the bulge CSFH. §.§ Future improvements in CSFH measurement Despite the substantial effort that has been invested in recent years to better constrain the CSFH prior to cosmic noon, there is still significant uncertainty. This is driven by the inherent difficulty in measuring SFRs of galaxies in this epoch, due to uncertainties in the substantial dust corrections required at this epoch <cit.>. The analysis presented in Fig. <ref> suggests that the capacity for codes like ProSpect to forensically measure the SFRD is excellent over the last ∼8 Gyrs. We therefore speculate that in future, one of the most promising methods used to constrain the z>3 SFRD will be to conduct a forensic analysis of galaxies at z∼0.8. This is realistically the greatest distance at which forensic-style CSFH extractions are possible, as the stellar mass of the completeness limits becomes too high at greater lookback times, and morphologies become too irregular to model the galaxies well. This is also the epoch at which the uncertainties of the derived stellar mass function increase dramatically in the analysis by <cit.>. With this forensic approach at higher redshifts, there should then be enough constraining power within high-quality data to access cosmic dawn. The dataset that will make this possible is the complete spectroscopic redshift survey WAVES-deep <cit.> to be conducted on 4MOST <cit.>, and other redshift surveys conducted with facilities like MOONS <cit.>, in combination with high-quality imaging from facilities such as Euclid <cit.>. § DISCUSSION §.§ Literature comparison of bulge and disk formation epochs The overall trends presented for bulges, spheroids and disks are qualitatively similar to the parametric model presented by <cit.>, where star formation in present-day spheroids (referring in that work to all ellipsoidals) was deemed responsible for the bulk of star formation at and before cosmic noon, followed by an increase in star formation in disks. The subtle differences we identify are more prolonged star formation in spheroids, with a slight increase in the fraction of earlier formed stars that end up in present-day disks. Comparing directly with bulge/disk star formation histories from IFU-based work is difficult, because of the substantially smaller (and incomplete) samples currently studied in that level of detail. The main takeaway from an analysis of lenticular galaxies by <cit.> is that bulges in general seem to be pretty old, but that the spread in ages for disks is much broader (and on average younger than bulges). This is consistent with the picture we see from our complete sample of 7,000 galaxies. Even for individual subsets of galaxies however, a quantitative comparison is hampered by the fundamental differences derived in the star formation histories (for example the star formation begins at epochs seemingly earlier than the start of the Universe). These differences are attributable not necessarily to the bulge/disk decomposition method, but to the origin of the stellar populations. <cit.> use ppxf <cit.> to measure stellar populations, which can be very sensitive to implementation treatments such as the use of regularization. 
Various observational studies have identified “young" bulges, the existence of which seems to be at odds with our results at first glance. Such young bulge-like structures have been seen at the low-mass end <cit.>, which would not impact the overall bulge CSFH due to the small amount of stellar mass involved. Young star-forming bulges were identified by <cit.> in an analysis of cluster galaxies. Such galaxies with bulges younger than disks do exist in our sample <cit.>, however as this only accounts for ∼12% of the two-component sample, the impact is insufficient to have an overall effect on the mass-weighted bulge CSFH. Nonetheless in Fig. <ref> we separate the contribution of these younger bulges, and show that it is distinct to that of the rest of the bulge population. Recent work by <cit.> applied SED fitting to multi-wavelength photometric decompositions of 156 high-z galaxies to study the stellar populations of bulges, disks and spheroids. In their work, a bimodality was identified in the formation epoch of bulges, which contributed to the interpretation by <cit.> of two distinct phases of bulge formation in cosmic time. The bulge SFRD we present in Fig. <ref> demonstrates no hint of two distinct phases. Instead, it suggests that bulges are simply a continuously old population, with only a small tail of star formation at recent times (contributed to by the subset of bulges that are younger than their disks). Even when analysing the age distribution of bulges identified in our work <cit.>, we do not see any clear bimodality. The different redshift epochs studied hamper a direct comparison though, as our forensic approach may not be sensitive to two separate epochs that are both quite old. If so, this suggests that any potential high-z bimodality is not reflected in the properties of present-day bulges. Furthermore, the two studies use different SFH parametrisations ( use a declining delayed exponential parametrisation), which can also contribute to differences in recovered ages. §.§ The ellipsoidals: spheroids and bulges As demonstrated by the need to clarify our nomenclature in Sec. <ref>, the physical interpretation of the “ellipsoidal" structures in the Universe (bulges and spheroids) varies across the literature. Are they in fact distinguishable structures with distinct evolutionary pathways (true “eigenstructures"), or are their structural and stellar population properties similar enough to warrant interpretation as a single entity? Our analysis with ProFuse consistently suggests that spheroids and bulges are distinct populations, whether that be due to distinct size–mass relations as shown by <cit.>, the stellar mass distributions, or the overall age of the populations as shown by their distinct contributions to the CSFH[Despite taking care to indicate these CSFH resulting from a range of classification schemes, any contamination of disk-like sources into the spheroidal population could still be influencing the distinction between the bulge and spheroid CSFH. ], as shown in Fig. <ref>. This conclusion is similar to that of <cit.>, who finds that high-mass bulges and ellipticals are offset in the size–mass relation, also concluding that they are in fact separate populations[Despite the similar conclusion, the size–mass relation for bulges measured by <cit.> is different to <cit.>.]. It is difficult to make clear comparisons, as the spheroidal population that we analyse is a broader population than ellipticals studies by . 
The impact of this is seen when comparing (for example) the spheroid CSFH in Fig. <ref> with that of the elliptical CSFH presented in fig. 8 of <cit.>, where the elliptical contribution to the CSFH dominates much earlier than that of the spheroids studied in this work. Our spheroids contain a larger portion of younger galaxies with more active star formation, typically at lower stellar masses. In the past, such galaxies have been separated out into distinct classes, such as <cit.> who treat elliptical and dwarf elliptical (dE) galaxies as separate classes with distinct luminosity functions (see their fig. 3). This was also presented in the luminosity functions of <cit.>, where elliptical galaxies at the high-luminosity end are held distinct from dE and irregular (Irr) galaxies at the low-luminosity end. Contrastingly, studies of SMBH-bulge correlations treat ellipticals and bulges not as separate populations, but as a continuous population described collectively as “spheroids" <cit.>. Ellipsoidals were also treated as a common-origin structure in the simple model by <cit.> (also referred to collectively as simply spheroids in that work). This complexity of interpretation is not clearer in simulations. In analysing the IllustrisTNG simulation, <cit.> study spheroids and bulges as one continuous class. This is contrasted by <cit.>, who separate classical bulges and spheroids in their EAGLE-based analysis. Despite separating these populations into categories that seem more consistent with our observational approach, however, the spheroids in their analysis tend to be much more extended, and with a higher Sérsic index, suggesting that this divide is still not quite the same. In semi-analytic models like Galform <cit.> and Shark <cit.>, pure spheroids are not explicitly formed. Instead, bulge-like structures are formed via two main mechanisms: mergers or disk instabilities. For this reason, ellipsoidals in these models are usually divided into disk-instability and merger-origin bulges, which are again different to any other definition of this structure. Yet the labels assigned to these structures are often “pseudobulges" and “classical spheroids" respectively <cit.>, which again makes direct comparisons between studies difficult. The analysis from this paper, and how it relates to the literature, suggests that finding comparable definitions for disks is relatively straightforward, but that bulges/spheroids are much less clearly defined at this stage. This highlights that our definitions are far from universal, and it is essential that more care is taken in future to characterise the specific elements of the “bulge complex", to facilitate more direct comparisons. §.§ Implications for morphological evolution Our forensic CSFH suggests that around 60% of all stars formed at z∼3 end up in present-day spheroidal systems, with the other 40% having ended up roughly evenly split between disks and bulges. Because our analysis is a forensic one, it simply suggests in which structures stars end up in the present day, not what type of structures they were formed in. Early results from JWST indicate that substantial morphological evolution has occurred since cosmic noon, with morphological classification of 850 galaxies by <cit.> suggesting that >56% of galaxies at z>3 have a visually identifiable disk, with only 38% of galaxies visually identified as spheroids.
To really quantify this evolution, however, complete studies at high redshift are needed to estimate the mass fraction of each structure, not just the fraction of galaxies that include these structures. With the quality of imaging now available from JWST, it is possible to conduct bulge/disk decomposition <cit.>. The contrast between high-z measurements and our forensic view of the early Universe clearly indicates that some morphological change must be occurring. The exact mechanism by which this transformation is occurring is hard to tell from our forensic analysis. We use the outputs from the semi-analytic model Shark <cit.> to further explore the differences between “true" mass fractions of bulges, disks and spheroids with cosmic time, as compared with forensically-derived mass fractions like the ones measured in this work. Seeing how these measurements differ provides some intuition for how better to interpret forensic analyses. This analysis is presented in Fig. <ref>, where our results from the bottom panel of Fig. <ref> are renormalised to show the change in mass fraction for each structural component, relative to the z=0 epoch. The absolute fractions of mass in disks and bulges are quite different in Shark from those seen in GAMA, which is quite possibly caused by the tuning of the semi-analytic model. For this reason we do not focus on the absolute fractions, but on the relative build-up of disk and bulge mass, which is more fundamentally linked to the modelled physics. The ProFuse results and the equivalent forensic view from Shark are shown as solid lines. The “true" view from Shark is shown as dashed lines. What the comparison of the true and forensic mass fractions from Shark shows is that the historical mass in ellipsoidals is likely overestimated with a forensic approach, whereas the historical disk mass is likely underestimated. This points to a morphological evolution pathway, in which stars that are formed in disks likely end up in ellipsoidal structures. We caution that in different semi-analytic models, the “true" picture can differ dramatically from that shown in Shark. For example, <cit.> uses Galform to present the fraction of mass in bulges with cosmic time, and these curves differ substantially. In particular, a much lower fraction of mass exists within bulges at the present day, but rises substantially beyond z=2. § CONCLUSIONS We have presented here the first star formation history-based analysis to originate from an application of ProFuse, which simultaneously models the two-dimensional structure and SED of the components of galaxies from multiwavelength imaging. The analysis presented has the following, clear conclusions: * The capacity for ProFuse to produce a realistic 2D model of galaxies across multiple wavelengths simultaneously is remarkable, as demonstrated most instinctively by the colour models and residuals shown in Figs. <ref> and <ref> (and also <ref>, <ref>, and <ref>). * Using the outputs from ProFuse alone through application of multiple structural configurations[We use a single component with a free Sérsic index (FS), a bulge+disk system (BD), a disk+PSF central component system (PD), or a disk+disk system (DD).], we show that it is possible to automatically define a best-fitting model per individual galaxy. We present the differences in these best models as compared with visual classifications, and discuss the subtleties of any discrepancies.
This demonstration shows that we have a pathway for moving beyond visual inspection of galaxies for the purpose of structural classification. * We showed in this paper that no two classification schemes are fully consistent, and that there are very predictable reasons why this is the case. This highlights why literature results based on galaxy decompositions may differ. Rather than justifying why one method is superior to another, we took the approach of using three separate classification schemes, and using the differences to estimate uncertainty in the scientific results. We argue that this provides a fairer estimate of true uncertainties, while also lending greater confidence to the final results. * Using the star formation histories for individual galaxy components, we measure forensically the cosmic star formation history. Remarkably, the ProFuse CSFH is almost entirely consistent with that derived using global SED fitting of galaxies from ProSpect <cit.>, and also the in-situ measured CSFH from high-z galaxies <cit.>. In particular, we find that the last 8 Gyr of the CSFH are measured to be perfectly consistent across all methods. This consistency highlights not only that we have an excellent understanding of this fundamental property of our Universe, but also that the techniques we are using to study galaxies are producing consistent results, which is essential in trusting the derived properties themselves. * Finally, we present the component-wise CSFH, divided by the contributions of present-day disks, bulges, and spheroids. Bulges contain the oldest stars in the Universe, with half of all bulge stars having formed by a lookback time of 11.8 Gyrs. At the present day, a negligible fraction of all star formation occurs in bulges. Disks tell a very different story, accounting for 60-90% of all present-day star formation, with half of disk mass in place by a much later lookback time of 7.9 Gyr. Spheroids are made of stars that have formed relatively consistently with the overall CSFH, being only slightly older than the average stars, with half the mass formed by 10.8 Gyr (as compared with an overall half-mass epoch of 9.9 Gyr). Spheroids today still account for 10-40% of all star formation, whereas bulges posess almost no star formation in the z=0 Universe. § DATA AVAILABILITY The data used as an input to this analysis are all publicly available via the GAMA webpage[<http://www.gama-survey.org>]. The data presented in this paper can be made available upon reasonable request. § ACKNOWLEDGEMENTS SB and ASGR acknowledge support from the ARC Future Fellowship scheme (FT200100375). SPD acknowledges support from the ARC Laureate Fellowship scheme (FL220100191). LJWD and RHWC acknowledge support from the ARC Future Fellowship scheme (FT200100055). This work was supported by resources provided by the Pawsey Supercomputing Centre with funding from the Australian Government and the Government of Western Australia. GAMA is a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope. GAMA was funded by the STFC (UK), the ARC (Australia), the AAO, and the participating institutions. GAMA photometry is based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 179.A-2004, ID 177.A-3016 We have used R <cit.> and python for our data analysis, and acknowledge the use of Matplotlib <cit.> for the generation of plots in this paper. 
This research made use of Astropy,[<http://www.astropy.org>] a community-developed core python package for astronomy <cit.>, Pandas <cit.>, and NumPy <cit.>. § PROFUSE INDIVIDUAL FITTING OUTPUTS In the main body of the text, we presented some examples of the colour images, models and residuals, for a handful of galaxies that have very little residual structure (Fig. <ref>), and also galaxies that have clear bar structures in their residual (Fig. <ref>). As a further indication of the wealth of information extractable through these residual visualisations (a task that we will leave for follow-up studies), we have collated a number of examples for different features. Fig. <ref> shows examples of galaxies where the different colours of the modelled structural components have created clear colour gradients in the modelled galaxy. When comparing this to the image, this seems to be sensible in most cases. An additional type of feature identified in the examples shown in Fig. <ref> is coloured asymmetry, usually on either side of the galaxy long axis, where one side is noticeably redder than the other. While we do not explore this feature, we suspect it is likely a signature of dust in the galaxy. Finally, an assortment of miscellaneous additional features is presented in Fig. <ref>. This includes features like very thin disks, rings, and spiral arms. These residuals are generally not an indication of a poor fit, but rather a sign that there is substructure present that has not been accounted for by the four ProFuse model configurations applied in this work. § ROBUSTNESS OF FULLY AUTOMATED PROFUSE CLASSIFICATIONS As a demonstration of how well the purely-ProFuse classifications work (as opposed to the visual classification), we show a modified version of Fig. <ref> in Fig. <ref>.
http://arxiv.org/abs/2307.02934v1
20230706115232
Simple Anosov representations of closed surface groups
[ "Nicolas Tholozan", "Tianqi Wang" ]
math.GT
[ "math.GT", "math.DG", "math.DS" ]
École Normale Supérieure PSL, CNRS nicolas.tholozan@ens.fr National University of Singapore twang@u.nus.edu Wang was partially supported by the NUS-MOE grant R-146-000-270-133 and by the Merlion PhD program 2021 We introduce and study simple Anosov representations of closed hyperbolic surface groups, analogous to Minsky's primitive stable representations of free groups. We prove that the set of simple Anosov representations into (d,) with d ⩾ 4 strictly contains the set of Anosov representations. As a consequence, we construct domains of discontinuity for the mapping class group action on character varieties which contain non-discrete representations. Simple Anosov representations of closed surface groups Tianqi Wang August 1, 2023 ====================================================== § INTRODUCTION Given a finitely generated group Γ and a complex linear group G, the group of outer automorphisms (Γ) of Γ acts on the character variety χ(Γ,G), the GIT quotient of (Γ,G) under the conjugation action of G, by precomposition. This action is of primordial interest in various topics, such as the study of locally homogeneous geometric structures on manifolds, or isomonodromic deformations of complex differential equations. When Γ has a large automorphism group (e.g. when Γ is a free group or a surface group), the action of (Γ) on character varieties can be very chaotic, and a first interesting question is whether one can construct large domains of discontinuity for this action. A broad family of examples have been produced by the theory of Anosov representations of hyperbolic groups, see <cit.>. Anosov representations are quasi-isometrically embedded (they are equivalent for (2,)) and stable under small deformations. They thus form open domains of character varieties on which (Γ) acts properly discontinuously (see <cit.>). However, these domains of discontinuity are not necessarily maximal. In <cit.>, Minsky constructed examples of so-called primitive stable representations of a free group into (2,), and proved that they form an open domain of discontinuity which contains strictly the set of quasi-isometric embeddings. Minsky's construction has been generalized to higher rank by Kim–Kim <cit.> and by the second author in <cit.> who developed the notion of primitive Anosov representation. Roughly speaking, primitive Anosov representations of a non-abelian free group are representations with an Anosov behaviour “in restriction to” primitive elements. More precisely, the second author proved in <cit.> that primitive Anosov representations are representations such that the associated local system over the geodesic flow of the free group admits a dominated splitting in restriction to the closure of the union of all closed orbits corresponding to primitive elements. This motivates the more general study of restricted Anosov representations, initiated in <cit.>, which we carry on in the present paper. With this point of view, primitive Anosov representations have a natural analog for closed surface groups, which we call simple Anosov: A representation ρ of the fundamental group Γ of a closed connected hyperbolic surface S into (d,) is called simple k-Anosov if the local system associated to ρ over the geodesic flow of S admits a dominated splitting of rank k in restriction to the closure of the union of simple closed geodesics. 
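For the reader's convenience, we spell out here the inequality that “dominated splitting of rank k" refers to (this display is an editorial anticipation of the precise definition given in Section <ref>, written in LaTeX notation): one asks for a continuous, flow-invariant splitting E_ρ = E^s ⊕ E^u of the flat bundle with E^s of rank k, a continuous metric ‖·‖, and constants C, λ > 0 such that
\[
  \frac{\|\phi^t(v)\|}{\|\phi^t(w)\|} \;\leqslant\; C\, e^{-\lambda t}\, \frac{\|v\|}{\|w\|}
  \qquad \text{for all } t \geqslant 0
\]
and all nonzero vectors v ∈ E^s and w ∈ E^u over the same point of the restricted flow.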
We will call the closure of simple closed geodesics the Birman–Series set, in reference to Birman and Series <cit.>, who proved that this set is “sparse” in the unit tangent bundle of the hyperbolic surface (it has Hausdorff dimension 1). We refer to Section <ref> for precisions about the above definition (in particular, the definition of dominated splitting of rank k). We prove in Section <ref> the following expected property: Let Γ be the fundamental group of a closed connected hyperbolic surface S. Then the set of simple Anosov representations of Γ into (d,) modulo conjugation is an open domain of discontinuity for the action of (Γ)= (S). For non-oriented surfaces, Lee proved in <cit.> that the domain of simple Anosov representations into (2,) strictly contains the domain of Anosov (i.e. quasifuchsian) representations. In contrast, for oriented surfaces, the domain of quasifuchsian representations is known to be a maximal domain of discontinuity by results of Lee <cit.> and Souto–Storm <cit.>. The main goal of the present paper is to construct new examples of simple Anosov representations in higher rank: Let Γ be the fundamental group of a closed connected hyperbolic surface. Then, for every d≥ 2, there exists a simple d-Anosov representation of Γ into (2d,) in the boundary of the domain of d-Anosov representations. Taking a “generic deformation”, we obtain the following: Let Γ be the fundamental group of a closed connected hyperbolic surface. Then, for every d≥ 2, there exist simple d-Anosov representations of Γ into (2d,) with analytically dense image. As a consequence, we obtain domains of discontinuity of the mapping class group action that properly contain the domain of Anosov representations. Let Γ be the fundamental group of a closed connected hyperbolic surface. Then, for every d≥ 2, there exists an open domain of discontinuity for the action of (Γ) on (Γ, (2d,))/(2d,) which contains points of the boundary of the domain of Anosov representations. In particular, the domain of Anosov representations modulo conjugation is not a maximal domain of discontinuity. §.§ Further results and open questions §.§.§ Simple P-Anosov representations into G More generally, there is a notion of simple P-Anosov representation into G, for any pair of a semisimple (real or complex) linear group G and a parabolic subgroup P. Here we will focus on simple P_d-Anosov representations into (2d,), where P_d is the stabilizer of a d-dimensional subspace of ^d, but one could easily elaborate on these to build more examples. For instance, the direct sum of one of our representations with a trivial representation will give simple d-Anosov representations into (d',) for any d'≥ 2d. Another example is the following: Among the exotic simple 2-Anosov representations into (4,) that we construct, some take values in the complex symplectic group (4,). Through the isomorphism (4,) ≃(5,), one obtains simple 1-Anosov representations into (5,)⊂(5,). These constructions, however, do not seem to exhaust all the possibilities. In particular, it seems that our construction cannot provide simple Borel Anosov representations, i.e. simple Anosov with respect to a minimal parabolic subgroup. This raises the following question: Does there exist a representation of a closed oriented surface group into (d,), d≥ 2, that is simple Borel Anosov but not Borel Anosov? In particular, does there exist a representation of a closed oriented surface group into (3,) which is simple Anosov but not Anosov?
On a related topic, Maloni, Martone, Mazzoli and Zhang have initiated in <cit.> the study of representations which are Borel Anosov in restriction to a fixed lamination. §.§.§ Other mapping class group invariant subflows We develop more generally the basic properties of Anosov representations in restriction to a subflow of the geodesic flow. In particular, any subflow which is globally preserved by the mapping class group gives rise to a domain of discontinuity for the mapping class group on the character variety. We mention various examples in Section <ref>. There, we also prove that any closed subflow of the geodesic flow that is preserved by a finite index subgroup of the mapping class group contains the Birman–Series set. As a consequence, all these other domains of discontinuity associated to subflows are contained in the domain of simple Anosov representations. This provides evidence for a positive answer to the following question: Is the domain of simple Anosov representations a maximal domain of discontinuity for the mapping class group action on a character variety of a surface group? §.§ Structure of the paper In Section <ref>, we introduce a general notion of Γ-flow of a finitely generated group, of which the main examples are subflows of geodesic flows of hyperbolic or relatively hyperbolic groups. In Section <ref>, we develop the general notion of Anosov representation in restriction to a Γ-flow, with special emphasis on relatively Anosov and simple Anosov representations. This section contains several general results of independent interest which will make the main construction rather natural. Section <ref> presents our main construction of exotic simple Anosov representations. In a word, these are obtained as the induced representations of a geometrically finite but not quasifuchsian representation into (2,) of the fundamental group of a covering of degree d. Finally, Section <ref> develops the applications to mapping class group actions on character varieties. §.§ Acknowledgements We thank Samuel Bronstein, Frédéric Paulin, Juan Souto, Tengren Zhang and Feng Zhu for references and interesting discussions related to our work. § GEODESIC FLOWS OF HYPERBOLIC AND RELATIVELY HYPERBOLIC GROUPS §.§ Hyperbolic and relatively hyperbolic groups Let (X,d_X) be a metric space. Recall that a map ℓ from (resp. _≥ 0, [a,b]⊂) to X is a geodesic (resp. geodesic ray, geodesic segment) if d_X(ℓ(t),ℓ(s)) = | t-s| for all s,t in the interval of definition. The space X is called geodesic if any two points are joined by a geodesic segment, and called proper if closed balls are compact. We say that X is taut if every point is at uniformly bounded distance from a bi-infinite geodesic. When (X,d_X) is proper and geodesic, we say that it is δ-hyperbolic if for any geodesic triangle in X, there exists a point at distance at most δ from all three sides of the triangle, which is called a center of the triangle. The (Gromov) boundary of a δ-hyperbolic space X, denoted by ∂_∞ X, is defined to be the set of asymptotic classes of geodesic rays. When (X,d_X) is a δ-hyperbolic space, a horofunction about a boundary point p∈∂_∞ X is a function h: X→ such that for any x,x'∈ X, |(h(x)-d_X(x,x_0))-(h(x')-d_X(x',x_0))| is uniformly bounded, where x_0 is a center of the ideal triangle with vertices x, x' and p. A subset B⊂ X is called a horoball centered at p if there exists a horofunction h about p, such that B = {x∈ X| h(x)< 0}.
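As a concrete illustration of the last two definitions (a standard example added here, not taken from the original text), consider the upper half-plane model of the hyperbolic plane. In LaTeX notation, the Busemann function about the boundary point ∞,
\[
  h(z) \;=\; -\log\big(\operatorname{Im} z\big), \qquad z \in \mathbb{H}^2,
\]
is a horofunction about ∞ in the above coarse sense, and the associated horoball B = {h < 0} is the region {\operatorname{Im} z > 1}. Horoballs centered at a boundary point p ≠ ∞ are Euclidean disks tangent to the real axis at p.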
Finally, a finitely generated group Γ is (Gromov) hyperbolic if it admits a properly discontinuous and cocompact isometric action on a δ-hyperbolic space (X,d_X) (for some constant δ). The space X is called a Gromov model of Γ. The notion of Gromov hyperbolic groups is a far-reaching generalization of convex-cocompact Kleinian groups. Similarly, the notion of relatively hyperbolic groups generalizes that of geometrically finite Kleinian groups. We follow here the definition given by Gromov <cit.>. Let Γ be a finitely generated group and let Π = {Π_i}_i∈ I be a finite collection of finitely generated, infinite subgroups of Γ. Let Π^Γ = {γΠ_iγ^-1 | γ∈Γ,Π_i∈Π} be the collection of conjugates of the subgroups in Π. The pair (Γ,Π) is called a relatively hyperbolic pair, and Γ is called hyperbolic relative to Π, if there exists a δ-hyperbolic space (X,d_X) with a properly discontinuous isometric action of Γ such that (1). X is either taut or a horoball; (2). There exists ℬ={B_i}_i∈ I, a collection of horoballs of X, such that ℬ^Γ = {γ B_i | γ∈Γ, i∈ I} is a collection of disjoint open horoballs with γΠ_iγ^-1 the stabilizer of γ B_i for each γ∈Γ and i∈ I; (3). Γ acts on X^th = X ∖⋃_i∈ I,γ∈Γγ B_i, the thick part of X, cocompactly. The space X is called a Gromov model of (Γ,Π). The Gromov boundary of X is called the Bowditch boundary of (Γ,Π), denoted by ∂_∞ (Γ,Π) = ∂_∞ X. The subgroups in Π^Γ are called peripheral subgroups of Γ and the centers of horoballs in ℬ^Γ are called parabolic points. While two Gromov models X and X' of a relatively hyperbolic pair (Γ,Π) are not necessarily quasi-isometric, there always exists a Γ-equivariant homeomorphism between their Gromov boundaries (see <cit.>). The Bowditch boundary ∂_∞ (Γ,Π) is thus well-defined independently of choice of the Gromov models. Recall the definition of a convergence group action. Let Z be a metrizable compact set. An action of a discrete group Γ by homeomorphisms on Z is a convergence group action if for every unbounded sequence (γ_n)_n∈ in Γ, there exists a subsequence (γ_n_k)_k∈ and a pair of points (x_-,x_+)∈ Z^2 such that γ_n_k z → x_+ as k→ +∞ for all z∈ K∖{x_-}. The points x_- and x_+ are called respectively the repelling and attracting points of the subsequence (γ_n_k)_k∈. In this case, we say a point p ∈ Z is (a) a conical limit point if there exists a sequence (γ_n)_n∈ with attracting point p and repelling point distinct from p; (b) a bounded parabolic point if the stabilizer of p in Γ is infinite and acts properly discontinuously and cocompactly on Z∖{p}. We say that the Γ-action on Z is geometrically finite if Z consists of only conical limit points and bounded parabolic points. If Γ is a discrete group of isometries of a δ-hyperbolic space (X,d_X), then the action of Γ on ∂_∞Γ is a convergence group action. Moreover, a point p∈∂_∞ X is a conical limit point if and only if there exists a sequence (γ_n)_n∈ such that, for some (hence any) point o∈ X and some (hence any) geodesic ray ℓ converging to p, the sequence (γ_n o)_n∈ converges to p and remains at bounded distance from ℓ. When (Γ,Π) is a relatively hyperbolic pair, Γ acts on ∂_∞(Γ,Π) geometrically finitely (see <cit.>). Conversely, Yaman <cit.> proved the following theorem. Let Γ be a finitely generated group with a convergence group action on a perfect metrizable compact space Z. If the Γ-action is geometrically finite, then there are finitely many orbits {Γ p_i}_i∈ I of bounded parabolic points, and Γ is hyperbolic relative to Π = {_Γ(p_i)}_i∈ I. 
Moreover, Z is Γ-equivariantly homeomorphic to the Bowditch boundary of (Γ,Π). A relatively hyperbolic pair (Γ,Π) is called elementary if ∂_∞ (Γ,Π) contains at most two points. This happens when the Π_i are finite, and Γ is finite or virtually isomorphic to , and when Π consists of only one subgroup of Γ of finite index. Otherwise, ∂_∞ (Γ,Π) is a perfect metrizable compact set and the pair (Γ, Π) is called non-elementary. From now on, we always assume that the hyperbolic groups and hyperbolic pairs we consider are non-elementary. We will be interested in the situation where a Gromov hyperbolic group also admits a relatively hyperbolic structure. This was studied by Osin <cit.>, Bowditch <cit.>, Tran <cit.>, Manning <cit.> etc. Let Γ be a group, a finite collection Π = {Π_i}_i∈ I of subgroups of Γ is called almost malnormal if for any i,j∈ I, Π_i∩γΠ_jγ^-1 is infinite only when i=j and γ∈Π_i. Let Γ be a nonelementary hyperbolic group and Π a finite, almost malnormal collection of quasiconvex subgroups. Then (Γ,Π) is relatively hyperbolic. Conversely, if Γ is both hyperbolic and hyperbolic relative to Π, then Π is an almost malnormal collection of quasiconvex subgroups. In the situation of Theorem <ref>, the relation between the Gromov and Bowditch boundaries is described in the following way. Let Γ be a nonelementary hyperbolic group and let Π be a finite almost malnormal collection of quasiconvex subgroups. Then there is a Γ-equivariant continuous surjective map η:∂_∞(Γ)→∂_∞ (Γ,Π) , such that, (a) the preimage of a conical limit point by η is a singleton; (b) the preimage of the bounded parabolic point γ p_i fixed by γΠ_iγ^-1 is γ∂_∞Π_i ⊂∂_∞Γ for any i∈ I and γ∈Γ. Informally, this theorem states that the Bowditch boundary of (Γ,Π) is obtained from the Gromov boundary of Γ by contracting the Gromov boundary of each conjugate of some Π_i, i∈ I to a point. §.§ Geodesic flows We now introduce a very general notion of flow space for a finitely generated group Γ, of which an important source of examples will be given by geodesic flows of Gromov models of relatively hyperbolic groups. Let Γ be a discrete group. A Γ-flow is the data of a Hausdorff topological space Y with a continuous flow ϕ and a properly discontinuous action of Γ on Y that commutes with ϕ. This Γ-flow is called cocompact if the quotient Γ\ Y is compact. A Γ-subflow of a Γ-flow (Y,ϕ) is a Γ-invariant subflow of (Y,ϕ). If Γ' is a subgroup of Γ, then every Γ-flow is automatically a Γ'-flow by restriction of the action. We now introduce a weak notion of morphism between Γ-flows. Importantly, while such morphisms map orbits of one flow to orbits of the other, we do not require that they preserve the time of the flow. A morphism σ between Γ-flows (Y_1,ϕ_1) and (Y_2,ϕ_2) is a Γ-equivariant continuous map σ:Y_1→ Y_2 for which there exists a map c_σ: Y_1×→ such that σ(ϕ_1^t(y))= ϕ_2^c_σ(y,t)(σ(x)) for all (y,t)∈ Y_1 ×. Such a morphism σ is quasi-isometric if there exist constants λ⩾ 1 and ϵ⩾ 0, such that for any y∈ Y_1, the map c_σ(y,·):→ is (λ,ϵ)-quasi-isometric and c(y,t)→ +∞ as t→ +∞. A morphism σ is an isomorphism if it is a homeomorphism. In that case, σ^-1 is automatically a morphism from (Y_2,ϕ_2) to (Y_1,ϕ_1). The map c_σ: Y_1×→ in the above definition is Γ-invariant and satisfies the cocycle rule c_σ(y,t+s)=c_σ(y,t)+ c_σ(ϕ_1^t(y),s) for all y∈ Y_1 and all s,t∈. 
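A minimal derivation of this cocycle rule (added here for completeness; it uses only the defining relation of c_σ, and identifies flow times under the assumption that the orbit map t ↦ ϕ_2^t(σ(y)) is injective):
\[
  \phi_2^{c_\sigma(y,t+s)}(\sigma(y))
  \;=\; \sigma\big(\phi_1^{t+s}(y)\big)
  \;=\; \sigma\big(\phi_1^{s}(\phi_1^{t}(y))\big)
  \;=\; \phi_2^{c_\sigma(\phi_1^t(y),s)}\big(\sigma(\phi_1^{t}(y))\big)
  \;=\; \phi_2^{c_\sigma(\phi_1^t(y),s)+c_\sigma(y,t)}(\sigma(y)),
\]
so that c_σ(y,t+s) = c_σ(y,t) + c_σ(ϕ_1^t(y),s) on non-periodic orbits (and up to the period otherwise).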
If Γ acts properly discontinuously by isometries on a δ-hyperbolic space (X,d_X), a direct way to construct a Γ-flow is to consider the collection of parametrized geodesics of X. Define 𝒢(X)={ℓ | ℓ:→ X geodesic} , equipped with the flow ψ defined by ψ^t(ℓ): s ↦ℓ(s + t) . Then (𝒢(X), ψ) is a Γ-flow. One can define the metric d' on 𝒢(X) by d'(ℓ,ℓ') = ∫_e^-t/2d_X(ℓ(s),ℓ'(s))ds , for any ℓ,ℓ'∈𝒢(X) and verify that the projection π':𝒢(X)→ X defined by π'(ℓ)=ℓ(0) is a (X)-equivariant quasi-isometry. The drawback of this construction is that quasi-isometries between hyperbolic spaces do not induce morphisms of flows. For our purposes, we will need a more “canonical” notion of geodesic flow, typically one for which pairs of distinct points in the boundary at infinity define a unique orbit of the flow. Such a flow is provided to us by a general theorem of Mineyev. We denote by A^(2) the set of ordered pairs of distinct points in a set A. Let (X,d_X) be a taut hyperbolic metric space and Γ a discrete subgroup of (X). Then there exists a metric d on the topological space ℱ(X)=∂_∞^(2)X × and a continuous cocycle c:Γ×∂_∞ X^(2)→ with the following properties: (1). The action of Γ on ℱ(X) given by γ (z_-,z_+,t) = (γ z_-, γ z_+, t+ c(z_-,z_+,γ)) is properly discontinuous and isometric; (2). There is a Γ-equivariant quasi-isometry π:(ℱ(X),d)→ (X,d_X); (3). There exist constants λ⩾ 1 and ϵ⩾ 0 such that for any (z_-,z_+) ∈∂_∞ X^(2), the curve {π(z_-,z_+,t), t∈} is a (λ,ϵ)-quasi-geodesic in X with backward endpoint z_- and forward endpoint z_+. Note that the definition of the action of Γ on ℱ(X) implies that it commutes with the flow ϕ defined by ϕ^t(z_-,z_+,s) = (z_-,z_+,t+s) . Hence (ℱ(X),ϕ) is a Γ-flow. When Γ is a hyperbolic group and X is a Gromov model of Γ (e.g. its Cayley graph with respect to a finite generating set), we define geodesic flow of Γ to be the space ℱ(Γ) ℱ(X) equipped with the flow ϕ and the action of Γ. This Γ-flow is well-defined up to quasi-isometric isomorphisms. When X is a Gromov model of a relatively hyperbolic pair (Γ,Π) (e.g. its Groves–Manning cusp space as defined in <cit.>), we call geodesic flow of (Γ,Π) the space ℱ(Γ,Π) ℱ(X) equipped with the flow ϕ and the action of Γ. Now, it is only well-defined up to isomorphisms of Γ-flow, since different Gromov models of (Γ,Π) may not be quasi-isometric to each other. The next proposition is independent to the choice of the Gromov models for defining ℱ(Γ,Π). Let Γ be a hyperbolic group and Π a finite almost malnormal collection of quasi-convex subgroups. Then there is a morphism of Γ-flows σ: ℱ(Γ) ∖⋃_i ∈ I,γ∈Γγℱ(Π_i) ⟶ℱ(Γ,Π) that extends the quotient map η:∂_∞Γ→∂_∞(Γ,Π) from Theorem <ref>. Moreover, the restriction of σ to any cocompact Γ-subflow is a quasi-isometric morphism. Let X be any Gromov model of the relatively hyperbolic pair (Γ,Π). We fix ℱ(Γ,Π) to be the Γ-flow ℱ(X). Consider the space U = {((z_-,z_+,s_1),(η z_-,η z_+,s_2)): z_-,z_+∈∂_∞Γ, s_1, s_2 ∈, η z_- ≠η z_+ }⊂ℱ(Γ)×ℱ(Γ,Π) There is a natural Γ-action on U by coordinates since η is Γ-equivariant. Note that the projection to the first coordinate maps U to ℱ(Γ) ∖⋃_i ∈ I,γ∈Γγℱ(Π_i) by Theorem <ref>. This projection factors to a fiber bundle p_1: Γ\ U →Γ\(ℱ(Γ) ∖⋃_i ∈ I,γ∈Γγℱ(Π_i)) with fibers homeomorphic to . Let σ̂ be a continuous section of p_1. Lifting σ̂ to ℱ(Γ) ∖⋃_i ∈ I,γ∈Γγℱ(Π_i) and composing with the projection to the second coordinate, we obtain a morphism of Γ-flows σ: ℱ(Γ) ∖⋃_i ∈ I,γ∈Γγℱ(Π_i) ⟶ℱ(X(Γ,Π)) which extends the boundary map η. 
It remains to prove that σ is quasi-isometric in restriction to any cocompact Γ-subflow. Let us thus introduce the cocyle c: (ℱ(Γ) ∖⋃_i ∈ I,γ∈Γγℱ(Π_i))×⟶ such that σ(ϕ^t(z))=ψ^c(z,t)(σ(z)) for any z∈ℱ(Γ) ∖⋃_i ∈ I,γ∈Γγℱ(Π_i) and t∈. Let K be a cocompact Γ-subflow of ℱ(Γ) ∖⋃_i ∈ I,γ∈Γγℱ(Π_i). Since c satisfies the cocycle rule c(x,t+s) = c(x,t) + c(ϕ^t(x),s) , in order to prove that it is quasi-isometric, it suffices to verify the following statement. There exist constants T,m,M>0, such that m⩽ c(z,T) ⩽ M for any z∈K. Let K be a compact subset of ℱ(Γ) ∖⋃_i ∈ I,γ∈Γγℱ(Π_i) such that Γ K = K. Since c is continuous and Γ-invariant, we only need to show that there exists a constant T>0 such that c(z,T)>0 for any z∈ K. We argue by contradiction. Suppose that there is a sequence (z^n)_n∈⊂ K and a sequence (T_n)_n∈⊂_+ with T_n→ +∞ as n→ +∞, such that c(z^n,T_n)⩽ 0. Up to a subsequence, we may assume z^n=(z^n_-,z^n_+,s_n) converges to z=(z_-,z_+,s) ∈ K as n→ +∞. For each n, there exists γ_n∈Γ such that γ_n^-1ϕ^T_n(z^n)∈ K by Γ K=K. Then for a base point z^0∈ K, we have γ_n z^0 → z_+ as n→ +∞ since d(γ_n z^0,ϕ^T_n(z^n)) = d( z^0,γ_n^-1ϕ^T_n(z^n)) is bounded by the diameter of K and ϕ^T_n(z^n)→ z_+ as n→ +∞. Therefore, the sequence (γ_n)_n∈ has attracting point z_+ for the convergence group action of Γ on ∂_∞Γ. We apply the argument above by replacing z^n by σ(z^n) in the compact set σ(K), and replacing T_n by c(z^n,T_n). Since c(z^n,T_n)⩽ 0, after further extraction, either γ_n σ(z^0) is bounded in ℱ(X(Γ,Π)) or γ_n σ(z^0)→η(z_-) as n→ +∞, depending on whether c(z^n,T_n) is bounded or diverges to -∞. Since γ_n is unbounded and Γ acts properly on ℱ(X), we conclude that γ_n σ(z^0)→η(z_-) as n→ +∞ and η(z_-) is the attracting point of (γ_n) in ∂_∞(Γ,Π). On the other hand, since η is Γ-equivariant, it must send the attracting point of γ_n in ∂_∞Γ to its attracting point in ∂_∞ (Γ, Π). We deduce that η(z_+) = η(z_-). By Proposition <ref>, this implies that z_- and z_+ both belong to γ∂_∞ P_i for some i∈ I and some γ∈Γ, contradicting the assumption that z∈ K ⊂ℱ(Γ) ∖⋃_i ∈ I,γ∈Γγℱ(Π_i) . § RESTRICTED ANOSOV AND RELATIVELY ANOSOV REPRESENTATIONS We always fix the field to be or . The notion of Anosov representation of a hyperbolic group Γ admits many equivalent definitions. One of them, which is close to Labourie's original definition and was developed by Bochi–Potrie–Sambarino <cit.>, states that a linear representation ρ:Γ→(d,) is k-Anosov if the associated flat bundle admits a dominated splitting over the geodesic flow of Γ. The purpose of this section is to investigate the generalization of this definition when the geodesic flow is replaced by any Γ-flow. §.§ Linear representations and dominated splittings Let Γ be a countable group and let (Y,ϕ) be a Γ-flow. For a representation ρ:Γ→(d,), we consider the flat bundle E_ρ(Y) = Γ\ (Y×^d), where the Γ-action is given by γ(y,v)=(γ y, ρ(γ)v) for all γ∈Γ, y∈ Y and v∈^d. The flow ϕ on Y lifts to a flow on Y×^d by parallel transformations, which we still denote by ϕ, namely, ϕ^t(y,v) = (ϕ^t(y),ϕ^t_y(v)) = (ϕ^t(y), v) for all y∈ Y, v∈^d and t∈. This flow commutes with the Γ-action and thus factors to a flow on E_ρ(Y), which we again denote by ϕ. 
A representation ρ of Γ into (d,) is k-Anosov in restriction to the Γ-flow (Y,ϕ) if there exists a metric ‖·‖ on the vector bundle E_ρ(Y) such that E_ρ(Y) admits a dominated splitting of rank k, that is, a continuous ϕ-invariant decomposition E_ρ(Y) = E_ρ^s ⊕ E_ρ^u with E_ρ^s of rank k, for which there exist constants C,λ >0, such that ‖ϕ^t_y(v)‖‖ϕ^t_y(w)‖⩽ C e^-λ t‖ v‖‖ w‖ , for all y∈Γ\ Y, t∈_+ and all non zero vectors v ∈ (E^s_ρ)_yand w∈ (E^u_ρ)_y. We respectively call E^s_ρ and E^u_ρ the stable direction and unstable direction of E_ρ(Y) with respect to the metric ‖·‖. When the Γ-flow (Y,ϕ) is cocompact, the dominated splitting is unique and does not depend on the choice of the metric, since any two metrics on E_ρ(Y) are uniformly bi-Lipschitz. By abuse of notations, we will write Y×^d = E^s_ρ⊕ E^u_ρ to represent the lift of the dominated splitting over Y, as a decomposition of Y×^d into Γ-invariant, ϕ-invariant subbundles. The metric ‖·‖ will lift to a Γ-invariant one, still denoted by ‖·‖. We call it a dominated splitting of Y×^d of rank k associated to ρ and ‖·‖. Let ‖·‖_0 denote the standard metric on ^d. Since Y×^d is a trivial bundle, for any metric ‖·‖, there exists a continuous map A: Y→(d,), such that at any point y∈ Y, the metric ‖·‖ is expressed by ‖ A(y)·‖_0. We will say that ‖·‖ is of unit volume if there exists such a map A takes values in (d,). Since a rescaling of the metric preserves the ratio of the norms of two vectors, we may assume without loss of generality that the metric in Definition <ref> is always of unit volume. One of the good properties of the restricted Anosov definition is that it is preserved under pull-back by quasi-isometric morphisms of Γ-flows. Let σ: (Y_2, ϕ_2) → (Y_1, ϕ_1) be a quasi-isometric morphism of Γ-flows, and let ρ: Γ→(d,) be a k-Anosov representation in restriction to (Y_1,ϕ_1). Then ρ is k-Anosov in restriction to (Y_2,ϕ_2). Let c: Y×→ be the cocycle such that σ(ϕ_2^t(y)) = ϕ_1^c(y,t)(σ(y)) . By definition of a quasi-isometric morphism, there exist constants a>0,b>0 such that c(y,t) ≥ a t -b for all y∈ Y and all t≥ 0. The morphism σ naturally lifts to a continuous bundle morphism σ: E_ρ(Y_1) → E_ρ(Y_2). Let ‖·‖ be a continuous metric for which E_ρ(Y_1) has a dominated splitting of rank k E_ρ(Y_1) = E_ρ^s(Y_1) ⊕ E_ρ^u(Y_1) . Pulling back this splitting by σ, we get a continuous ϕ_1-invariant splitting E_ρ(Y_2) = E_ρ^s(Y_2) ⊕ E_ρ^u(Y_2) . Let us still denote by ‖·‖ the pull-back by σ of the metric on E_ρ(Y_1). With these choices, we have for all y∈Γ∖ Y_2 and all v∈ E_ρ^s(Y_2), w∈ E_ρ^u(Y_2): ‖ϕ_2^t(v)‖/‖ϕ_2^t(w)‖ = ‖σ(ϕ_2^t(v))‖/‖σ(ϕ_2^t(w))‖ = ‖ϕ_1^c(y,t)(σ (v))‖/‖ϕ_1^c(y,t)(σ(w))‖ ⩽ C e^-λ c(y,t)‖σ(v)‖/‖σ(w)‖ ⩽ C e^b e^-aλ t‖ v‖/‖ w‖ , showing that the splitting E_ρ^s(Y_2) ⊕ E_ρ^u(Y_2) is dominated. Another good property of the notion of restricted Anosov representation is its stability under passing to a subgroup. Let Γ be a countable group, (Y,ϕ) a Γ-flow and Γ' a subgroup of Γ. Then (Y,ϕ) can be seen as a Γ'-flow by resctricting the Γ-action to Γ'. Let ρ: Γ→(n,) be a linear representation. (1)If ρ is k-Anosov in restriction to (Y,ϕ). Then ρ|_Γ' is a k-Anosov in restriction to (Y,ϕ). (2) Conversely, assume that (Y,ϕ) is cocompact and Γ' has finite index in Γ. If ρ|_Γ' is k-Anosov in restriction to (Y,ϕ), then ρ is k-Anosov in restriction to (Y,ϕ). For part (1), let ‖·‖ be a Γ-invariant norm on E_ρ(Y) and let E_ρ(Y)= E_ρ^s(Y)⊕ E_ρ^u(Y) be a Γ-invariant and ϕ-invariant splitting of rank k that is dominated for ‖·‖. 
Then the splitting and the norm are in particular Γ'-invariant and define a dominated splitting over Γ'∖ Y. Now we prove part (2). By (1) we can restrict to a smaller subgroup of finite index and assume that Γ' is normal in Γ. Let E_ρ(Y) = E_ρ^s(Y)⊕ E_ρ^u(Y) be a ϕ-invariant and Γ'-invariant splitting of rank k over Y, which is dominated for a Γ'-invariant norm ‖·‖. Since Γ' acts cocompactly on Y, the domination condition does not depend on the choice of the norm and we can assume without loss of generality that ‖·‖ is in fact Γ-invariant. For γ∈Γ, consider the push-forward of the splitting, defined by γ_* E_ρ^s(Y)_x⊕γ_* E_ρ^u(Y)_x = ρ(γ) E_ρ^s(Y)_γ^-1x⊕ρ(γ) E_ρ^u(Y)_γ^-1x . Since the action of Γ on Y×^d commutes with the flow ϕ, the push-forward splitting is again ϕ-invariant. Moreover, for η∈Γ', we have γ_* E_ρ^s(Y)_η x = ρ(γ) E_ρ^s(Y)_γ^-1η x = ρ(γ) E_ρ^s(Y)_γ^-1ηγ (γ^-1x) = ρ(γ) ρ(γ^-1ηγ) E_ρ^s(Y)_γ^-1 x (since Γ' is normal in Γ and E_ρ^s(Y) is Γ'-invariant) = ρ(η) γ_* E_ρ^s(Y)_x . The same holds for γ_*E_ρ^u(Y), showing that the push-forward splitting is again Γ'-invariant. Finally, since ‖·‖ is Γ-invariant, we get that the push-forward splitting is a dominated splitting over Γ'∖ Y. By uniqueness of such a splitting we conclude that γ_* E_ρ^s(Y) ⊕γ_* E_ρ^u(Y) = E_ρ^s(Y) ⊕ E_ρ^u(Y) . Hence E_ρ^s(Y) ⊕ E_ρ^u(Y) is a dominated splitting over Γ∖ Y and ρ is k-Anosov in restriction to Y. Finally, one of the main properties of restricted Anosov representations over cocompact Γ-flows is their stability under small deformation. Let (Y,ϕ) be a cocompact Γ-flow. Then the space A^k_Y(Γ, (d,)) = {ρ: Γ→(d,) k-Anosov in restriction to Y} is open in (Γ,(d,)). Proposition <ref> essentially follows from a general stability theorem for dominated splittings (see for instance <cit.> Corollary 5.19, <cit.> Theorem 8.1 or <cit.> Theorem 7.1). A key point is that one can see the linear flows associated to representations in the neighbourhood of a representation ρ_0 as perturbations of the flow ϕ on the fixed vector bundle E_ρ_0(Y). This is ensured by the following Lemma. Let ρ_0:Γ→(d,) be a representation. Then there exists a neighborhood O of ρ_0 in (Γ,(d,)) and a continuous map g: O× Y →(d,), such that ρ(γ)g(ρ,y)=g(ρ,γ y)ρ_0(γ) and g|_ρ_0× Y≡ Id. Let K be a compact subset of Y such that Γ K=Y. Let U be an open, relatively compact subset of Y that contains K. Then there exists a continuous function f:Y→_⩾ 0 such that f=1 on K and f=0 on Y∖ U. Define g: (Γ,(d,))× Y → Mat_d× d() by g(ρ,y) = (∑_γ∈Γ f(γ^-1y))^-1∑_γ∈Γ f(γ^-1y)ρ(γ)∘ρ_0(γ)^-1 . Note that f(γ^-1y)= 0 for all but finitely many γ (by the properness of the Γ action and relative compactness of U), and ∑_γ∈Γ f(γ^-1y) ≥ 1 since Γ y ∩ K ≠∅ and f ≡ 1 on K. Hence g is well-defined and continuous. One easily verifies that ρ(γ) g(ρ, y) = g(ρ, γ y) ρ_0(γ) and that g|_ρ_0 × Y≡ Id. By the continuity of g, there is a neighborhood O of ρ_0 such that g|_O× K takes values in (d,). Finally the equivariance property (<ref>) implies that g|_O× Y takes values in (d,). The equivariance property of g precisely means that g(ρ,·) factors to a bundle isomorphism from E_ρ_0(Y) to E_ρ(Y) which depends continuously on ρ. The rest of the proof of Proposition <ref> follows classical stability arguments for dominated splittings. We sketch here for completeness. Let ι_0 denote the Γ-action on O× Y×^d defined by γ(ρ,y,v)=(ρ,γ y,ρ_0(γ)v) for any ρ∈ O, γ∈Γ and v∈^d, and let ι denote the Γ-action on O× Y×^d defined by γ(ρ,y,v)=(ρ,γ y,ρ(γ)v) for any ρ∈ O, γ∈Γ and v∈^d.
It is natural to extend ϕ on O× Y by ϕ^t(ρ,y)=(ρ,ϕ^t(y)). The map g given by the lemma above induces a ι_0-ι-equivariant map O× Y ×^d ≃ O× Y ×^d by (ρ,y,v)→ (ρ,y,g(ρ,y)v) for any ρ∈ O, y∈ Y and v∈^d, which is an isomorphism of vector bundles fibring over id_O× Y, and hence induces a continuous isomorphism ĝ: ι_0(Γ)\(O× Y ×^d) ≃ι(Γ)\(O× Y ×^d) . A dominated splitting E_ρ_0(Y)= E^s_ρ_0⊕ E^u_ρ_0 induces a dominated splitting ι_0(Γ)\ (O× Y×^d) = (O× E^s_ρ_0)⊕ (O× E^u_ρ_0) with respect to a Γ-invariant metric. Then there is a decomposition ι(Γ)\ (O× Y×^d) = ĝ(O× E^s_ρ_0) ⊕ĝ(O× E^u_ρ_0) . We denote E_O^s=ĝ(O× E^s_ρ_0) and E_O^u=ĝ(O× E^u_ρ_0) in brief. The decomposition is ϕ-invariant over {ρ_0}×Γ\ Y, but may not be ϕ-invariant over the whole O×Γ\ Y. We wish to make it ϕ-invariant by small deformation. More concretely, we define a flow Φ (respectively, Ψ) on the space of continuous sections of (E_O^u, E_O^s) (respectively, (E_O^s, E_O^u)) with norm at most 1, such that ϕ^t maps the graph of f_(ρ,y) to the graph of Φ^t(f_(ρ,y)) (respectively, Ψ^t(f_(ρ,y)))for any section f with norm at most 1 and (ρ,y)∈ O×Γ\ Y. Following the argument of Lemma 7.4 in <cit.>, Φ and Ψ are well-defined contraction maps. The images of the unique fixed point of Φ and the unique fixed point of Ψ, which are independent to t, give a new decomposition of ι(Γ)\ (O× Y×^d), which is ϕ-invariant. Up to replacing O by a smaller neighborhood of ρ_0, this decomposition of ι(Γ)\ (O× Y×^d) gives a dominated splitting for each piece ι(Γ)\ ({ρ}× Y×^d)≃ E_ρ(Y), which completes the proof. §.§ Relatively Anosov representations While Anosov representations are meant to generalize convex-cocompact representations to higher rank Lie groups, the notion of relatively Anosov representation introduced by Zhu <cit.> and Zhu–Zimmer <cit.>, which is equivalent to that of asymptotically embedded representation introduced previously by Kapovich–Leeb <cit.>, is meant to extend to higher rank the geometrically finite representations of relatively hyperbolic groups. Let (Γ,Π) be a relatively hyperbolic pair. A representation ρ:Γ→(d,) is k-Anosov relative to Π if there exists a pair of continuous maps ξ = (ξ^k,ξ^d-k):∂_∞ (Γ, Π) →_K(^d) × Gr_d-k(^d) which is * ρ-equivariant, that is, ρ(γ)ξ (·) = ξ(γ( ·)) for any γ∈Γ; * transverse, that is, ξ^k(x)⊕ξ^d-k(y) = ^d for any x y ∈∂_∞ (Γ,Π); * strongly dynamics preserving, that is, for any sequence (γ_n)_n∈⊂Γ such that γ_n→ x∈∂_∞ (Γ,Π) and γ_n^-1→ y∈∂_∞ (Γ,Π) as n→ +∞, one has ρ(γ_n)V →ξ^k(x) as n→ +∞ for any V∈_k(^d) that transverse to ξ^d-k(y). Let X be a Gromov model of (Γ,Π). Let ρ:Γ→(d,) be a k-Anosov representation relative to Π. The pair of transverse boundary maps associated to ρ defines a Γ-invariant and ϕ-invariant splitting of E_ρ(𝒢(X)), which Zhu and Zimmer prove to be dominated: Let ρ:Γ→(d,) be a k-Anosov representation relative to Π. Then ρ is k-Anosov in restriction to the Γ-flow 𝒢(X). The notion of being “Anosov in restriction 𝒢(X)” is called “Anosov relative to X” in <cit.>. The main example of a relatively representation is the inclusion of a geometrically finite subgroup of (2,). Recall that a subgroup Γ of (2,) is called a geometrically finite subgroup if the action of Γ on its limit set Λ(Γ) ⊂𝐏^1 is geometrically finite in the sense of Definition <ref> (see <cit.>). In particular, such a group Γ is hyperbolic relatively to the collection Π of stabilizers of its parabolic points. Let Γ be a geometrically finite subgroup of (2,) and Π the collection of its parabolic stabilizers. 
Then the inclusion Γ↪(2,) is 1-Anosov relative to Π. When Γ⊂(2,) is geometrically finite with Π a set of representatives of the conjugacy classes of maximal parabolic subgroups of Γ, the convex hull of Λ(Γ) in ^3, denoted by 𝒞(Γ), is a Gromov model of the relatively hyperbolic pair (Γ,Π). Since ℍ^3 is uniquely geodesic, 𝒞(Γ) contains a unique bi-infinite geodesic between two given points of Λ(Γ) ≃∂_∞ (Γ,Π), and one can thus identify 𝒢(𝒞(Γ)) with ℱ(Γ,Π) by an isomorphism of Γ-flows. In particular, the inclusion Γ↪(2,) is also 1-Anosov in restriction to ℱ(Γ,Π). While the relatively property is independent of the Gromov model, Zhu and Zimmer do not state that it implies the Anosov property in restriction to ℱ(Γ,Π). This will be proven by the second author in a forthcoming paper. Here, we do not care about this subtlety because we will ultimately consider geometrically finite subgroups of (2,), for ℱ(Γ,Π) is isomorphic to the geodesic flow of the convex core in ^3. §.§ Simple Anosov representations An example that motivates the study of restricted Anosov representations is the notion of primitive-stable representations introduced by Minsky <cit.>. Let F_n be a free group of order n. Let _Prim⊂ℱ(F_n) denote the primitive geodesic flow, which is the closure of the collection of all geodesic axes of primitive elements of F_n. The second author proved in <cit.> Section 8.1 that a representation ρ:F_n →(d,) is k-primitive-stable in the sense of Minsky <cit.>, Guéritaud–Guichard–Kassel–Wienhard <cit.> and Kim–Kim <cit.>, if and only if ρ is k-Anosov in restriction to _Prim. Here we introduce the notion of simple Anosov representations which can be thought of as an analogue of primitive-stable representations for closed surface groups. Let π_1(S) be the fundamental group of a closed connected oriented surface S of genus g≥ 2. Such a surface carries hyperbolic metrics, and for each hyperbolic metric h, there is a discrete and faithful representation j: π_1(S) →(2,) (a Fuchsian representation) such that (S,h) is isomorphic to j(π_1(S))∖^2. Since ^2 is uniquely geodesic with the π_1(S)-action cocompact, the π_1(S)-flow ℱ(π_1(S)) is isomorphic to the geodesic flow ψ on the unit tangent bundle T^1(^2) equipped with the π_1(S)-action given by ρ.[Concretely, one can fix a base point x_0∈^2 and parametrize each oriented geodesic in ℍ^2 by length, with the projection of the base point at 0. This gives flow-equivariant homomorphism from ∂_∞π_1(S)^(2) to T^1(^2).] Therefore, by Proposition <ref> and <cit.> Section 4, a representation ρ:π_1(S)→(d,) is k-Anosov (in the classical sense) if and only if ρ is k-Anosov in restriction to (T^1(^2),ψ). Now, let ℱ_p(S) ⊂ T^1(S) be the closure of the union of all closed geodesics with at most p self-intersections, and denote by ℱ_p(π_1(S)) its preimage in T^1(^2) ≃ℱ(π_1(S)). Then ℱ_p(π_1(S)) is again a cocompact π_1(S)-subflow of ℱ(π_1(S)). Let c ⊂ T^1(S) be a closed geodesic with more than p self-intersections. Then c⊂ T^1(S) ∖ℱ_p(S). Let v be a point in c and let T be the first positive time such that ψ_T(v) = v. By assumption, there exist p+1 pairs of times (t_i, t'_i) ∈ [0,T)^2 with t_i ≠ t_i' such that ψ_t_i(v) and ψ_t'_i(v) have the same projection to S. Up to replacing the initial vector v by another point on c, we can assume that none of the t_i is 0. The corresponding self-intersection in S is transverse and thus stable by small perturbation. 
In particular, if v_n converges to v, then for n large enough, one can find times (t_i,n, t'_i,n) ∈ (0,T)^2, 1⩽ i⩽ p+1 such that ψ_t_i,n(v_n) and ψ_t'_i,n(v_n) have the same projection to S. In other words, for n large enough, the geodesic arc ψ_(0,T)(v_n) has at least p+1 self-intersections. In particular, v cannot be approximated by points belonging to a closed geodesic with at most p self-intersections, showing that c⊂ T^1(S) ∖ℱ_p(S). Note that ℱ_p(S) ⊂ℱ_q(S) for p⩽ q. In particular, all these subflows contain ℱ_0(S), the closure of the union of all simple closed geodesics in S. This set has been studied by Birman and Series <cit.>, who proved in particular that it has Hausdorff dimension 1. It is thus a very “small” subset of the geodesic flow of S. We call it the Birman–Series set of S. A representation ρ:π_1(S)→(d,) is called simple k-Anosov if ρ is k-Anosov in restriction to ℱ_0(π_1(S)). § CONSTRUCTION OF SIMPLE ANOSOV REPRESENTATIONS We can now state more precisely the main result of the paper: For every p≥ 0 and every d≥ 2, there exists a representation ρ: Γ→(2d,) that is d-Anosov in restriction to ℱ_p(Γ) but not in restriction to ℱ_p+1(Γ). By stability of the restricted Anosov property, we can deform such a representation in order to guarantee further generic properties. In particular, we have the following: For every d≥ 2, there exists a non-empty open set of (Γ, (2d,)) consisting of Zariski dense representations that are either non-discrete or unfaithful. The rest of the section is devoted to the proof of these results. §.§ Geometrically finite representations of surface groups Let us start by recalling that there exist geometrically finite representations of closed surface groups with parabolic subgroups given by any prescribed simple closed curve. More precisely, let S be a closed connected oriented surface of genus g≥ 2, c a simple closed curve on S and Π = ⟨γ⟩ the cyclic subgroup of Γ = π_1(S) generated by an element of γ representing c. The following proposition is well-known to Kleinian group experts: There exists ρ: Γ→(2,) discrete and faithful with geometrically finite image and whose stabilizers of parabolic points are exactly the conjugates of Π. In other terms, there exists a relative 1-Anosov representation of the relatively hyperbolic pair (Γ,Π) into (2,). Such representations can be constructed using the Maskit combination theorem for amalgamated products (when γ is separating) and HNN extensions (see Theorem 4.104, Theorem 4.105 and Example 4.106 of Kapovich's book <cit.>). A priori, the representations obtained take values in _+(^3)≃(2,). However, discrete and faithful representations of surface groups into (2,) have vanishing second Stiefel–Whitney class and can thus be lifted to (2,). §.§ The induced representation Let us now recall classical construction of an induced representation from a finite index subgroup to the whole group. Let Γ be a countable group and Γ' be a subgroup of Γ of finite index. Let V be a finite dimensional complex vector space and let ρ:Γ'→(V) be a linear representation. Recall that in representation theory of groups, giving such representation ρ is equivalent to equipping V with a structure of [Γ']-module. The induced representation Ind^Γ_Γ'(ρ) of Γ is the representation associated to the [Γ]-module structure of [Γ]⊗_[Γ'] V. If Γ' has index d in Γ, then [Γ] is a free [Γ']-module of rank d, hence Ind^Γ_Γ'(ρ) is a representation of rank d·dim_(V). 
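Before the general coset description given next, here is a minimal worked example (our addition, in LaTeX notation) in the smallest case, where Γ'⊴Γ has index d = 2. Pick t ∈ Γ ∖ Γ', take coset representatives γ_1 = id and γ_2 = t, and identify [Γ]⊗_[Γ'] V with V ⊕ V. Then, in block form,
\[
  \mathrm{Ind}^{\Gamma}_{\Gamma'}(\rho)(\gamma') \;=\;
  \begin{pmatrix} \rho(\gamma') & 0 \\ 0 & \rho(t^{-1}\gamma' t) \end{pmatrix}
  \quad (\gamma' \in \Gamma'),
  \qquad
  \mathrm{Ind}^{\Gamma}_{\Gamma'}(\rho)(t) \;=\;
  \begin{pmatrix} 0 & \rho(t^{2}) \\ \mathrm{Id}_V & 0 \end{pmatrix},
\]
using that t^2 ∈ Γ'. In particular the restriction to Γ' is the direct sum of ρ and its conjugate ρ(t^{-1}· t), in agreement with the description below.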
In more concrete terms, pick a collection {γ_1=id, γ_2, ..., γ_d}⊂Γ of representatives of left cosets of Γ', so that Γ=⊔_i=1^dγ_iΓ' . The [Γ]-module [Γ]⊗_[Γ'] V can be identified with ⊕_i=1^dγ_iV, where each γ_i V is a copy of V. For any 1⩽ i ⩽ d, we denote the copy of v∈ V in γ_i V by (γ_iv). Then the induced Γ-action is defined by γ(γ_iv) = (γ_j(γ'v)) for any γ∈Γ, where γ_j and γ'∈Γ' are such that γγ_i=γ_jγ'. In particular, when Γ' is a normal subgroup in Γ, the restriction of Ind^Γ_Γ'(ρ) to Γ' is precisely ⊕_i=1^dρ_i, where ρ_i is the representation of Γ' defined by ρ_i(γ) = ρ(γ_i^-1γγ_i) for all γ∈Γ'. Let Y be a cocompact Γ-flow. If Γ'⊂Γ is a normal subgroup of index d and ρ: Γ' →(2,ℂ) is 1-Anosov in restriction to Y, then Ind^Γ_Γ'(ρ): Γ→(2d,ℂ) is d-Anosov in restriction to Y. Since ρ: Γ' →(2,ℂ) is 1-Anosov in restriction to Y, there exists a dominated splitting of rank 1, denoted by Y×^2 = E^s_ρ⊕ E^u_ρ, where E^s_ρ is the stable direction and E^u_ρ is the unstable direction, with respect to a ρ-invariant metric ‖·‖ on Y×^2. Following Remark <ref>, we may assume that ‖·‖ is of unit volume and that E^s_ρ and E^u_ρ are orthogonal with respect to ‖·‖. Fix y∈ Y, v∈ (E^s_ρ)_y and w∈ (E^u_ρ)_y. Since both E^s_ρ and E^u_ρ have rank 1, the previous conditions on ‖·‖ imply that the product ‖ϕ^t(v)‖·‖ϕ^t(w) ‖ is constant, and the dominated splitting condition then gives us constants C,λ >0 (independent of y,v,w), such that ‖ϕ^t(v)‖⩽ C e^-λ t‖ v‖ and ‖ϕ^t(w)‖⩾ C e^λ t‖ w‖ , for all t∈_+. This will imply that the direct sum of several such dominated splittings is again a dominated splitting. Let {γ_1= 𝕀, γ_2, …, γ_d} be a collection of representatives of the left cosets of Γ'⊂Γ. For each i∈{1,… ,d}, we define the subbundles E^s_i and E^u_i of Y×^2 by (E^s_i)_y=(E^s_ρ)_γ_i^-1y , (E^u_i)_y=(E^u_ρ)_γ_i^-1y . The splitting E^s_i⊕ E^u_i is the pull-back of the splitting E^s_ρ⊕ E^u_ρ by γ_i^-1 acting on Y. One can easily show that Y×^2 = E^s_i⊕ E^u_i is a ϕ-invariant and ρ_i-equivariant splitting, where ρ_i(γ) = ρ(γ_i^-1γγ_i) for any γ∈Γ'. Moreover, this splitting is dominated with respect to the pull-back metric γ_i^-1^*‖·‖. Consider Y×^2d = ⊕_i=1^d Y×^2, the direct sum of d copies of the trivial rank 2 bundle, where the i-th copy carries the splitting E^s_i⊕ E^u_i. Setting F^s = ⊕_i=1^d E^s_i and F^u = ⊕_i=1^d E^u_i, one obtains a rank d splitting which is equivariant for the representation ⊕_i=1^d ρ_i. By (<ref>), this splitting is dominated (for the metric ⊕_i=1^d γ_i^* ‖·‖, for instance). Then we conclude that ⊕_i=1^d ρ_i: Γ'→(2d,) is d-Anosov in restriction to Y. Since ⊕_i=1^d ρ_i is the restriction to Γ' of Ind^Γ_Γ'(ρ), we conclude that Ind^Γ_Γ'(ρ) is d-Anosov in restriction to Y by Proposition <ref>. §.§ Construction of simple Anosov representations We now have all the tools to prove Theorem <ref>. Let S = Γ∖^2 be a closed connected oriented hyperbolic surface. Fix p≥ 0 and d≥ 2. There exists a Galois covering π:S→ S of degree d and a simple closed geodesic c ∈S such that π(c) has p+1 self-intersections. An example of such a pair (S,c) is shown in Figure <ref>. Let Γ'⊂Γ be the fundamental group of S and let Π be the cyclic subgroup generated by a representative γ of c in Γ'. By Proposition <ref>, there exists a representation ρ':Γ'→(2,) which is 1-Anosov relative to Π. We set ρ = Ind^Γ_Γ'(ρ'): Γ→(2d,) . The representation ρ is d-Anosov in restriction to ℱ_p and not d-Anosov in restriction to ℱ_p+1. Denote by Y ⊂ T^1(^2) the preimage of T^1(S) ∖π(c). It is an open Γ-subflow of T^1(^2).
By Proposition <ref> and Corollary <ref>, the representation ρ' is 1-Anosov in restriction to any cocompact subflow of Y. By Lemma <ref>, the induced representation ρ is d-Anosov in restriction to any cocompact subflow of Y. Finally, by Proposition <ref>, the curve π(c) is disjoint from ℱ_p(S). Hence ℱ_p(Γ) is a cocompact subflow of Y, and ρ is d-Anosov in restriction to ℱ_p. On the other hand, ρ(γ) is a direct sum of d matrices in (2,), hence its eigenvalues have the form λ_1, …, λ_d, λ_d^-1, …, λ_1^-1, with |λ_1|≥…≥|λ_d|≥ 1. One of these matrices is ρ'(γ) which has eigenvalues ± 1. We deduce that |λ_d| =1 = |λ_d^-1| and ρ(γ) is not d-proximal. Since π(c) ⊂ℱ_p+1(S), this implies that ρ is not d-Anosov in restriction to ℱ_p+1(S). This concludes the proof of Theorem <ref>, hence that of Theorem <ref>. §.§ Deformations and generic simple Anosov representations In this section, we consider small deformations of the above constructed representation to deduce Corollary <ref>. Let _df(Γ,(n,)) ⊂(Γ,(n,)) denote the set of discrete and faithful representations and _ndf(Γ,(n,)) its complement. As a consequence of the Zassenhaus lemma, _df(Γ,(n,)) is closed, hence _ndf(Γ,(n,)) is open in (Γ,(n,)). Let ^Z(Γ,(n,)) ⊂(Γ,(n,)) denote the set of representations with Zariski dense image. It is a Zariski open subset of the representation variety which intersects every irreducible component (see for instance <cit.>). Finally, denote by An^d_ℱ_p(Γ, (2d,)) the set of representations that are d-Anosov in restriction to ℱ_p, which is open by Proposition <ref>. Corollary <ref> admits the following reformulation: The intersection _ndf(Γ,(2d,)) ∩^Z(Γ,(2d,)) ∩An^d_ℱ_p(Γ, (2d,)) is non-empty. In the previous section, we constructed a representation ρ∈An^d_ℱ_p(Γ, (2d,)) of the form Ind^Γ_Γ'(ρ') where Γ' is a subgroup of Γ of index d and ρ':Γ'→(2,) is discrete and faithful but not convex-cocompact. By Sullivan's stability theorem for Kleinian groups <cit.>, there exists a sequence ρ'_n ∈_ndf(Γ', (2,)) converging to ρ'.[In this precise case, there are more explicit ways to construct the sequence ρ_n than to invoke Sullivan's stability.] Then ρ_n = Ind^Γ_Γ'(ρ'_n) belongs to _ndf(Γ, (2d,)) and converges to ρ. For n large enough, ρ_n is d-Anosov in restriction to ℱ_p and we conclude that _ndf(Γ, (2d,))∩An^d_ℱ_p(Γ, (2d,))≠∅ . Finally, since ^Z(Γ, (2d,)) is the complement of a subvariety of (Γ, (2d,)) that does not contain an irreducible component, ^Z(Γ, (2d,)) intersects every non-empty open subset, hence ^Z(Γ, (2d,))∩_ndf(Γ, (2d,))∩An^d_ℱ_p(Γ, (2d,))≠∅ . § MAPPING CLASS GROUP DYNAMICS In the final section of this paper, we introduce the action of the mapping class group (S) on geodesic flows and character varieties. We remark that the subflows ℱ_p(S) are “invariant under the Mapping Class Group”, and that ℱ_0(S) is the unique minimal subflow with this property. We then deduce that the domains of Anosov representations in restriction to ℱ_p form domains of discontinuity for the mapping class group action on character varieties, among which the domains of Anosov representations in restriction to the Birman–Series flow are maximal. §.§ Mapping class group invariant closed subflows Let Γ be a finitely generated hyperbolic group. Recall that every automorphism of Γ is a quasi-isometry, and thus extends to a homeomorphism of ∂_∞Γ. This defines an action of (Γ) on ∂_∞Γ. The restriction of this action to the inner automorphism group (Γ) on ∂_∞Γ is the standard action of Γ (i.e.
it coincides with the action on the boundary extending left translations on the Cayley graph). The action of (Γ) on the boundary at infinity naturally induces an action on ∂_∞^(2)Γ. Now, every cocompact Γ-subflow of ℱ(Γ) has the form Y_P = P×ℝ, where P is a closed, Γ-invariant subset of ∂_∞^(2)Γ. Given a subgroup H of (Γ) with Ĥ its preimage in (Γ), we say that a cocompact Γ-subflow Y_P is H-invariant if P is Ĥ-invariant. This is well-defined since P is always (Γ)-invariant by definition. Recall the notations of Section <ref>, * for F_n, the free group of rank n, the primitive geodesic flow _Prim is an (F_n)-invariant F_n-subflow, since the set of primitive elements is (F_n)-invariant; * for a compact hyperbolic surface S (possibly with boundary), the flow of geodesics with at most p self-intersections ℱ_p(S) is a (S)-invariant π_1(S)-subflow, where (S) denotes the mapping class group of S. In particular, the Birman–Series flow is (S)-invariant. Here we prove that it is the unique minimal (S)-invariant subflow. Let S be a closed connected hyperbolic surface, H a finite index subgroup of (S) and Y a (non-empty) H-invariant cocompact subflow of ℱ(π_1(S)). Then Y contains the Birman–Series set ℱ_0(π_1(S)). Let ℓ be a geodesic contained in Y and let c be a simple closed geodesic that intersects ℓ. Let T_c denote the Dehn twist along c. Then there exists k>0 such that T_c^k ∈ H. Since Y is closed and H-invariant, it contains the closure of ⋃_n∈ℤT_c^nk(ℓ), which contains c. We conclude that Y contains any simple closed geodesic that intersects it. In particular it contains a simple closed curve c_0. Let c be any simple closed geodesic. There exists a simple closed geodesic c' that intersects both c and c_0. By the preceding argument, Y contains c', hence it contains c. We conclude that Y contains every simple closed curve, and thus contains the Birman–Series set ℱ_0(S). §.§ Action on character varieties Recall that for a Γ-flow (Y,ϕ), An^k_Y(Γ, (d,)) ⊂(Γ,(d,)) denotes the collection of representations of Γ into (d,) that are k-Anosov in restriction to Y. We denote the collection of such representations modulo conjugation by 𝒜^k_Y(Γ, (d,)) = An^k_Y(Γ, (d,))/ (d,) ⊂χ(Γ, (d,)). Now we are ready to finish the proof of Proposition <ref>. Let S be a closed connected hyperbolic surface. Then the set 𝒜^d_ℱ_p(S)(π_1(S),(2d,)) is open in χ(π_1(S),(2d,)), and (S) acts properly discontinuously on it. We will need the following theorem for the proof. Let 𝒮 be a finite, symmetric generating set of Γ, and let λ⩾ 1 and ϵ,b⩾ 0 be any given constants. Let Q_P,b={ℓ:ℝ→(Γ,𝒮) | ℓ is a geodesic with d(ℓ(0),𝕀)⩽ b and (ℓ(-∞),ℓ(+∞))∈ P } and Γ^+_P,b={ℓ(t) | ℓ∈ Q_P,b, t∈ℝ_+ with ℓ(t)∈Γ a vertex of (Γ,𝒮) }⊂Γ . Let O be an open, relatively compact subset of A^k_S_P(Γ, (d,)). Then there exist constants A⩾ 1 and B⩾ 0, such that log σ_k(ρ(γ))/σ_k+1(ρ(γ)) ⩾ A^-1|γ| - B , for any ρ∈ O and γ∈Γ^+_P,b, where |γ| is the word length of γ with respect to 𝒮, and σ_k(ρ(γ)) is the kth singular value of ρ(γ). Let 𝒮 = {s_1,s_2,...,s_n} be a generating set of π_1(S) such that * Each s_i represents a simple closed geodesic; * For any i ≠ j, at least one of s_is_j and s_is_j^-1 represents a simple closed geodesic. Let D denote the collection of all s_i and all s_is_j^± 1 that represent simple closed geodesics. Let P⊂∂_∞^(2)π_1(S) be the closed subset such that S_P = ℱ_p(S). Since A^d_ℱ_p(S)(Γ, (2d,)) is open in (Γ, (2d,)) (by Proposition <ref>) and conjugation invariant, we have that 𝒜^d_ℱ_p(S)(Γ, (2d,)) is open in χ(Γ, (2d,)).
Let L be a compact subset of 𝒜^d_ℱ_p(S)(π_1(S),(2d,)). We pick b⩾ 0 large enough such that all powers of elements in D are contained in Γ^+_P,b. By Theorem <ref>, for any ρ_0∈ A^d_ℱ_p(S)(Γ, (2d,)) with [ρ_0]∈ L, there exists a relatively compact neighborhood O and constants A_O⩾ 1 and B_O⩾ 0, such that A_O^-1|γ| - B_O ⩽log σ_d(ρ(γ))/σ_d+1(ρ(γ)) ⩽log σ_1(ρ(γ))/σ_2d(ρ(γ)) ⩽ A_O|γ| , for any ρ∈ O and γ∈Γ^+_P,b. Here the third inequality follows from the fact that π_1(S) is finitely generated. Then for any γ∈ D, we have A_O^-1‖γ‖⩽log |λ_d(ρ(γ))|/|λ_d+1(ρ(γ))| = lim_n→∞ (1/n) log σ_d(ρ(γ^n))/σ_d+1(ρ(γ^n)) ⩽ A_O‖γ‖ , where ‖γ‖ = lim_n→∞ |γ^n|/n is the stable length of γ in (π_1(S),𝒮) and λ_k denotes the kth eigenvalue (ranked by absolute value). Since both stable lengths in (π_1(S),𝒮) and eigenvalues are invariant by conjugation, the inequality holds for all ρ∈(d,)· O. Let [O] denote the set of conjugation classes of (d,)· O; then L is covered by finitely many such open sets [O]. Therefore, there exists a constant A, such that A^-1‖γ‖⩽log |λ_d(ρ(γ))|/|λ_d+1(ρ(γ))| ⩽ A‖γ‖ , for all γ∈ D. Suppose there exists [f]∈(π_1(S)) with a representative f∈(π_1(S)), such that there is a representation [ρ]∈ L with [ρ∘ f]∈ L, where ρ is a representative of [ρ]. Then we have ‖ f(γ)‖⩽ A^2 ‖γ‖ , for any γ∈ D. Therefore, to show that the (S)-action is properly discontinuous, it suffices to check that {[f]∈(π_1(S)) | ‖ f(γ)‖⩽ A^2 ‖γ‖ for all γ∈ D } is finite. This immediately follows from the proof of Lemma 12 in <cit.>. This provides a sequence of domains of discontinuity 𝒜^d_ℱ_p+1(S)(π_1(S),(2d,))⊂𝒜^d_ℱ_p(S)(π_1(S),(2d,)) , all of which are contained in 𝒜^d_ℱ_0(S)(π_1(S),(2d,)).
http://arxiv.org/abs/2307.00646v1
20230702195022
Inflationary gravitational waves, pulsar timing data and low-scale-leptogenesis
[ "Satyabrata Datta" ]
hep-ph
[ "hep-ph", "astro-ph.CO" ]
satyabrata.datta@saha.ac.in Saha Institute of Nuclear Physics, 1/AF, Bidhannagar, Kolkata 700064, India. Homi Bhabha National Institute, 2nd floor, BARC Training School Complex, Anushaktinagar, Mumbai, Maharashtra 400094, India. We show that low-scale leptogenesis mechanisms that exhibit right-handed neutrino mass-dependent non-standard cosmology can make blue-tilted inflationary gravitational waves compatible with recent findings of a stochastic gravitational wave (GW) background by the pulsar-timing arrays (PTAs). The right-handed neutrino mass scale has to be 𝒪( GeV) to bring down the amplitude of such gravitational waves to the level of the PTAs via entropy production. Besides generating one GW peak in the nHz range, such a scenario creates another one in the LIGO ballpark. Thus the recent detection by PTAs is not only exciting for GWs in the nHz range; it paves the way to test and constrain mechanisms such as low-scale leptogenesis with a low-frequency and correlated measurement at high frequencies. Inflationary gravitational waves, pulsar timing data and low-scale-leptogenesis Satyabrata Datta August 1, 2023 =============================================================================== § INTRODUCTION In 2020, different pulsar timing array (PTA) collaborations such as NANOGrav, EPTA, and PPTA reported strong evidence for a stochastic common spectrum process over independent pulsar red noises <cit.>. Interestingly, these PTA collaborations, along with InPTA and CPTA, have recently released their latest data asserting significant evidence for a stochastic gravitational wave background (SGWB) <cit.>. The signal, this time, is strengthened by the characteristic pulsar angular correlations, known as the quadrupolar Hellings-Downs correlation <cit.>, unique to an SGWB. The observed signal can originate from various sources, including astrophysical and primordial ones. On the astrophysical side, the most plausible explanation would be GWs from supermassive black hole binary (SMBHB) mergers <cit.>. However, various well-motivated cosmological sources better fit the recent data <cit.>. As expected, after the recent release of PTA data, several interesting explanations came forward, which include, e.g., topological defects such as cosmic strings <cit.>[Note that, unlike the previous release of 12.5 yr NANOGrav data, to which stable cosmic strings provided a good fit <cit.>, the recent data slightly disfavors stable cosmic strings <cit.>.], domain walls <cit.>, quantum fluctuations during inflation leading to primordial black holes (PBHs) <cit.>, and dark first-order phase transitions around MeV <cit.>. There are also several explanations of this observation from the point of view of axions and axion-like particles <cit.>, dark matter <cit.>, QCD crossover <cit.>, astrophysical neutrino oscillations <cit.>, primordial magnetic fields <cit.>, etc. In this article, from an infrared perspective, we discuss the possibility of inflationary blue-tilted gravitational waves (BGWs) as one of the possible sources of GWs in the nano-frequency range reported by the PTAs. Generally, to explain the PTA data, a large tensor spectral index is required, which saturates the big bang nucleosynthesis (BBN) bound on the effective number of neutrino species for a high reheating temperature.
However, if a non-standard matter epoch leading to entropy production is present before the most recent radiation domination, BGWs with a sizeable spectral index become a viable option even for a high reheating temperature. Not only that, such intermediate matter domination also leaves characteristic imprints of the GWs spectrum, that could be tested or constrained by high-frequency detectors such as LIGO <cit.>. We exploit this idea within the low-scale leptogenesis (LSL) framework, which provides such matter domination due to a long-lived scalar field leading to small right-handed (RH) neutrino mass via phase transition. In this framework, the lifetime of the scalar field, and the amount of entropy production depend on the right-handed neutrino masses, relating BGWs and the scale of leptogenesis <cit.>. The RH neutrino mass window corresponding to a successful leptogenesis spans a wide range from 10^15 GeV down to a few MeV <cit.>. However, the electroweak naturalness condition puts an upper bound on the RH neutrino masses: M_i < 10^7 GeV <cit.>, favoring LSLs. In addition, because terrestrial experiments such as LHC can reach the energy scale only up to a few TeV, leptogenesis mechanisms with smaller RH neutrino masses M_i < TeV have much better experimental prospects. Before we proceed to the discussion on LSL, right-handed neutrino mass-dependent non-standard cosmology, and an LSL-fit to the PTA data with BGWs, let us note that amplitudes of the GWs from the simplest single-field slow-roll inflation are nearly scale-invariant and not large enough to be detectable with the present sensitivities of the GW detectors <cit.>. Nonetheless, plenty of models, beyond the simplest ones, predict enhanced or blue-tilted GWs <cit.>, which are detectable. If the findings of the PTAs are due to BGWs <cit.>, one should refer to those models. In our previous study on the same topic <cit.>, we fitted LSL to NANOGrav 12.5 yrs data with a scalar to tensor ratio r=0.06, and n_T∼ 0.8. The recent NANOGrav data changed at the level of amplitude as well as spectral index <cit.>. This makes us rethink on the LSL fit to the BGWs with the new data. § HEAVY NEUTRINO MASS DEPENDENT NON-STANDARD COSMIC EVOLUTION To illustrate the scalar dynamics in a realistic scenario, we examine the breaking of a gauged U(1)_B-L symmetry <cit.>, which serves as the mechanism responsible for generating non-zero masses for RH neutrinos. As the temperature drops, the scalar field transits from Φ=0 towards its vacuum expectation value Φ=v_Φ. The finite temperature potential that restores the symmetry at higher temperatures is given by <cit.> V(Φ,T)=λ/4Φ^4+D(T^2-T_0^2)Φ^2-ETΦ^3, where D=3 g^'^2+4λ/24,  E=3 g^'^3+ g^'λ+3λ^3/2/24 π, T_0=√(12λ)v_Φ/√(3 g^'^2+4λ), g^' is the gauge coupling[Seesaw models with U(1) gauge symmetry is highly motivated by Grand Unified Theory (GUT). In such cases, the scalar field possesses a gauge charge, leading to the natural inclusion of the gauge coupling in the potential at finite temperature.], and the vacuum expectation value v_Φ=μ/√(λ) has been determined from the zero temperature potential V(Φ,0)=-μ^2/2Φ^2+λ/4Φ^4. The structure of the finite temperature potential plays a crucial role in determining the nature of the transition process. The last term in Eq.(<ref>) generates a potential barrier causing a secondary minimum at Φ≠0, which at T=T_c becomes degenerate with the Φ=0 one. At T_0  (≲ T_c), the potential barrier vanishes, making the minimum at Φ=0 a maximum <cit.>. 
The critical temperature T_c and the field value Φ_c≡Φ(T_c) are given by <cit.> T_c=T_0√(λ D)/√(λ D-E^2),  Φ_c = √(4 D/λ(T_c^2-T_0^2)). A non-zero value of E in Eq.(<ref>) generally leads to a first-order transition with a strength determined roughly by the order parameter Φ_c/T_c<cit.>. Nonetheless, if Φ_c/T_c≪ 1, the transition is weakly first-order, which can be treated as a second-order transition because the potential barrier disappears quickly. In this case, the transition can be described by rolling of the field Φ from Φ=0 to Φ = v_Φ, which we consider in this article. We work with the values of λ and g^' so that Φ_c/T_c≪ 1 is fulfilled. For this, we choose λ≃ g^'^3 and g^'≲ 10^-2, which correspond to the order parameter Φ_c/T_c≲ 0.08. Once the field rolls down to the true vacuum, it oscillates around v_Φ. For generic potential V(Φ)=αΦ^β, the equation of state of such a coherent oscillation can be computed as <cit.>ω=(β-2)(β+2)^-1. Assuming the oscillation of the scalar field is driven by the dominant quadratic term in the potential and expanding the zero temperature potential around the true vacuum, we obtain α=λ v_Φ^2 and β=2. Therefore, the scalar field behaves like matter (w=0, cf. Eq.(<ref>)). One can also compute the angular frequency of oscillation, which is m_Φ=√(2λ) v_Φ. The lifetime of the scalar field is determined by the decay channels it undergoes. Because we assume λ≃g^' ^3 and g^'≪ 1, Φ→ Z^' Z^' is not allowed from kinematic consideration. Additionally, as we assume the absence of coupling with the SM Higgs, the dominant competing decay channels are Φ→ N_iN_i and Φ→ ff̅V, where f and V are SM fermions and vector bosons. The former corresponds to a tree-level process, whereas the latter involves a one-loop triangle process. The strength of these two processes is governed by the couplings f_N and g^' correspondingly. Since we want the dynamics to be controlled by f_N (to achieve both the suppression of the RH neutrino masses (M_i=f_N v_Φ) to initiate LSL and to introduce non-standard cosmological evolution), we shall always work with f_N and g^' such that Φ→ N_iN_i process dominates (Γ_N^Φ≳Γ_ff̅V), i.e., this process determines the lifetime of Φ. The amount of entropy produced due to the late time decay of Φ is given by <cit.>, κ^-1≃(90/π^2 g_*)^1/4ρ_R( T_c )√(Γ_N^ΦM̃_Pl)/ρ_Φ( T_c ) T_c ≃3^1/4(30/π^2 g_*)^-3/4 T_c^3√(Γ_N^ΦM̃_Pl)/V_ eff( 0,T_c ), where M̃_Pl=2.4× 10^18 GeV is the reduced Planck constant and ρ_Φ( T_c ) ≡ V_eff( 0,T_c)≃λ/4v_Φ^4. A large number of literature <cit.> has thoroughly investigated the impact of entropy production during matter/dust domination on the spectral distortion of GWs. This process leads to the suppression of GW amplitudes, which we will delve into further in the subsequent sections of our study. § TENSOR BLUE-TILT, LOW-SCALE-LEPTOGENESIS AND PTA DATA In this section, we will provide a brief overview of the production of GWs during inflation and their propagation through the scalar-dominated cosmic epoch until the present day which may shed light on the recent PTA findings. GWs are described with the perturbed FLRW line element: ds^2=a(τ)[-dτ^2+(δ_ij+h_ij)dx^idx^j)], where τ is the conformal time, a(τ) is the scale factor. The transverse and traceless nature of the 3× 3 symmetric matrix h_ij, as indicated by ∂_ih^ij=0 and δ^ijh_ij=0, characterizes the GWs. Due to their weak nature with |h_ij|≪1, the linearized evolution equation ∂_μ(√(-g)∂^μ h_ij)=16π a^2(τ) π_ij would suffice to study the propagation of the GWs. 
The quantity π_ij is the tensor part of the anisotropy stress, which serves as an external source coupling to h_ij, and in a realistic cosmic setting, it only affects the GW spectrum at scales larger than those of PTAs, e.g., due to neutrino free streaming <cit.>. It is convenient to express h_ij in the Fourier space: h_ij(τ, x⃗)=∑_λ∫d^3k⃗/(2π)^3/2 e^ik⃗.x⃗ϵ_ij^λ(k⃗)h_k⃗^λ(τ), where the index λ=“+/-" denotes the two polarisation states of the GWs. The polarization tensors, in addition to being transverse and traceless, also satisfy the conditions: (i) ϵ^(λ)ij(k⃗)ϵ_ij^(λ^')(k⃗)=2δ_λλ^' (ii) ϵ^(λ)_ij(-k⃗)=ϵ^(λ)_ij(k⃗). Assuming isotropy and the similar evolution of each polarisation state, we can rename h_k⃗^λ(τ) as h_k(τ), where k=|k⃗|=2π f with f being the frequency of the GWs at the present time at a_0=1. Considering the sub-dominant contribution from π_ij, the equation governing GW propagation in Fourier space can be expressed as follows ḧ_k+2ȧ/aḣ_k+k^2h_k=0, where the dot indicates a conformal time derivative. Using Eq.(<ref>) and Eq.(<ref>), one calculates the energy density of the GWs as <cit.>ρ_GW=1/32π G∫dk/k(k/a)^2T_T^2(τ, k)P_T(k), where T_T^2(τ, k)=|h_k(τ)|^2/|h_k(τ_i)|^2 is a transfer function which is computed from Eq.(<ref>), with τ_i as an initial conformal time. The quantity P_T(k)=k^3/π^2|h_k(τ_i)|^2 serves as a characterization of the primordial power spectrum, establishing a connection to inflation models with specific forms. Generally, P_T(k) is parametrised as a power-law given by P_T(k)=r A_s(k_*)(k/k_*)^n_T, where r≲ 0.06<cit.> is the tensor-to-scalar-ratio, A_s ≃ 2× 10^-9 is the scalar perturbation amplitude at the pivot scale k_*=0.01 Mpc^-1 and n_T is the tensor spectral index. The GW energy density relevant for detection purposes is expressed as Ω_GW(k)=k/ρ_cdρ_GW/dk, where ρ_c=3H_0^2/8π G with H_0≃ 2.2 × 10^-4  Mpc^-1 being the present-day Hubble constant. From Eq.(<ref>), the quantity Ω_GW(k) is derived as Ω_GW(k)=1/12H_0^2(k/a_0)^2T_T^2(τ_0,k)P_T(k), τ_0=1.4× 10^4  Mpc. It is worth noting that the simplest single-field slow-roll inflation models satisfy a consistency relation between r and n_T, given by n_T=-r/8<cit.>. As a result, the spectral index of GWs in these models is slightly red-tilted, with n_T≲0. However, in many inflationary models <cit.>, n_T is positive (n_T>0), which deviates from the scale-invariant spectrum predicted by the simplest single-field slow-roll inflation models. It should be noted that there might be scale dependence in the spectral index due to higher-order corrections <cit.>. A key caveat of the possibility of BGWs is the Δ N_eff bound from BBN and the lack of detection of any SGWB by LIGO <cit.>. It is worth noting that if there is any late-time entropy production that occurs through the decay of an oscillating scalar field, matter era after reheating, it can significantly modify the transfer function during the intermediate matter era and suppress the GW spectrum for modes that entered the horizon during the matter-dominated phase. This can potentially alleviate the constraints from BBN and LIGO. Therefore, the possibility of late-time entropy production can serve as a probe of any intermediate matter era, which we will briefly discuss in the following section. *The imprints of an early matter era on SGWB from inflationary blue-tilt: There have been various attempts to compute the transfer function analytically <cit.>, and one commonly utilized approach is described in Refs. <cit.>. 
In the context of an intermediate matter domination, the expression for T_T^2(τ_0,k) is given by T_T^2(τ_0,k)=F(k)T_1^2(ζ_ eq)T_2^2(ζ_Φ)T_3^2(ζ_Φ R)T_2^2(ζ_R), where F(k) reads F(k)=Ω_m^2( g_*(T_k, in)/g_*0)(g_*s0/g_*s(T_k, in))^4/3(3j_1(kτ_0)/kτ_0)^2. Here j_1(kτ_0) is the spherical Bessel function, Ω_m=0.31, g_*0=3.36, g_*0s=3.91 and an approximate analytical form of the scale-dependent g_*0(s)(T_k, in) used in Eq.(<ref>) is given by <cit.> g_*0(s)(T_k, in)=g_*0(A+ tanh k_1/A+1)(B+ tanh k_2/B+1), where A=-1-10.75/g_*0(s)/-1+10.75/g_*0(s),  B=-1-g_max/10.75 /-1+g_max/10.75, and k_1=-2.5  log_10(k/2π/2.5× 10^-12 Hz), k_2=-2.0  log_10(k/2π/6.0× 10^-9 Hz). The transfer functions are given by T_1^2(ζ)=1+1.57ζ+ 3.42 ζ^2, T_2^2(ζ)=(1-0.22ζ^1.5+0.65ζ^2 )^-1, T_3^2(ζ)=1+0.59ζ+0.65 ζ^2, where ζ_i ≡ k/k_i , with k_i s being the modes entering the horizon according to Fig.<ref> and are derived as k_ eq = 7.1× 10^-2Ω_m h^2 Mpc^-1, k_Φ=1.7× 10^14(g_*s(T_Φ)/106.75)^1/6(T_Φ/10^7 GeV) Mpc^-1, k_Φ R=1.7× 10^14κ^2/3(g_*s(T_Φ)/106.75)^1/6(T_Φ/10^7 GeV) Mpc^-1, and k_R=1.7× 10^14κ^-1/3(g_*s(T_R)/106.75)^1/6(T_R/10^7 GeV) Mpc^-1 with T_Φ≃(90/π^2 g_*)^1/4√(Γ_N^ΦM̃_Pl). Given the above set of equations and using κ from Eq.(<ref>), we evaluate Eq.(<ref>) for different benchmark values listed in Table <ref> along with keeping in mind the strong constraints coming from (i) the LIGO O3 bound on SGWBs, i.e. Ω_GW (25 Hz)h^2≤ 2.2× 10^-9 (ii) and the Δ N_eff bound from BBN, i.e. ∫_f_low^f_highf^-1df Ω_GW(f)h^2≲ 5.6× 10^-6Δ N_eff, where Δ N_ eff≲ 0.2. The lower limit of the integration is set by the frequency associated with the mode entering the horizon during the BBN epoch, which we approximate as f_ low≃ 10^-10 Hz. Conversely, the upper limit is determined by the highest frequency of GWs determined by the Hubble rate at the end of inflation, given by f_ high=a_ endH_ end/2π. The values of f_ high vary for different benchmark points, yet we observe that adopting a universal value of f_ high≃ 10^5 Hz does not significantly affect the results. Hence, we utilize f_ high=10^5 Hz to obtain the BBN constraint across all values of g^' and M_i. Let us now focus on the analysis of the LSL scenario with respect to the recent NANOGrav-2023 data. The 15 years NANOGrav data are expressed in terms of power-law signal with characteristic strain given by h_c(f)=A_CP( f/f_yr)^(3-γ_CP)/2 with f_yr=1yr^-1 and A_CP being the characteristic strain amplitude. The abundance of GWs has the standard form and can be recast as: Ω(f)=Ω_yr(f/f_yr)^(5-γ_CP) with Ω_yr=2π^2/3H_0^2A_CP^2 f_yr^2 We include the representation of four benchmark points (as given in Table <ref>) characterized by distinct values of (r,n_T), as well as the remaining parameters v_Φ, g^', M_N, and T_R, which govern the dynamics of LSL and effectively avoid constraints imposed by LIGO and BBN. Additionally, we do a simple power law fit to the BGW spectra in Eq.(<ref>) using Eq.(<ref>) and plot the benchmark points as solid red diamond, blue square, green spades and magenta tree in Fig.<ref> illustrating the correlation between the spectral index (γ_CP) and the amplitude (A_CP). We juxtapose the results with the 3σ, 2σ, and 1σ contours derived from NANOGrav. We find most of the BPs lie at 2σ@NANOGrav and the fit noticeably improves for larger values of n_T. 
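To make the above procedure concrete, the following Python sketch evaluates the blue-tilted spectrum Ω_GW(f), suppressed by an intermediate matter era, directly from the transfer-function fits and the power-law P_T(k) quoted above. It is only an illustration, not a reproduction of the benchmark points of Table <ref>: the dilution factor κ, the decay temperature T_Φ, and the reheating temperature T_R are treated as free inputs (in the text they follow from λ, g^', v_Φ and the RH neutrino mass), g_max is assumed to be 106.75, h ≃ 0.67, the (g_*s/106.75)^1/6 factors are dropped, and the spherical Bessel factor is replaced by its oscillation-averaged envelope.

import numpy as np

# Minimal sketch: blue-tilted inflationary GW spectrum with intermediate
# matter-era suppression, using the fitting formulas quoted in the text.
# All parameter values below are illustrative, not the Table benchmarks.
r, n_T, A_s, k_star = 0.06, 1.0, 2.0e-9, 0.01      # P_T(k) = r A_s (k/k_*)^{n_T}, k_* in Mpc^-1
H0, Omega_m, tau0 = 2.2e-4, 0.31, 1.4e4            # H0, tau0 in Mpc^-1 and Mpc (a_0 = 1)
g_star0, g_s0, g_max = 3.36, 3.91, 106.75          # g_max = 106.75 assumed (SM value)
kappa, T_Phi, T_R = 1.0e-6, 1.0, 1.0e10            # dilution factor; T_Phi, T_R in GeV (illustrative)

def g_eff(k, g_low):
    # tanh fit for the scale-dependent g_{*(s)}(T_{k,in}); k in Mpc^-1, f = k/2pi in Hz
    f_Hz = 1.55e-15 * k
    A = (-1.0 - 10.75/g_low) / (-1.0 + 10.75/g_low)
    B = (-1.0 - g_max/10.75) / (-1.0 + g_max/10.75)
    k1 = -2.5 * np.log10(f_Hz / 2.5e-12)
    k2 = -2.0 * np.log10(f_Hz / 6.0e-9)
    return g_low * (A + np.tanh(k1)) / (A + 1.0) * (B + np.tanh(k2)) / (B + 1.0)

# break scales in Mpc^-1; (g_*s/106.75)^{1/6} factors of order one are dropped here
k_eq  = 7.1e-2 * Omega_m * 0.67**2
k_Phi = 1.7e14 * (T_Phi / 1.0e7)
k_PhR = k_Phi * kappa**(2.0/3.0)
k_R   = 1.7e14 * kappa**(-1.0/3.0) * (T_R / 1.0e7)

T1sq = lambda z: 1.0 + 1.57*z + 3.42*z**2
T2sq = lambda z: 1.0 / (1.0 - 0.22*z**1.5 + 0.65*z**2)
T3sq = lambda z: 1.0 + 0.59*z + 0.65*z**2

def Omega_GW(k):
    # (3 j_1(k tau0)/(k tau0))^2 replaced by its oscillation average 9/(2 (k tau0)^4)
    F = Omega_m**2 * (g_eff(k, g_star0)/g_star0) \
        * (g_s0/g_eff(k, g_s0))**(4.0/3.0) * 4.5/(k*tau0)**4
    TT2 = F * T1sq(k/k_eq) * T2sq(k/k_Phi) * T3sq(k/k_PhR) * T2sq(k/k_R)
    return (1.0/12.0) * (k/H0)**2 * TT2 * r * A_s * (k/k_star)**n_T

f = np.logspace(-10, 3, 400)        # Hz
k = f / 1.55e-15                    # comoving wavenumber in Mpc^-1
for fi, og in zip(f[::100], Omega_GW(k)[::100]):
    print(f"f = {fi:8.1e} Hz   Omega_GW = {og:10.3e}")

Such a scan reproduces the qualitative double-peak structure discussed above: a suppressed plateau between the break frequencies set by κ, T_Φ and T_R, with the rising blue-tilted tails on either side.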
Furthermore, it is important to highlight that in order to successfully reconcile the NANOGrav data with the existing constraints at high frequencies from LIGO and BBN, while incorporating the appropriate entropy production, it becomes necessary to incorporate extremely high (B-L) symmetry-breaking scales, denoted as v_Φ, along with the presence of RH neutrinos at the GeV scale and relatively larger gauge coupling constant denoted as g^'. Hence, we can infer that even though the recent NANOGrav data does not align well with BGWs, even when considering a spectral index large as 1.12, the LSL mechanisms with a very high scale symmetry breaking and RH neutrino mass of 𝒪(GeV) still provide a potential explanation for the PTA data. Moreover, these LSL models can be strongly constrained by interferometers such as LIGO, which can provide restrictions on the high-frequency peak expected in these models. Thus, LSL models offer a well-constrained framework for further exploration. Another important point to mention is that for a GUT-motivated U(1)_B-L symmetry, cosmic strings can appear as topological defects after symmetry breaking which can make the spectrum flat in the middle (a non-trivial characteristic peak-plateau -peak signal <cit.>) and could make the LSL mechanisms distinct from any other BGW+intermediate matter domination scenarios. § DISCUSSION AND SUMMARY Recently, PTAs like, NANOGrav, EPTA in combination with InPTA, CPTA, and PPTA, have found strong evidence of a SGWB at nano-Hz frequencies. The anticipated Hellings-Downs inter-pulsar correlations support the detection. Although the most natural source of such GWs in this frequency range are supermassive black hole binaries, an exciting possibility could be GWs of primordial origin which fits well with the recent data. An inflationary gravitational wave background with a large tensor blue tilt is one of the possible candidates which is able to produce characteristic strain at the PTAs. While the standard slow-roll inflation models are unable to produce such GWs, many models beyond the standard one can make such background. One of the primary obstacles for blue-tilted GWs is to surpass BBN bound on the effective number of neutrino species, requiring a low reheating temperature. We show that a long-lived scalar field that makes the right-handed neutrino massive can produce a right-handed mass-dependent matter era, leading to entropy production before the BBN. When inflationary GWs with large blue-tilt encounter such a post-reheating scenario, they get suppressed and leave detectable characteristic spectral features spanning a wide range of frequencies. For RH neutrino masses 𝒪( GeV), a low-scale leptogenesis mechanism, leading to large entropy production, can bring down inflationary GWs with large blue-tilt with amplitude compatible with the one PTAs reported. In addition, such a scenario can be falsifiable at high-frequency GW detectors, such as LIGO, because low-scale leptogenesis creates a double peak GW spectrum, the first peak is at the nano frequencies, whereas, the second one is within the range of future LIGO run. ieeetr99 Chen:2021rqp S. Chen, R. N. Caballero, Y. J. Guo, A. Chalumeau, K. Liu, G. Shaifullah, K. J. Lee, S. Babak, G. Desvignes and A. Parthasarathy, et al. Mon. Not. Roy. Astron. Soc. 508 (2021) no.4, 4970-4993 doi:10.1093/mnras/stab2833 [arXiv:2110.13184 [astro-ph.HE]]. NANOGrav:2020bcs Z. Arzoumanian et al. [NANOGrav], Astrophys. J. Lett. 
905 (2020) no.2, L34 doi:10.3847/2041-8213/abd401 [arXiv:2009.04496 [astro-ph.HE]]. Goncharov:2021oub B. Goncharov, R. M. Shannon, D. J. Reardon, G. Hobbs, A. Zic, M. Bailes, M. Curylo, S. Dai, M. Kerr and M. E. Lower, et al. Astrophys. J. Lett. 917 (2021) no.2, L19 doi:10.3847/2041-8213/ac17f4 [arXiv:2107.12112 [astro-ph.HE]]. NANOGrav:2023gor G. Agazie et al. [NANOGrav], Astrophys. J. Lett. 951 (2023) no.1, L8 doi:10.3847/2041-8213/acdac6 [arXiv:2306.16213 [astro-ph.HE]]. Antoniadis:2023ott J. Antoniadis, P. Arumugam, S. Arumugam, S. Babak, M. Bagchi, A. S. B. Nielsen, C. G. Bassa, A. Bathula, A. Berthereau and M. Bonetti, et al. [arXiv:2306.16214 [astro-ph.HE]]. Reardon:2023gzh D. J. Reardon, A. Zic, R. M. Shannon, G. B. Hobbs, M. Bailes, V. Di Marco, A. Kapur, A. F. Rogers, E. Thrane and J. Askew, et al. Astrophys. J. Lett. 951 (2023) no.1, L6 doi:10.3847/2041-8213/acdd02 [arXiv:2306.16215 [astro-ph.HE]]. Xu:2023wog H. Xu, S. Chen, Y. Guo, J. Jiang, B. Wang, J. Xu, Z. Xue, R. N. Caballero, J. Yuan and Y. Xu, et al. Res. Astron. Astrophys. 23 (2023) no.7, 075024 doi:10.1088/1674-4527/acdfa5 [arXiv:2306.16216 [astro-ph.HE]]. Hellings:1983fr R. w. Hellings and G. s. Downs, Astrophys. J. Lett. 265 (1983), L39-L42 doi:10.1086/183954 Ellis:2023dgf J. Ellis, M. Fairbairn, G. Hütsi, J. Raidal, J. Urrutia, V. Vaskonen and H. Veermäe, [arXiv:2306.17021 [astro-ph.CO]]. Ellis:2023tsl J. Ellis, M. Lewicki, C. Lin and V. Vaskonen, [arXiv:2306.17147 [astro-ph.CO]]. Wang:2023len Z. Wang, L. Lei, H. Jiao, L. Feng and Y. Z. Fan, [arXiv:2306.17150 [astro-ph.HE]]. Kitajima:2023cek N. Kitajima, J. Lee, K. Murai, F. Takahashi and W. Yin, [arXiv:2306.17146 [hep-ph]]. Franciolini:2023pbf G. Franciolini, A. Iovino, Junior., V. Vaskonen and H. Veermae, [arXiv:2306.17149 [astro-ph.CO]]. Megias:2023kiy E. Megias, G. Nardini and M. Quiros, [arXiv:2306.17071 [hep-ph]]. Fujikura:2023lkn K. Fujikura, S. Girmohanta, Y. Nakai and M. Suzuki, [arXiv:2306.17086 [hep-ph]]. Han:2023olf C. Han, K. P. Xie, J. M. Yang and M. Zhang, [arXiv:2306.16966 [hep-ph]]. Zu:2023olm L. Zu, C. Zhang, Y. Y. Li, Y. C. Gu, Y. L. S. Tsai and Y. Z. Fan, [arXiv:2306.16769 [astro-ph.HE]]. Yang:2023aak J. Yang, N. Xie and F. P. Huang, [arXiv:2306.17113 [hep-ph]]. Guo:2023hyp S. Y. Guo, M. Khlopov, X. Liu, L. Wu, Y. Wu and B. Zhu, [arXiv:2306.17022 [hep-ph]]. Ghoshal:2023fhh A. Ghoshal and A. Strumia, [arXiv:2306.17158 [astro-ph.CO]]. Shen:2023pan Z. Q. Shen, G. W. Yuan, Y. Y. Wang and Y. Z. Wang, [arXiv:2306.17143 [astro-ph.HE]]. Franciolini:2023wjm G. Franciolini, D. Racco and F. Rompineve, [arXiv:2306.17136 [astro-ph.CO]]. Lambiase:2023pxd G. Lambiase, L. Mastrototaro and L. Visinelli, [arXiv:2306.16977 [astro-ph.HE]]. Li:2023yaj Y. Li, C. Zhang, Z. Wang, M. Cui, Y. L. S. Tsai, Q. Yuan and Y. Z. Fan, [arXiv:2306.17124 [astro-ph.HE]]. Blasi:2020mfx S. Blasi, V. Brdar and K. Schmitz, Phys. Rev. Lett. 126, no.4, 041305 (2021) doi:10.1103/PhysRevLett.126.041305 [arXiv:2009.06607 [astro-ph.CO]]. Ellis:2020ena J. Ellis and M. Lewicki, Phys. Rev. Lett. 126, no.4, 041304 (2021) doi:10.1103/PhysRevLett.126.041304 [arXiv:2009.06555 [astro-ph.CO]]. rfit1 R. Samanta and S. Datta, JHEP 05, 211 (2021) doi:10.1007/JHEP05(2021)211 [arXiv:2009.13452 [hep-ph]]. rfit2 S. Datta, A. Ghosal and R. Samanta, JCAP 08, 021 (2021) doi:10.1088/1475-7516/2021/08/021 [arXiv:2012.14981 [hep-ph]]. rfit3 R. Samanta and F. R. Urban, JCAP 06, no.06, 017 (2022) doi:10.1088/1475-7516/2022/06/017 [arXiv:2112.04836 [hep-ph]]. rfit4 D. Borah, S. Jyoti Das, R. Samanta and F. R. 
Urban, JHEP 03, 127 (2023) doi:10.1007/JHEP03(2023)127 [arXiv:2211.15726 [hep-ph]]. lep2 A. Riotto and M. Trodden, Ann. Rev. Nucl. Part. Sci. 49, 35 (1999). lep3 A. Pilaftsis and T. E. J. Underwood, Nucl. Phys. B 692, 303 (2004). lep4 W. Buchmuller, P. Di Bari and M. Plumacher, Annals Phys. 315, 305 (2005). lep5 S. Davidson, E. Nardi and Y. Nir, Phys. Rept. 466, 105 (2008). lep6 D. Bodeker and W. Buchmuller, arXiv:2009.07294 [hep-ph]. lep7 P. Di Bari, [arXiv:2107.13750 [hep-ph]]. lep8 S. Davidson and A. Ibarra, Phys. Lett. B 535, 25-32 (2002). lep9 E. K. Akhmedov, V. A. Rubakov and A. Y. Smirnov, Phys. Rev. Lett. 81, 1359 (1998). lep10 A. Pilaftsis and T. E. J. Underwood, Nucl. Phys. B 692, 303-345 (2004). lep11 T. Hambye and D. Teresi, Phys. Rev. Lett. 117, no. 9, 091801 (2016). lep12 P. S. Bhupal Dev, P. Millington, A. Pilaftsis and D. Teresi, Nucl. Phys. B 886 (2014) 569. nat1 F. Vissani, Phys. Rev. D 57, 7027-7030 (1998) doi:10.1103/PhysRevD.57.7027 [arXiv:hep-ph/9709409 [hep-ph]]. nat2 J. D. Clarke, R. Foot and R. R. Volkas, Phys. Rev. D 91, no.7, 073009 (2015) doi:10.1103/PhysRevD.91.073009 [arXiv:1502.01352 [hep-ph]]. Datta:2022tab S. Datta and R. Samanta, JHEP 11 (2022), 159 doi:10.1007/JHEP11(2022)159 [arXiv:2208.09949 [hep-ph]]. inf3 M. C. Guzzetti, N. Bartolo, M. Liguori and S. Matarrese, Riv. Nuovo Cim. 39, no.9, 399-495 (2016) doi:10.1393/ncr/i2016-10127-1 [arXiv:1605.01615 [astro-ph.CO]]. bgw1 A. Gruzinov, Phys. Rev. D 70, 063518 (2004) doi:10.1103/PhysRevD.70.063518 [arXiv:astro-ph/0404548 [astro-ph]]. bgw2 T. Kobayashi, M. Yamaguchi and J. Yokoyama, Phys. Rev. Lett. 105, 231302 (2010) doi:10.1103/PhysRevLett.105.231302 [arXiv:1008.0603 [hep-th]]. bgw3 S. Endlich, A. Nicolis and J. Wang, JCAP 10, 011 (2013) doi:10.1088/1475-7516/2013/10/011 [arXiv:1210.0569 [hep-th]]. bgw4 D. Cannone, G. Tasinato and D. Wands, JCAP 01, 029 (2015) doi:10.1088/1475-7516/2015/01/029 [arXiv:1409.6568 [astro-ph.CO]]. bgw5 A. Ricciardone and G. Tasinato, Phys. Rev. D 96, no.2, 023508 (2017) doi:10.1103/PhysRevD.96.023508 [arXiv:1611.04516 [astro-ph.CO]]. bgw6 Y. F. Cai, J. O. Gong, S. Pi, E. N. Saridakis and S. Y. Wu, Nucl. Phys. B 900, 517-532 (2015) doi:10.1016/j.nuclphysb.2015.09.025 [arXiv:1412.7241 [hep-th]]. bgw7 T. Fujita, S. Kuroyanagi, S. Mizuno and S. Mukohyama, Phys. Lett. B 789, 215-219 (2019) doi:10.1016/j.physletb.2018.12.025 [arXiv:1808.02381 [gr-qc]]. bgw8 Y. Mishima and T. Kobayashi, Phys. Rev. D 101, no.4, 043536 (2020) doi:10.1103/PhysRevD.101.043536 [arXiv:1911.02143 [gr-qc]]. bn1 S. Vagnozzi, Mon. Not. Roy. Astron. Soc. 502, no.1, L11-L15 (2021) doi:10.1093/mnrasl/slaa203 [arXiv:2009.13432 [astro-ph.CO]]. bn2 S. Bhattacharya, S. Mohanty and P. Parashari, Phys. Rev. D 103, no.6, 063532 (2021) doi:10.1103/PhysRevD.103.063532 [arXiv:2010.05071 [astro-ph.CO]]. bn3 S. Kuroyanagi, T. Takahashi and S. Yokoyama, JCAP 01, 071 (2021) doi:10.1088/1475-7516/2021/01/071 [arXiv:2011.03323 [astro-ph.CO]]. bn4 M. Benetti, L. L. Graef and S. Vagnozzi, Phys. Rev. D 105, no.4, 043520 (2022) doi:10.1103/PhysRevD.105.043520 [arXiv:2111.04758 [astro-ph.CO]]. Vagnozzi:2023lwo S. Vagnozzi, [arXiv:2306.16912 [astro-ph.CO]]. NANOGrav:2023hvm A. Afzal et al. [NANOGrav], Astrophys. J. Lett. 951 (2023) no.1, L11 doi:10.3847/2041-8213/acdc91 [arXiv:2306.16219 [astro-ph.HE]]. KAGRA:2021kbb R. Abbott et al. [KAGRA, Virgo and LIGO Scientific], Phys. Rev. D 104 (2021) no.2, 022004 doi:10.1103/PhysRevD.104.022004 [arXiv:2101.12130 [gr-qc]]. Peimbert:2016bdg A. Peimbert, M. Peimbert and V. Luridiana, Rev. Mex. 
Astron. Astrofis. 52, no.2, 419-424 (2016) [arXiv:1608.02062 [astro-ph.CO]]. LIGOScientific:2016jlg B. P. Abbott et al. [LIGO Scientific and Virgo], Phys. Rev. Lett. 118, no.12, 121101 (2017) [erratum: Phys. Rev. Lett. 119, no.2, 029901 (2017)] doi:10.1103/PhysRevLett.118.121101 [arXiv:1612.02029 [gr-qc]]. bml1 A. Davidson, Phys. Rev. D 20, 776 (1979) doi:10.1103/PhysRevD.20.776 bml2 R. E. Marshak and R. N. Mohapatra, Phys. Lett. B 91, 222-224 (1980) doi:10.1016/0370-2693(80)90436-0 bml3 R. N. Mohapatra and R. E. Marshak, Phys. Rev. Lett. 44, 1316-1319 (1980) [erratum: Phys. Rev. Lett. 44, 1643 (1980)] doi:10.1103/PhysRevLett.44.1316 bml4 W. Buchmüller, V. Domcke, K. Kamada and K. Schmitz, JCAP 10, 003 (2013) doi:10.1088/1475-7516/2013/10/003 [arXiv:1305.3392 [hep-ph]]. bml5 W. Buchmuller, V. Domcke, H. Murayama and K. Schmitz, Phys. Lett. B 809, 135764 (2020) doi:10.1016/j.physletb.2020.135764 [arXiv:1912.03695 [hep-ph]]. Linde:1978px A. D. Linde, Rept. Prog. Phys. 42, 389 (1979) doi:10.1088/0034-4885/42/3/001 Kibble:1980mv T. W. B. Kibble, Phys. Rept. 67, 183 (1980) doi:10.1016/0370-1573(80)90091-5 Quiros:1999jp M. Quiros, [arXiv:hep-ph/9901312 [hep-ph]]. Caprini:2015zlo C. Caprini, M. Hindmarsh, S. Huber, T. Konstandin, J. Kozaczuk, G. Nardini, J. M. No, A. Petiteau, P. Schwaller and G. Servant, et al. JCAP 04, 001 (2016) doi:10.1088/1475-7516/2016/04/001 [arXiv:1512.06239 [astro-ph.CO]]. Hindmarsh:2020hop M. B. Hindmarsh, M. Lüben, J. Lumma and M. Pauly, SciPost Phys. Lect. Notes 24, 1 (2021) doi:10.21468/SciPostPhysLectNotes.24 [arXiv:2008.09136 [astro-ph.CO]]. Megevand:2016lpr A. Megevand and S. Ramirez, Nucl. Phys. B 919 (2017), 74-109 doi:10.1016/j.nuclphysb.2017.03.009 [arXiv:1611.05853 [astro-ph.CO]]. Weinberg:2003ur S. Weinberg, Phys. Rev. D 69, 023503 (2004) doi:10.1103/PhysRevD.69.023503 [arXiv:astro-ph/0306304 [astro-ph]]. Zhao:2009we W. Zhao, Y. Zhang and T. Xia, Phys. Lett. B 677, 235-238 (2009) doi:10.1016/j.physletb.2009.05.046 [arXiv:0905.3223 [astro-ph.CO]]. WMAP:2006rnx L. Page et al. [WMAP], Astrophys. J. Suppl. 170, 335 (2007) doi:10.1086/513699 [arXiv:astro-ph/0603450 [astro-ph]]. BICEP2:2018kqh P. A. R. Ade et al. [BICEP2 and Keck Array], Phys. Rev. Lett. 121, 221301 (2018) doi:10.1103/PhysRevLett.121.221301 [arXiv:1810.05216 [astro-ph.CO]]. Liddle:1993fq A. R. Liddle and D. H. Lyth, Phys. Rept. 231, 1-105 (1993) doi:10.1016/0370-1573(93)90114-S [arXiv:astro-ph/9303019 [astro-ph]]. Kuroyanagi:2011iw S. Kuroyanagi and T. Takahashi, JCAP 10, 006 (2011) doi:10.1088/1475-7516/2011/10/006 [arXiv:1106.3437 [astro-ph.CO]]. t1 N. Seto and J. Yokoyama, J. Phys. Soc. Jap. 72, 3082-3086 (2003) doi:10.1143/JPSJ.72.3082 [arXiv:gr-qc/0305096 [gr-qc]]. t2 L. A. Boyle and P. J. Steinhardt, Phys. Rev. D 77, 063504 (2008) doi:10.1103/PhysRevD.77.063504 [arXiv:astro-ph/0512014 [astro-ph]]. t3 K. Nakayama, S. Saito, Y. Suwa and J. Yokoyama, JCAP 06, 020 (2008) doi:10.1088/1475-7516/2008/06/020 [arXiv:0804.1827 [astro-ph]]. t4 S. Kuroyanagi, T. Chiba and N. Sugiyama, Phys. Rev. D 79, 103501 (2009) doi:10.1103/PhysRevD.79.103501 [arXiv:0804.3249 [astro-ph]]. t5 K. Nakayama and J. Yokoyama, JCAP 01, 010 (2010) doi:10.1088/1475-7516/2010/01/010 [arXiv:0910.0715 [astro-ph.CO]]. t6 S. Kuroyanagi, T. Takahashi and S. Yokoyama, JCAP 02, 003 (2015) doi:10.1088/1475-7516/2015/02/003 [arXiv:1407.4785 [astro-ph.CO]]. gs1 Y. Watanabe and E. Komatsu, Phys. Rev. D 73, 123515 (2006) doi:10.1103/PhysRevD.73.123515 [arXiv:astro-ph/0604176 [astro-ph]]. gs2 K. Saikawa and S. 
Shirai, JCAP 05, 035 (2018) doi:10.1088/1475-7516/2018/05/035 [arXiv:1803.01038 [hep-ph]].
http://arxiv.org/abs/2307.02958v1
20230706124245
Classification and magic magnetic-field directions for spin-orbit-coupled double quantum dots
[ "Aritra Sen", "György Frank", "Baksa Kolok", "Jeroen Danon", "András Pályi" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "quant-ph" ]
Department of Theoretical Physics, Institute of Physics, Budapest University of Technology and Economics, Műegyetem rkp. 3., H-1111 Budapest, Hungary Department of Theoretical Physics, Institute of Physics, Budapest University of Technology and Economics, Műegyetem rkp. 3., H-1111 Budapest, Hungary Department of Theoretical Physics, Institute of Physics, Budapest University of Technology and Economics, Műegyetem rkp. 3., H-1111 Budapest, Hungary Department of Physics, Norwegian University of Science and Technology, NO-7491 Trondheim, Norway Department of Theoretical Physics, Institute of Physics, Budapest University of Technology and Economics, Műegyetem rkp. 3., H-1111 Budapest, Hungary MTA-BME Quantum Dynamics and Correlations Research Group, Müegyetem rkp. 3., H-1111 Budapest, Hungary The spin of a single electron confined in a semiconductor quantum dot is a natural qubit candidate. Fundamental building blocks of spin-based quantum computing have been demonstrated in double quantum dots with significant spin-orbit coupling. Here, we show that spin-orbit-coupled double quantum dots can be categorised in six classes, according to a partitioning of the multi-dimensional space of their g-tensors. The class determines physical characteristics of the double dot, i.e., features in transport, spectroscopy and coherence measurements, as well as qubit control, shuttling, and readout experiments. In particular, we predict that the spin physics is highly simplified due to pseudospin conservation, whenever the external magnetic field is pointing to special directions (`magic directions'), where the number of special directions is determined by the class. We also analyze the existence and relevance of magic loops in the space of magnetic-field directions, corresponding to equal local Zeeman splittings. These results present an important step toward precise interpretation and efficient design of spin-based quantum computing experiments in materials with strong spin-orbit coupling. Classification and magic magnetic-field directions for spin-orbit-coupled double quantum dots András Pályi ============================================================================================== § INTRODUCTION Double quantum dots (DQDs) are workhorses in the experimental exploration of quantum computing with electron spins.<cit.> DQDs allowed spin qubit initialization and readout in early experiments based on the Pauli blockade transport effect<cit.>. Since then, numerous experimental demonstrations of single- and two-qubit gates <cit.>, qubit readout <cit.>, qubit shuttling <cit.> and few-qubit quantum processors <cit.> have been completed. Spin-orbit interaction often plays a pronounced role in the physical properties of double quantum dots. This is the case, for example, for electrons and holes in III-V semiconductors such as InAs and InSb, or holes in group-IV semiconductors such as Si or Ge. Spin-orbit interaction can be an asset or nuisance; for example, it enables coherent electrical spin control <cit.>, but also contributes to decoherence<cit.>. Hence, understanding spin-orbit-related features and opportunities is of great importance for spin-based quantum computing. In this work, we classify spin-orbit-coupled double quantum dots (DQD) into six different classes according to their g-tensors, see Fig. <ref>. The classification is conveniently carried out in a gauge of `pseudospin-conserving tunneling'. 
In such a gauge, the classification is based on the combined g-tensor M = g_R^-1g_L constructed from the g-tensors g_L and g_R of the two dots. In fact, the classification is defined by the eigenvalue structure of the combined g-tensor M, i.e., how many of its three eigenvalues are positive, negative, or complex. We show that the eigenvectors of M associated to positive or negative eigenvalues specify special, `magic' magnetic-field directions. Directing the magnetic field along these magic directions, a conserved pseudospin can be defined, yielding a major simplification of qubit dynamics. We highlight pronounced physical features associated to these magic magnetic-field directions: (i) spectral crossings in the magnetic-field-dependent and detuning-dependent DQD spectrum, observable via microwave spectroscopy, via pronounced features in quantum capacitance, or via a finite-magnetic-field Kondo effect, (ii) prolonged spin relaxation time (relaxation sweet spot), and (iii) high-fidelity qubit shuttling. We also discuss related features for two-electron DQDs. Importantly, two of our classes (Class A and D) are compatible with the widely used concept of a single `spin-orbit field direction'<cit.>, while the other four classes are incompatible with the latter and hence imply novel, unusual phenomenology as the magnetic-field parameter space is explored. We also analyze the existence of magic loops in the space of magnetic-field directions, corresponding to equal local Zeeman splittings. These directions correspond to stopping points in Pauli spin blockade in two-electron DQDs, and provide dephasing sweet spots in single-electron DQDs. The rest of this paper is structured as follows. In Sec. <ref>, we introduce our parametrized model for the spin-orbit-coupled DQD, and transform it for convenience to a specific gauge (`gauge of pseudospin-conserving tunneling'). In Sec. <ref>, we provide a classification of our DQD model family, i.e., a partitioning of the parameter space based on the eigenvalue structure of the combined g-tensor M = g^-1_R g_L. In Secs. <ref> and <ref>, we discuss physical features associated to the magic magnetic-field directions in a DQD with a single electron (with two electrons). In Sec. <ref>, we describe how transitions between the different classes can occur as the g-tensors are changed by, e.g., tuning the gate voltages of the DQD. In Sec. <ref>, we analyze magic loops, i.e., magnetic-field directions where the Zeeman splittings in the two dots are equal. Finally, we conclude in Sec. <ref>. § HAMILTONIAN FOR A SPIN-ORBIT-COUPLED DOUBLE QUANTUM DOT We start with a frequently used phenomenological 4 × 4 model Hamiltonian describing a single electron (or hole) in a spin-orbit-coupled DQD. This Hamiltonian acts on the Hilbert space spanned by the local Kramers basis states in the two dots, |*⟩L ⇑̃, |*⟩L ⇓̃, |*⟩R ⇑̃, and |*⟩R ⇓̃, where L and R refer to the two dots, and the arrows refer to local pseudospin basis states that form a local Kramers pair in each dot. In particular, for the local basis states it holds that they are related by the time reversal operator 𝒯, e.g., |L ⇓̃⟩ = 𝒯|L ⇑̃⟩. For a single quantum dot with spatial symmetries, those spatial symmetries imply a natural choice for the Kramers-pair basis <cit.>; however, here we consider double quantum dots, and assume that all spatial symmetries are broken by the nanostructured environment (e.g., gates, leads), which motivates to use generic Kramers pairs as described above. 
In this basis, our Hamiltonian reads as: H = H_os + H_t + H_Z, H_os = ϵ/2τ_z, H_t = t̃_0 τ_x + t̃⃗̃·σ̃⊗τ_y, H_Z = 1/2μ_B (σ̃_L·g̃_LB⃗ + σ̃_R·g̃_RB⃗), where H_os, H_t, H_Z are on-site, tunneling and Zeeman terms, respectively. The vector σ̃=(σ̃_x,σ̃_y,σ̃_z) is the vector of Pauli matrices acting on the local Kramers bases on the two dots, e.g., σ̃_z = *⇑̃ - *⇓̃. The vector τ = (τ_x,τ_y,τ_z) is the vector of Pauli matrices acting on the orbital degree of freedom, e.g., τ_z = *L - *R. The vector σ̃_L consists of components such as σ_z ⊗ (1+τ_z)/2, etc. Furthermore, ϵ denotes the on-site energy detuning between the two dots. The pseudospin-conserving tunneling amplitude is denoted by t̃_0, and t̃⃗̃=(t̃_x,t̃_y,t̃_z) is the vector of pseudospin-non-conserving tunneling amplitudes. This tunneling Hamiltonian respects time-reversal symmetry <cit.>. In the Zeeman term H_Z, the Bohr magneton is denoted as μ_B, whereas g̃_L and g̃_R are the g-tensors of the two dots, and B⃗ is the external magnetic field. Note that the matrix elements of the g-tensors depend on the gauge choice, i.e., the choice of the local Kramers-pair basis, which we have not yet specified.<cit.> In a generic gauge, the g-tensors are real matrices, but they are not necessarily symmetric. For convenience, we convert the Hamiltonian above to a gauge that we refer to as the `gauge of pseudospin-conserving tunneling'. This is done by a local change of the Kramers basis in one of the dots, say, the right one, i.e., |*⟩R ⇑ = W |*⟩R ⇑̃ and |*⟩R ⇓ = W |*⟩R ⇓̃, where W is a 2× 2 special unitary matrix. An appropriately chosen basis change W yields (see App. <ref> for details) the following transformed Hamiltonian: H = ϵ/2τ_z + t_0 τ_x + 1/2μ_B (σ_L· g_LB⃗ + σ_R· g_RB⃗). As a result of the basis change on dot R, the corresponding g-tensor has been rotated such that g_R = R g̃_R. On the other hand, the g-tensor of dot L is unchanged, g_L = g̃_L. The Hamiltonian in Eq. (<ref>) is illustrated in Fig. <ref>. In what follows, we will refer to g_LB⃗ and g_RB⃗ as the internal Zeeman fields. We emphasize that in this gauge, all effects of spin-orbit interaction are incorporated in the two effective g-tensors, and the interdot tunneling term is pseudospin-conserving. Before analyzing Hamiltonian (<ref>), let us discuss a few experimental observations regarding g-tensors in DQDs. Based on, e.g., Refs. NadjPerge2012PRL,Crippa2018,Scherubl2019,Jirovec2022PRL, HendrickxIBM, g-tensor principal values in semiconductor DQDs range between 0.05 to 30, and the principal axis might<cit.> or might not<cit.> be correlated with the device geometry. Furthermore, in Ref. Jirovec2022PRL, a planar Ge hole DQD was studied, with the conclusion that the g-factors in the out-of-plane direction have the same sign on the two dots, whereas they exhibit opposite signs in a certain in-plane direction. This anticipates that in DQDs with strong spin-orbit interaction, g-tensors can have a rich variety, including strong anisotropy, large g-tensor difference between the two dots, and even different signs of the two g-tensor determinants are possible. From now on, we take these features as our motivating starting point, and discuss potential scenarios arising from this rich variety of g-tensor configurations on a conceptual level. In this work, we suppress further material-specific considerations, e.g., based on real-space models of strong spin-orbit interaction. Such considerations are important steps to be taken in future work. 
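For concreteness, the Hamiltonian of Eq. (<ref>) in the gauge of pseudospin-conserving tunneling is straightforward to construct numerically. The short Python sketch below builds the 4×4 matrix in the basis {|L⇑⟩, |L⇓⟩, |R⇑⟩, |R⇓⟩} and diagonalizes it along a detuning sweep. The inputs are illustrative placeholders rather than parameters of a specific device: the g-tensors are taken diagonal, with the principal values of the class-F example used later in the text, and the field, tunneling amplitude, and detuning range are chosen only to put the Zeeman and tunneling energies on comparable scales.

import numpy as np

# Minimal sketch: 4x4 single-electron DQD Hamiltonian in the
# pseudospin-conserving gauge, H = (eps/2) tau_z + t0 tau_x
#   + (muB/2)(sigma_L . g_L B + sigma_R . g_R B),
# written in the basis {|L up>, |L down>, |R up>, |R down>}.
muB = 57.88e-6                                  # Bohr magneton in eV/T
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s0 = np.eye(2, dtype=complex)
PL = np.diag([1.0, 0.0])                        # orbital projector onto dot L
PR = np.diag([0.0, 1.0])                        # orbital projector onto dot R

def h_dqd(eps, t0, gL, gR, B):
    """eps, t0 in eV; gL, gR are 3x3 g-tensors; B is the field in Tesla (3-vector)."""
    tau_z = np.kron(PL - PR, s0)
    tau_x = np.kron(np.array([[0, 1], [1, 0]]), s0)
    bL, bR = gL @ B, gR @ B                     # internal Zeeman fields g_L B, g_R B
    zeeman = 0.5 * muB * (np.kron(PL, bL[0]*sx + bL[1]*sy + bL[2]*sz)
                          + np.kron(PR, bR[0]*sx + bR[1]*sy + bR[2]*sz))
    return 0.5 * eps * tau_z + t0 * tau_x + zeeman

# illustrative example: detuning sweep with B along y (a magic direction with
# negative eigenvalue for these g-tensors), t0 = 20 ueV, B = 0.1 T
gL = np.diag([6.0, -4.0, 5.0])
gR = np.diag([3.0,  5.0, 2.0])
B  = 0.1 * np.array([0.0, 1.0, 0.0])
for eps in np.linspace(-2e-4, 2e-4, 5):
    E = np.linalg.eigvalsh(h_dqd(eps, 2e-5, gL, gR, B))
    print(f"eps = {eps:+.1e} eV  ->  levels (eV): {np.round(E, 7)}")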
§ MAGIC MAGNETIC FIELD DIRECTIONS AND THE CLASSIFICATION OF THE COMBINED G-TENSOR Equation (<ref>) describes a Hamiltonian family parametrized by 20 parameters, out of which 18 describe the two g-tensors. We now classify this Hamiltonian family into six classes. The classification is based on the two g-tensors. In particular, it is based on the physical intuition that there might be special (`magic') magnetic-field directions such that the internal Zeeman fields g_LB⃗ and g_RB⃗ in the two dots are parallel. If the magnetic field is pointing to such a magic direction, then the pseudospin (more precisely, its projection on the internal Zeeman field direction) is conserved, leading to a major simplification of the spectral and dynamical properties, as discussed below. For which magnetic-field directions are the internal Zeeman fields g_LB⃗ and g_RB⃗ parallel to each other? They are parallel <cit.>, i.e., g_LB⃗∥ g_RB⃗, if it holds that g_R^-1 g_LB⃗∥B⃗. This holds if B⃗ is a (right) eigenvector of the combined g-tensor M = g_R^-1 g_L, that is, M B⃗ = λB⃗. In fact, the internal Zeeman fields are aligned (anti-aligned), if B⃗ is an eigenvector of M corresponding to a positive (negative) eigenvalue. We call the eigenvectors of M corresponding to real eigenvalues as magic magnetic field directions. The above observation implies that the spin-orbit-coupled double quantum dots characterised by the Hamiltonian of Eq. (<ref>) can be categorized into six classes, as shown in Fig. <ref>(b). (i) If det M >0, i.e., if the determinants of the two g-tensors have the same sign, then there are three classes, to be denoted by A (+,c,c), B (+,-,-), and C (+,+,+). Here, + stands for a positive eigenvalue, - stands for a negative eigenvalue, and c stands for a complex (non-real) eigenvalue of the M matrix. (ii) If det M < 0, i.e., if the determinants of the two g-tensors have opposite signs, then there are three further classes: D (-,c,c), E (-,-,-), and F (+,+,-). We illustrate these classes in Fig. <ref> by plotting the eigenvalues of M, computed for representative g-tensor examples. Our conclusion so far is that spin-orbit-coupled double quantum dots can be classified through the eigenvalue characteristics of the combined g-tensor M. The number of positive, negative and complex eigenvalues of M varies as we move between the classes. For each real eigenvalue of M, there is a magic magnetic-field direction where pseudospin is conserved. Below, we show that the DQD's physical properties depend markedly on the sign of the eigenvalue corresponding to the magic direction, i.e., the case of aligned internal Zeeman fields is accompanied by different physical consequences than the case of anti-aligned internal Zeeman fields. We also note that in the above classification, we implicitly assumed that the g-tensors are invertible, i.e., all eigenvalues are nonzero. Nevertheless, our classification is satisfactory in the sense that g-tensors with a zero eigenvalue form a zero-measure set within the space of g-tensors. Also, our classification has a certain `robustness' or `stability': given that a Hamiltonian is in a certain class, then its perturbation cannot change the class as long as the perturbation is sufficiently weak. Transitions between different classes upon continuous perturbations are discussed in Sec. <ref> below. Finally, we remark that two of our classes (Class A and D) are compatible with the widely used concept of a single `spin-orbit field direction'<cit.>. 
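As a minimal numerical illustration of the classification procedure just described, the following Python sketch forms the combined g-tensor M = g_R^-1 g_L, counts its positive, negative, and complex eigenvalues, assigns the corresponding class label A–F, and returns the magic magnetic-field directions as the real eigenvectors. The example g-tensors are the diagonal class-F values quoted later in the text (assumed diagonal here); they are illustrative only.

import numpy as np

# Sketch of the classification: eigenvalue structure of M = g_R^{-1} g_L
# determines the class A-F and the magic field directions (real eigenvectors).
LABELS = {   # (sign of det M, # positive, # negative, # complex) -> class
    (+1, 1, 0, 2): "A", (+1, 1, 2, 0): "B", (+1, 3, 0, 0): "C",
    (-1, 0, 1, 2): "D", (-1, 0, 3, 0): "E", (-1, 2, 1, 0): "F",
}

def classify(gL, gR, tol=1e-9):
    M = np.linalg.inv(gR) @ gL
    vals, vecs = np.linalg.eig(M)
    real = np.abs(vals.imag) < tol
    n_pos = int(np.sum(real & (vals.real > 0)))
    n_neg = int(np.sum(real & (vals.real < 0)))
    n_cpx = int(np.sum(~real))
    key = (int(np.sign(np.linalg.det(M))), n_pos, n_neg, n_cpx)
    magic = [np.real(vecs[:, i]) / np.linalg.norm(vecs[:, i])
             for i in range(3) if real[i]]       # magic magnetic-field directions
    return LABELS.get(key, "?"), vals, magic

# class-F example used later in the text (g-tensors assumed diagonal)
gL = np.diag([6.0, -4.0, 5.0])
gR = np.diag([3.0,  5.0, 2.0])
label, vals, magic = classify(gL, gR)
print("class:", label)                           # expected: F
print("eigenvalues of M:", np.round(vals, 3))
print("magic directions:", [np.round(m, 3) for m in magic])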
The other four classes (B, C, E, F), where the number of magic directions is greater than one, are incompatible with a single spin-orbit field direction. § SINGLE-ELECTRON EFFECTS WITH MAGIC MAGNETIC-FIELD DIRECTIONS In what follows, we highlight the role of the magic magnetic-field directions in determining physical properties. In this section, we focus on the properties of spin-orbit-coupled DQDs hosting a single electron. §.§ Robust spectral degeneracies First, we describe single-electron spectral degeneracies that appear when the magnetic field points to a magic direction. These are illustrated in Fig. <ref>, where we show four energy spectra, plotted as a function of the on-site energy detuning ϵ. Figure <ref>a shows four energy levels (bands) as a function of detuning, for a weak magnetic field, such that the Zeeman energy is much smaller than the tunneling amplitude. In this case, there are no degeneracies in the spectrum. Note that the tunneling amplitude sets the gap between the bonding (lower-energy) and antibonding (higher-energy) bands at zero magnetic field. At Zeeman energies higher than the tunneling amplitude t_0, we see band anticrossings or crossings, depending on the external magnetic field direction. In Fig. <ref>(b) we show the detuning-dependent spectrum for a generic (non-magic) magnetic field direction, where the bands exhibit four anticrossings, i.e., there are no spectral degeneracies. Spectral degeneracies are associated with the magic magnetic-field directions, as shown in Fig. <ref>(c,d). In Fig. <ref>(c), the magnetic field points to a magic direction of a positive eigenvalue. In this case, two band crossing points between the bonding band of the high-energy pseudospin and the antibonding band of the low-energy pseudospin are present. The reason for the presence of these spectral crossing points is that the 4× 4 Hamiltonian now separates into two uncoupled pseudospin sectors, due to pseudospin conservation, which in turn is the consequence of the magnetic field pointing to a magic direction. In Fig. <ref>(d), the magnetic field points to a magic direction of a negative eigenvalue. In this case, there is a crossing point between the two bonding bands, and there is a crossing point between the two antibonding bands. Again, the crossings arise due to pseudospin conservation. We emphasize that the sign of the eigenvalue (corresponding to the magic direction along which the magnetic field is applied) determines which pair(s) of bands cross. Remarkably, these spectral degeneracies are robust in the following sense. If the g-tensors suffer a small perturbation, e.g., due to a small change of the voltages of the confinement gates, then the eigenvalue characteristics (number of positive, negative, complex eigenvalues) of the combined g-tensor M remain unchanged, albeit that the eigenvectors and eigenvalues of M do suffer a small change. This means that the magic directions change a bit, but a small adjustment of the magnetic field to align with the new magic direction is sufficient to reinstate the degeneracy points in the detuning-dependent spectrum. This robustness of the degeneracy points is often phrased as topological protection <cit.>, and it is a direct consequence of the fact that the subset (`stratum') of matrices with a twofold eigenvalue degeneracy has a codimension of 3 in the space of Hermitian matrices <cit.>.
In fact, we can consider a three-dimensional parameter space formed by the detuning ϵ, and the polar and azimuthal angles θ and ϕ that characterize the direction of the magnetic field. In that three-dimensional parameter space, one can associate a topological invariant to the degeneracy point, which is often called the Chern number <cit.> or the local degree <cit.>. For Hermitian matrices parameterized by three parameters (such as our Hamiltonian), (i) band crossings arise generically, (ii) the value of the Chern number associated to a generic band crossing is ± 1, and (iii) such band crossings are robust against small changes of further control parameters (e.g., the elements of the g-tensors, or the magnetic field strength, in our physical setup). We have computed the Chern number for the band crossings shown in Fig. <ref>, and indeed found ± 1, confirming the topological protection of these degeneracy points. A natural question is: How to perform the classification experimentally? I.e., given an experimental setup with a tuned-up single-electron double quantum dot in a material with strong spin-orbit coupling, how could an experiment find out the eigenvalue class corresponding to that setup? (1) A natural idea is to use spectroscopy based on electron spin resonance<cit.> or electrically driven spin resonance<cit.>, such that the magnetic field strength and direction is scanned. In principle, these techniques provide access to all spectral gaps as functions of detuning and magnetic field, and hence are suited to locate the spectral degeneracies in the parameter space, e.g., in the space of ϵ, θ and ϕ. On the one hand, the number of degeneracy points found between the lowest two energy levels is equal to the number of degeneracy points between the highest two energy levels, and this number is also the number of negative eigenvalues of M. On the other hand, the number of degeneracy points between the first and second excited levels implies the number of positive eigenvalues of M, hence completing the experimental classification. (2) Besides resonant mapping of the energy gaps via the spectroscopic techniques described above, the magic directions belonging to the negative eigenvalues of the combined g-tensor can also be found using simpler techniques sensitive to the ground state only. A ground-state degeneracy point, such as the one depicted in Fig. <ref>(c), is often detected via cotunneling spectroscopy <cit.>. Moreover, at low enough temperature this degeneracy causes a Kondo effect at finite magnetic field <cit.>. Finally, the ground-state degeneracy point of Fig. <ref>(c) leads to characteristic features of the quantum capacitance, e.g., the suppression of the latter<cit.> compared to the quantum capacitance induced by an anticrossing. This quantum capacitance suppression can be detected as a function of the magnetic field direction, along the lines of the experiment of Ref. <cit.>, revealing the magic direction belonging to the negative eigenvalue. We discuss this effect further in App. <ref>. §.§ Relaxation sweet spot A further physical consequence of setting the magnetic field in a magic direction is an increased spin relaxation time. That is, the magic direction provides a spin relaxation sweet spot in the parameter space of magnetic-field directions. The description of this feature is as follows. In a spin-orbit-coupled double quantum dot, a key mechanism of spin relaxation is detuning noise. 
Electric fluctuations, including phonons, fluctuating charge traps, gate voltage jitter, etc., induce on-site energy fluctuations, leading to fluctuations of the detuning ϵ. In turn, these detuning fluctuations push the electron back and forth between the two dots. If the magnetic field is not along a magic direction, then the electron feels an internal Zeeman field with a fluctuating direction, leading to qubit relaxation. However, if the magnetic field is pointing along a magic direction, then the pseudospin is conserved despite the fluctuating electron motion, and hence qubit relaxation is suppressed. Of course, relaxation is absent only in the idealized case described above. In reality, electric fluctuations not only modify the detuning, but also reshape the landscape of the double-dot confinement potential, and hence modify tunneling as well as the g-tensors. Nevertheless, as long as the dominant qubit relaxation mechanism is due to detuning noise, a qubit relaxation sweet spot is expected if the magnetic field is pointing to a magic direction. §.§ Shuttling sweet spot Shuttling electrons in quantum dot arrays is a prominent element of proposals describing scalable spin qubit architectures<cit.>. In such architectures, it is desirable to preserve the quantum state of a spin qubit upon shuttling to a neighboring dot<cit.>. In a double-dot setup, such high-fidelity qubit shuttling is facilitated if a conserved pseudospin can be defined. This is indeed the case whenever the magnetic field is oriented along a magic direction. § TWO-ELECTRON EFFECTS WITH MAGIC MAGNETIC FIELD DIRECTIONS So far we studied a spin-orbit-coupled DQD hosting a single electron, and we investigated the role of the magic magnetic-field directions in the qualitative structure of the single-particle spectrum, as well as their relation to sweet spots for relaxation and coherent shuttling. However, such DQD systems are also often operated in the two-electron regime, typically tuned to the vicinity of the (1,1)–(0,2) charge degeneracy. Measurement of the current through the DQD in this setting is useful to characterize both coherent and dissipative components of the spin dynamics. A combination of DQD gate-voltage pulse sequences and charge sensing provides elementary experiments toward spin-based quantum information processing, demonstrating initialization, coherent control, readout, and rudimentary quantum algorithms. The mechanism of Pauli spin blockade (PSB) is an essential ingredient in those experiments. In this Section, we will connect the two-electron DQD physics and PSB to the matrix M defined in Sec. <ref>. We will assess the potential of the spin-orbit-coupled DQD for hosting spin qubits and performing PSB-based qubit readout, highlighting the special role of the magic magnetic-field directions we introduced above. We believe that connecting the experimental phenomenology to the properties of the matrix M provides a more precise representation of the underlying physics than the usual interpretation in terms of a spin-orbit field that only acts during electron tunneling. In particular, we provide a potential explanation of the recently observed experimental feature <cit.> which we term `inverted PSB readout'. §.§ Robust spectral degeneracies in the two-electron low-energy spectrum First, we investigate the low-energy part of the spectrum close to the (1,1)–(0,2) charge transition.
To this end, we write a two-electron version of the Hamiltonian (<ref>), projected to the four (1,1) states—| T_+⟩ = |L⇑,R⇑⟩, | T_0⟩ = 1/√(2)[ |L⇑,R⇓⟩ + |L⇓,R⇑⟩], | T_-⟩ = |L⇓,R⇓⟩, and | S⟩ = 1/√(2)[ |L⇑,R⇓⟩ - |L⇓,R⇑⟩]—and the (0,2) singlet | S_02⟩ = 1/√(2)[ |R⇑,R⇓⟩ - |R⇓,R⇑⟩], yielding H^(2) = 1/2μ_B (σ_L· g_LB⃗ + σ_R· g_RB⃗) + √(2) t_0 [ | S⟩⟨ S_02| + H.c.] - ϵ| S_02⟩⟨ S_02|. To understand the different possible scenarios, we focus on class F, which hosts both types of magic field directions (both corresponding to positive and negative eigenvalues of M). Fig. <ref>(a) shows a typical spectrum as a function of the detuning ϵ, at a finite magnetic field in a generic direction (see the caption for parameters used). The detuning-dependent state | S_02⟩ decreases in energy with increasing ϵ and it anticrosses with all four (1,1) states, indicating that they indeed all have a finite singlet component. Away from the anticrossings, the four (1,1) eigenstates correspond to the four possible configurations with both pseudospins aligned or anti-aligned with the local internal Zeeman field g_ L,RB⃗. In Fig. <ref>(b–d) we explore the three magic magnetic-field directions available in class F. For the simple example g-tensors we chose [g_L = (6, -4, 5) and g_R = (3, 5, 2)] the three magic directions are along the three Cartesian axes x̂, ŷ, and ẑ. Directing B⃗ along x̂ or ẑ [shown in Fig. <ref>(b,d), respectively] corresponds to a positive eigenvalue of M, causing the local (internal) Zeeman fields on the dots g_ L,RB⃗ to be parallel. This implies that the highest (lowest) (1,1) state in the spectrum has its two pseudospins parallel to each other, which explains why they do not hybridize with the singlet | S_02⟩. In Fig. <ref>(c) B⃗ points along ŷ, which is a magic direction corresponding to a negative eigenvalue of M. In this case the internal Zeeman fields g_ L,RB⃗ are anti-aligned and the highest and lowest (1,1) states now have their pseudospins anti-aligned with each other [each pseudospin aligns with the local internal Zeeman field]. These two states now have a finite overlap with | S⟩ and thus hybridize with | S_02⟩, whereas the two central states now have parallel pseudospins and thus cross with | S_02⟩. We note that the spectral crossings discussed here are robust in the same sense as described for the single-electron spectral crossings in Sec. <ref>. §.§ Single-spin readout via Pauli spin blockade A DQD, hosting two electrons, tuned to the vicinity of the (1,1)–(0,2) charge transition, can be used to perform readout of a single-spin qubit via spin-to-charge conversion. This readout functionality relies on the PSB mechanism, as we summarize below. Assume that deep in the (1,1) charge configuration (left side of the plots in Fig. <ref>) the left electron is in an unknown pseudospin state, which one wants to read out in the basis of the local pseudospin eigenstates |+L⟩ and |-L⟩, where +(-) denotes the pseudospin state (anti)aligned with the local internal field (g_ LB⃗ in this case). The electron in the right dot will serve as a reference and is initialized in its local pseudospin ground state |-R⟩. In terms of the two-electron eigenstates discussed above, the system will thus occupy one of the states |± L,-R⟩, which are the (1,1) ground state and first or second excited state, depending on the relative magnitude of |g_ LB⃗| and |g_ RB⃗|. The task of reading out the left spin is thus equivalent to the task of distinguishing these two states. 
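Which (1,1) states cross the singlet | S_02⟩ and which anticross it can be checked directly from the (1,1) Zeeman block alone, since a (1,1) eigenstate decouples from | S_02⟩ exactly when its overlap with | S⟩ vanishes. The following Python sketch is a minimal illustration of this point; the g-tensors are the class-F example values quoted above, and setting μ_B|B⃗| = 1 is an arbitrary normalization.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = (sx, sy, sz)
id2 = np.eye(2, dtype=complex)

# Illustrative class-F g-tensors from the main text
gL = np.diag([6.0, -4.0, 5.0])
gR = np.diag([3.0, 5.0, 2.0])

# Two-spin singlet |S> in the product basis {up-up, up-down, down-up, down-down}
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)

def zeeman_11(bdir):
    """Zeeman Hamiltonian of the (1,1) charge configuration (tunneling omitted)."""
    hL = 0.5 * sum(c * s for c, s in zip(gL @ bdir, pauli))
    hR = 0.5 * sum(c * s for c, s in zip(gR @ bdir, pauli))
    return np.kron(hL, id2) + np.kron(id2, hR)

def singlet_weights(bdir):
    """|<S|psi_i>|^2 for the four (1,1) Zeeman eigenstates, sorted by energy.
    A vanishing weight means the state cannot hybridize with |S_02> (Pauli blocked)."""
    _, vecs = np.linalg.eigh(zeeman_11(bdir))
    return np.abs(singlet.conj() @ vecs) ** 2

for label, b in [("z (magic, positive eigenvalue)", np.array([0.0, 0.0, 1.0])),
                 ("y (magic, negative eigenvalue)", np.array([0.0, 1.0, 0.0])),
                 ("generic (1,1,1)/sqrt(3)", np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0))]:
    print(label, "->", np.round(singlet_weights(b), 3))

For the field along ẑ (positive eigenvalue) the singlet weight vanishes for the highest and lowest states, for the field along ŷ (negative eigenvalue) it vanishes for the two central states, and for a generic direction all four weights are finite, reproducing the scenarios discussed above. Which of the two states |± L,-R⟩ is Pauli blocked can therefore be read off from whether its singlet weight vanishes.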
Such a classification is usually done by a slow, adiabatic detuning sweep to the “right” side of the charge transition, i.e., into the (0,2) charge region, followed by a detection of the final charge state of the right dot: If one of the two (1,1) states to be distinguished connects adiabatically to the (0,2) state while the other connects to a (1,1) state, then the outcome of a charge sensing measurement of the final state provides unambiguous information about the initial state of the left pseudospin. This readout mechanism is called PSB readout, as the spin-to-charge conversion is based on the fact that the Pauli principle forbids an aligned spin pair to occupy the single ground-state orbital of the right dot. Comparing the panels of Fig. <ref>, we can identify a few different scenarios, depending on the relative magnitude of |g_ LB⃗| and |g_ RB⃗|. (Spectra such as shown in Fig. <ref> will look qualitatively the same for |g_ LB⃗| < |g_ RB⃗| and |g_ LB⃗| > |g_ RB⃗|, the main difference being the ordering of the levels |± L,± R⟩; below we will investigate both cases while referring to Fig. <ref>.) If it happens to be the case that |g_ LB⃗| < |g_ RB⃗| then the two (1,1) states to be distinguished are the ground and first excited state. All four spectra shown in Fig. <ref> now allow in principle for PSB readout, since in all cases the two lowest (1,1) states connect adiabatically to different charge states in the (0,2) region. However, for a generic field direction [Fig. <ref>(a)] spin-to-charge conversion might be more demanding than for the magic field directions: Firstly, since in the generic case the excited (1,1) state needs to traverse two anticrossings adiabatically, with potentially different coupling parameters, a careful engineering of the detuning pulse shape could be needed.[If the magnitude of the two coupling parameters is very different, one could also design a pulse shape that results in adiabatic evolution across one of the anticrossings and diabatic evolution across the other, which would again result in good spin-to-charge conversion. This is, however, a rather special situation and making it work would require accurate knowledge about the details of the two g-tensors.] Secondly, in this case the charge-state readout signal could be obscured due to the fact that the final (1,1) state has a finite spin-singlet component, allowing for relatively fast spin-conserving charge relaxation to the (0,2) ground state. The situation is rather different when |g_ LB⃗| > |g_ RB⃗|. In that case, the initial (1,1) states |± L,-R⟩ to be distinguished are the ground and second excited state (the first excited state being |-L,+R⟩). Considering the four spectra shown in Fig. <ref>, we see that the magic field directions corresponding to positive eigenvalues of M [Fig. <ref>(b,d)] now create a situation where neither of the two (1,1) states connects adiabatically to the (0,2) state, suggesting that there is no reliable spin-to-charge conversion through adiabatic passage in this case. [In this case, fast spin-conserving charge relaxation in the (0,2) region could in fact restore the PSB signal.] The situation for the generic field direction [Fig. <ref>(a)] is very similar to before: The two (1,1) states do connect to different charge states, but devising the optimal pulse shape for spin-to-charge conversion could be challenging and fast charge relaxation might obscure the signal. Finally, if the field points along the magic direction that corresponds to a negative eigenvalue of M [Fig. 
<ref>(c)], the lowest two (1,1) states again couple adiabatically to different charge states that have an orthogonal (pseudo)spin structure, thus yielding a proper PSB readout signal. Combining all the observations made so far, we see that magic field directions corresponding to negative eigenvalues of M are favorable for PSB-based spin readout, independent of the ratio |g_ LB⃗|/|g_ RB⃗| and the spin-conserving charge relaxation rate in the (0,2) region. Since the relative magnitude of |g_ LB⃗| and |g_ RB⃗| could be hard to control or extract in experiment, one should thus rather search for a magic field direction corresponding to a negative eigenvalue of M, e.g., along the lines suggested in Sec. <ref>. This will yield good spin-to-charge conversion irrespective of the more detailed properties of g_ L,R. We emphasize that in this case where the field is along a magic direction corresponding to a negative eigenvalue of M, the spectrum is inverted as compared to the “standard” level ordering: upon sweeping the detuning, Pauli spin blockade [i.e., no tunneling to (0,2)] occurs for the first excited (1,1) state. With this in mind, we now interpret a recent unexpected experimental observation. In Ref. Hendrickx2021, the authors implement spin-to-charge conversion and PSB readout in a DQD, and they observe that `both antiparallel spin states are blocked, opposite to conventional' Pauli blockade readout. Our interpretation is that the device of Ref. Hendrickx2021 is a spin-orbit-coupled DQD whose combined g-tensor M has a negative eigenvalue, and this particular observation is made when the magnetic field points approximately to a magic direction corresponding to a negative eigenvalue of M. In that case, the energy spectrum is qualitatively similar to that shown in Fig. <ref>(c). With this in mind, we can now also place the connection between PSB and the “orientation” of the spin-orbit coupling in the right context. In a typical experiment where PSB is used to extract information about the spin-orbit coupling, the DQD is tuned to the (0,2) region and connected to a source and drain contact in such a way that transport through the system depends on the charge cycle (1,1) → (0,2) → (0,1) → (1,1), where still the only accessible (0,2) state is a spin singlet. Whenever one or more (1,1) states have a vanishing overlap with | S⟩, the system will inevitably enter PSB, resulting in a strongly reduced current. Measuring the current as a function of the direction of applied magnetic field, a minimum is then usually associated with having the external field aligned with an effective spin-orbit field. From the reasoning presented above, we see that in terms of the matrix M one expects a reduced current whenever the magnetic field direction hits one of the magic directions. These dips in the current are, in fact, equivalent to “stopping points” of type (iii) and (iv) as discussed in Ref. Qvist2022, where they were explained, as usual, in terms of the relative orientation of the local Zeeman fields as compared to the direction of a field describing the spin-orbit-induced non-spin-conserving tunneling. In the present work, we understand these directions in a more “democratic” way, as resulting from the basic properties of the combined matrix M = g̃_ R^-1 R^-1g̃_ L that includes all onsite and interdot spin-orbit effects. § TRANSITION PATTERNS AMONG THE SIX CLASSES In section <ref>, we have classified spin-orbit-coupled DQDs into six classes, based on their combined g-tensor M. 
In an experiment, the two g-tensors can be changed in situ, e.g., by changing the gate voltages. As a result, the combined g-tensor M also changes, and if this change is significant, then M can transition from one class to another. Are there any constraints on how M can transition across the classes? Yes, there are, as we discuss below. We focus on `generic’ transitions, which require only a single-parameter fine tuning of the g-tensors. Accordingly, we take into account those cases where one eigenvalue of one of the g-tensors goes through zero (without the loss of generality, we assume it is g_L), but discard more fine-tuned cases, e.g., when two eigenvalues of one of the g-tensors goes through zero simultaneously, and when one eigenvalue of each g-tensor goes through zero simultaneously. We depict the generic transitions in Fig. <ref> as the lines connecting the colored circles, where the circles represent the classes. Solid lines represent transitions where the sign of the determinant of M does not change, whereas dashed lines represent transitions where that sign does change. In principle, the maximum number of transitions between the six classes could be 15, but we find that only 8 of those transitions are generic, as shown in Fig. <ref>. Instead of a formal proof of this structure, we provide intuitive arguments. As an example of a generic transition, consider the AB pair of classes, connected by a straight line in Fig. <ref>. It is straightforward to exemplify a process where, by continuously tuning the g-tensors, the two complex eigenvalues shown in Fig. <ref> (black points) move simultaneously toward the negative real axis, collide on the negative real axis, and separate as two different negative eigenvalues. In fact, tuning the parameter α from 1 to π [see inset of Fig. <ref>(a,b)] does result in such a process. Furthermore, a small perturbation of such a process still results in a similar change of the eigenvalue structure of M. Hence, the AB transition is generic. As a counterexample, consider the BC pair of classes, which are not connected in Fig. <ref>. One way to generate this transition is to change the g-tensors in such a way that the two negative eigenvalues in Fig. <ref>(b) (blue points) move to reach zero simultaneously, and then move onto the positive real axis. Clearly, this requires a higher degree of fine-tuning than a BF transition, where only one of the negative eigenvalues moves across zero. That is, the BF transition is generic, but the BC transition is not. Another way to reach a BC transition is to induce a collision of the two negative eigenvalues to render them a complex pair, and then move them onto the positive real axis. This is a BA transition followed by an AC transition. These arguments illustrate that the BC transition is not generic. Going beyond such arguments, a formalized derivation of the generic transitions can be given using the codimension counting technique we have discussed and used in section III of Ref. Frank2020. In that language, generic transitions are those that are characterized with a codimension-1 eigenvalue pattern of M. § MAGIC LOOPS The magic magnetic-field directions we investigated in the previous Sections turned out to have many interesting properties, with implications of qualitative importance for the single- and two-electron physics in spin-orbit-coupled DQDs. As explained in Sec. 
<ref>, these directions, being the eigenvectors of the matrix M = g̃_ R^-1 R^-1g̃_ L, are the magnetic-field orientations for which the internal Zeeman fields on the dots are aligned (or anti-aligned). In the context of PSB, the magic directions result in a proper spin blockade since they make the states |+L,+R⟩ and |-L,-R⟩ truly orthogonal to the pseudospin singlet state. Aligning the magnetic field along a magic direction is thus expected to fully restore PSB, which is in general lifted in DQDs with strong spin-orbit coupling. The converse, however, is not true: A restored spin blockade does not always imply that the external field is pointing along a magic direction. Indeed, it is known that there is one more internal field configuration, not related to the magic directions, that yields a full spin blockade: This is the configuration with the two internal fields having equal magnitude, |g_ LB⃗| = |g_ RB⃗|. In this case, the two (1,1) states |+L,-R⟩ and |-L,+R⟩ are degenerate at vanishing interdot tunneling, independent of the relative orientation of the internal fields. Both states being tunnel-coupled to the same state | S_02⟩ will result in one “bright” and one “dark” state, the latter being fully spin-blocked.<cit.> A few natural questions arise regarding these equal-Zeeman directions for which |g_ LB⃗| = |g_ RB⃗|: (i) For a given DQD Hamiltonian, do such equal-Zeeman directions exist? (ii) Is their existence determined by the combined g-tensor M? (iii) If those equal-Zeeman directions do exist, then how are they arranged on the unit sphere of magnetic-field directions? (iv) Is there a particular relation between the arrangements of equal-Zeeman directions and the arrangements of the magic directions discussed in previous sections? (v) Can we identify any physical consequence of the equal-Zeeman directions, beyond the full PSB discussed above? We address these questions in what follows. §.§ Existence condition of magic loops with equal Zeeman splittings The condition of equal Zeeman splittings in the two dots reads: |g_LB⃗|=|g_RB⃗|. This can be rewritten by inserting g_R^-1g_R, to obtain |g_Lg_R^-1g_RB⃗|=|g_RB⃗|. We introduce the notations M'=g_Lg_R^-1, and B⃗'=g_RB⃗. With this notation, Eq. (<ref>) takes the following simple form: |M'B⃗'|=|B⃗'|. Rescaling the magnetic-field vector does not change this condition, therefore we rewrite the latter in terms of the unit vector b⃗' = B⃗'/|B⃗'| characterizing the magnetic field direction. Then, we obtain: |M'b⃗'|=1. For a given M', is there a unit vector b⃗' that satisfies Eq. (<ref>)? This can be answered by analyzing the singular values of M'. We introduce the smallest singular value σ_1 and the greatest singular value σ_3 of M': σ_1=min_|b⃗'|=1 |M'b⃗'|,σ_3=max_|b⃗'|=1 |M'b⃗'|. According to the defining Eqs. (<ref>), there is no b⃗' solving Eq. (<ref>) if σ_1>1 or σ_3<1. If, however, σ_1<1<σ_3, there are unit vectors b⃗_1' and b⃗_3' for which |M'b⃗_1'|=σ_1<1 and |M'b⃗_3'|=σ_3>1. This divides the unit sphere of b⃗' into regions where M' contracts or elongates the vectors it acts on. The boundaries between these regions are the unit vectors that satisfy Eq. (<ref>). These boundaries appear generally as a pair of loops, related to each other by inversion symmetry. Transforming back, b⃗ = g_R^-1b⃗' specifies the magnetic-field directions b⃗/ |b⃗| where Eq. (<ref>) is satisfied. Note that in general, b⃗ is not a unit vector and it points to a different direction as b⃗'. 
However, on the unit sphere of magnetic-field directions, these special directions b⃗/|b⃗| also form a pair of loops in an inversion-symmetric configuration. We term these loops of equal-Zeeman directions the `magic loops'. Magic loops are exemplified, for a specific parameter set, as the yellow lines in Fig. <ref>(a), where the violet and green manifolds indicate the relative magnitude of the Zeeman splitting |g_ L,RB⃗| / |B⃗| on the left and right dot, respectively, as a function of the direction of B⃗. A detuning-dependent spectrum in a two-electron dot, calculated for the magnetic field directed to a point of the magic loop, is shown in Fig. <ref>(b). The dark state discussed above is shown in Fig. <ref>(b) as the flat (red) spectral line at zero energy. We wish to point out that the definitions of the matrices M and M' are very similar, the only difference being the ordering of g_L and g_R^-1. The relation between the two matrices is given by the basis transformation M'=g_Lg_R^-1=g_Rg_R^-1g_Lg_R^-1=g_RMg_R^-1, therefore, their eigenvalues are the same. Hence, the matrix M' not only determines the existence of magic loops, but it also describes which magic direction class (from A to F) the DQD belongs to, the latter being determined by its eigenvalues. The matrix M does not encode both properties, as the singular values of M and M' generally differ. So far we defined M' in the gauge of pseudospin-conserving tunneling. In a generic gauge, the corresponding definition reads: M'=g̃_Lg̃_R^-1R^-1. The eigenvalues and singular values of this M' can be used to perform the magic-direction classification and to determine the existence of the magic loops. The rotation R^-1 is necessary to guarantee the equality of the eigenvalues of M and M'. Note that with this defining equation, M' is a gauge-dependent quantity but its eigenvalues and singular values are not. §.§ Stopping points of leakage current in Pauli spin blockade Based on the concepts of the (isolated) magic directions and magic loops, we now return to PSB as a dc transport effect as described in the last paragraph of Sec. <ref>. Our results imply that [as long as the PSB leakage current is controlled by our Hamiltonian (<ref>)] a vanishing leakage current can be caused by the magnetic field being in a magic direction, or being directed to a point of a magic loop. One possibility to distinguish between an isolated magic direction and a magic loop is to measure the leakage current in a small region surrounding the original magnetic-field direction. Another one is to perform detuning-dependent spectroscopy, and identify qualitative features shown either in Fig. <ref> (magic direction) or in Fig. <ref>(b) (magic loop). §.§ Dephasing sweet spots Finally, we derive another physical property of spin-orbit-coupled DQDs with magic loops, which is practically relevant when the DQD hosts a single electron as a qubit. We find that if the magic loops are present, and the magnetic field points to a magic-loop direction, then this is a dephasing sweet spot for the qubit at zero detuning, and the sweet spot is robust against changing the detuning parameter. Our derivation relies on the observation that for weak magnetic fields, when both local Zeeman splittings are much smaller than the tunneling amplitude, the splitting between the two lowest eigenstates is described by a detuning-dependent effective or `averaged' g-tensor, which reads: g_eff(ϵ) = 1/2[( 1- ϵ/√(ϵ^2 + 4 t_0^2)) g_L + ( 1+ ϵ/√(ϵ^2 + 4 t_0^2)) g_R ]. 
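Both the eigenvalue classification and the magic-loop criterion are easy to evaluate numerically for a given pair of g-tensors. The Python sketch below uses the illustrative class-F values from the main text (no correspondence to a real device is implied): it classifies M' by its eigenvalues, tests σ_1 < 1 < σ_3, locates one equal-Zeeman point by bisection along a great circle, and checks that at ϵ = 0 this point makes g_eff(0)B⃗ orthogonal to its detuning derivative, which is the mechanism behind the dephasing sweet spot stated above.

import numpy as np

# Illustrative class-F g-tensors from the main text (diagonal for simplicity)
gL = np.diag([6.0, -4.0, 5.0])
gR = np.diag([3.0, 5.0, 2.0])

Mp = gL @ np.linalg.inv(gR)          # M' = g_L g_R^{-1}

# (i) eigenvalues of M' determine the class (here: two positive, one negative real eigenvalue)
print("eigenvalues of M':", np.linalg.eigvals(Mp))

# (ii) magic loops exist iff the singular values straddle 1: sigma_1 < 1 < sigma_3
sing = np.linalg.svd(Mp, compute_uv=False)
print("singular values  :", sing, "-> loops exist:", sing.min() < 1.0 < sing.max())

# (iii) locate one equal-Zeeman point |g_L b| = |g_R b| by bisection along a great circle
def f(theta):
    b = np.array([np.cos(theta), np.sin(theta), 0.0])
    return np.linalg.norm(gL @ b) - np.linalg.norm(gR @ b)

lo, hi = 0.0, np.pi / 2              # f(0) > 0 and f(pi/2) < 0 for these tensors
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
b_loop = np.array([np.cos(mid), np.sin(mid), 0.0])
print("loop point b     :", np.round(b_loop, 6),
      "|gL b| =", np.linalg.norm(gL @ b_loop), "|gR b| =", np.linalg.norm(gR @ b_loop))

# (iv) at eps = 0 this point is a dephasing sweet spot: g_eff(0) b is orthogonal
#      to the eps-derivative of g_eff acting on b
u = 0.5 * (gL + gR) @ b_loop         # g_eff(0) b
w = (gR - gL) @ b_loop               # proportional to d(g_eff)/d(eps) b at eps = 0
print("orthogonality check (should be ~0):", float(u @ w))

Repeating the direction scan over the full sphere traces out the pair of inversion-symmetric loops described above.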
We assume that in our case, qubit dephasing is dominated by charge-noise-induced fluctuations of the detuning ϵ. The defining condition of a dephasing sweet spot is that the fluctuating component of the internal Zeeman field should be perpendicular to the static component. For our case, this translates to the condition ∂_ϵ g_eff(ϵ) B⃗⊥ g_eff(ϵ) B⃗, which is indeed fulfilled if ϵ = 0 and if B⃗ is along a magic loop. This is proven straightforwardly by performing the derivative of the left-hand side of Eq. (<ref>) using Eq. (<ref>), evaluating both sides at ϵ = 0, and using the fact that for three-component real vectors a⃗ and b⃗ of equal length, a⃗ - b⃗⊥a⃗ + b⃗. The dephasing sweet spots associated with the magic loops survive a finite static detuning away from ϵ = 0. Our argument for this is as follows. We rewrite Eq. (<ref>) as f(ϵ,θ,φ) ≡ [g_eff(ϵ) n̂(θ,φ)] · [∂_ϵ g_eff(ϵ) n̂(θ,φ)] = 0, where θ and φ are the polar and azimuthal angles of the magnetic field. Consider the detuning value ϵ_0 = 0 where dephasing is reduced for magnetic-field directions along the magic loop, and take a generic point (θ_0,φ_0) of the magic loop. For generic values of the g-tensor matrix elements, the derivative ∂_φ f does not vanish at (ϵ_0,θ_0,φ_0). Therefore, by changing the detuning ϵ_0 ↦ϵ_0 + δϵ, we can follow the displacement (θ_0,φ_0) ↦ (θ_0,φ_0+ δφ) of the corresponding point of the magic loop along the azimuthal direction via δφ = - δϵ f_ϵ(0) /f_φ(0), where the new sweet spot is generically slightly away from the magic loop. Here, we have simplified the notation of the derivatives, e.g., f_ϵ(0) ≡ (∂_ϵ f)(ϵ_0, θ_0, φ_0). (Of course, an alternative formulation of this argument is obtained by exchanging the roles of θ and φ.) The mapping of the displacement via Eq. (<ref>) can be done for all points of the magic loop. Hence, we conclude that the dephasing sweet spots identified for zero detuning survive for finite detuning, but the loop formed by these points on the unit sphere of magnetic-field directions is distorted as ϵ changes. Note that, depending on the two g-tensors, it may happen that for a finite critical value of ϵ, each loop contracts to a single point. An equivalent argument for the survival of dephasing sweet spots at finite detuning is as follows. The dephasing sweet spot condition is given by Eq. (<ref>), valid also for finite detuning. We now take the derivative of the left-hand side of Eq. (<ref>) using Eq. (<ref>), and exploit the fact that for real three-component vectors a⃗ and b⃗, the conditions a⃗⊥b⃗ and |a⃗ + b⃗| = |a⃗ - b⃗| are equivalent. This translates Eq. (<ref>) to the following form: | [g_L - G(ϵ)] B⃗| = | [g_R - G(ϵ)] B⃗|, with G(ϵ) = ϵ/[2√(ϵ^2+4 t_0^2)] (g_L - g_R). Equation (<ref>) has the same structure as Eq. (<ref>), with the only difference that the matrices in the former are different from the matrices in the latter. Furthermore, there is a continuous connection between those matrices, as G(ϵ = 0) = 0. The singular-value analysis carried out above for Eq. (<ref>) can be straightforwardly adapted to Eq. (<ref>), yielding ϵ-dependent smallest and greatest singular values σ_1(ϵ) and σ_3(ϵ), both being continuous functions as G(ϵ = 0) = 0. This continuity implies that if the magic loops exist, i.e., σ_1(0) < 1 < σ_3(0), then there is a detuning neighborhood around ϵ = 0 where σ_1(ϵ) < 1 < σ_3(ϵ) holds, and therefore loops of reduced dephasing on the unit sphere of magnetic-field directions do exist. We remark that the claim of Sec.
<ref>, i.e., that a magic magnetic-field direction provides a relaxation sweet spot, can be derived using the notion of the averaged g-tensor introduced and expressed in Eq. (<ref>). Namely, for a magic direction, g_eff(ϵ) B⃗ is a weighted sum of the two parallel local internal Zeeman fields g_L B⃗ and g_R B⃗, which implies that the direction of g_eff(ϵ) B⃗ does not depend on ϵ. In turn, this implies that a fluctuating internal Zeeman field, caused by a fluctuating detuning, does not have a transverse component to the static internal Zeeman field, which leads to a suppression of qubit relaxation. § CONCLUSIONS We have proposed a sixfold classification (classes A-F) of spin-orbit-coupled double quantum dots, based on a partitioning of the multi-dimensional space of their g-tensors. Only two of our classes (A and D) are compatible with the widely used concept of a single `spin-orbit field direction', while the other four classes (B, C, E, F) are incompatible with the latter and hence imply new phenomenology to be explored experimentally. We have argued that the class determines physical characteristics of the double dot, i.e., features in transport, spectroscopy and coherence measurements, as well as qubit control, shuttling, and readout experiments. In particular, we have shown that the spin physics is highly simplified by pseudospin conservation if the external field is pointing to special directions (`magic directions'), where the number of special directions is determined by the class. We also analyzed the existence and relevance of magic loops in the space of magnetic-field directions, corresponding to equal local Zeeman splittings. The theoretical understanding our study provides is necessary for the correct interpretation and efficient design of spin-based quantum computing experiments in material systems with strong spin-orbit interaction. § ACKNOWLEDGEMENT We thank J. Asbóth, L. Han, G. Katsaros, and Y.-M. Niquet for useful discussions. This research was supported by the Ministry of Culture and Innovation and the National Research, Development and Innovation Office (NKFIH) within the Quantum Information National Laboratory of Hungary (Grant No. 2022-2.1.1-NL-2022-00004), by NKFIH through the OTKA Grant FK 132146, and by the European Union through the Horizon Europe project IGNITE. We acknowledge financial support from the ONCHIPS project funded by the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101080022. § GAUGE TRANSFORMATION In this section, we describe the transformation of the Hamiltonian to a gauge of pseudospin-conserving tunneling, to write the Hamiltonian in the form of Eq. (<ref>). We have the freedom to choose the Kramers-pair local basis states on each quantum dot. As a consequence of spin-orbit interaction, we have a pseudospin-non-conserving tunneling vector t̃, introduced in Eq. (<ref>), which describes the rotation of the pseudospin upon the interdot tunneling event. We redefine the basis on the right dot to eliminate t̃, which renders the pseudospin-conserving tunneling energy t_0 = √(t̃_0^2 + t̃^2) (see Eq. (<ref>)). The tunneling Hamiltonian H_t in the basis |L⇑̃⟩, |L⇓̃⟩, |R⇑̃⟩, and |R⇓̃⟩ has the following matrix form,
H_t =
( 0                  0                  t̃_0 - i t̃_z       -i(t̃_x - i t̃_y)
  0                  0                  -i(t̃_x + i t̃_y)   t̃_0 + i t̃_z
  t̃_0 + i t̃_z       i(t̃_x - i t̃_y)    0                  0
  i(t̃_x + i t̃_y)    t̃_0 - i t̃_z       0                  0 ).
The definition of the new basis is the following:
|R⇑⟩ = [(t̃_0 + i t̃_z)|R⇑̃⟩ + (-t̃_y + i t̃_x)|R⇓̃⟩] / √(t̃_0^2 + t̃^2) = W|R⇑̃⟩,
|R⇓⟩ = [(t̃_0 - i t̃_z)|R⇓̃⟩ + (t̃_y + i t̃_x)|R⇑̃⟩] / √(t̃_0^2 + t̃^2) = W|R⇓̃⟩.
This basis transformation is a local rotation of the pseudospin on the right dot around the vector 𝐧̃_so = 𝐭̃/|𝐭̃| by the angle θ_so = 2 arctan(|𝐭̃|/t̃_0), i.e., W = e^{i (θ_so/2) 𝐧̃_so·σ̃_R}. Note that W has the same form in the new basis as well. The original basis was formed by a Kramers pair, and the new basis is also formed by a Kramers pair, i.e., 𝒯|R⇑⟩ = |R⇓⟩. The tunneling Hamiltonian, re-written in the new basis, reads: H_t = t_0 |L⇑̃⟩⟨R⇑| + t_0 |L⇓̃⟩⟨R⇓| + h.c. The basis transformation also changes the g-tensor on the right dot. The Zeeman Hamiltonian for the right dot in the original basis is proportional to σ̃_R·g̃_RB⃗. The Pauli matrices in the new basis can be related to the Pauli matrices in the original basis as σ̃_R = W^†σ_R W = R^-1σ_R, where the matrix R is a three-dimensional rotation by the angle θ_so around the axis 𝐧̃_so. The elements of R can be expressed directly with the tunneling parameters:
R_ij = [(t̃_0^2 - t̃^2)δ_ij + 2 t̃_i t̃_j - 2 t̃_0 ∑_k ε_ijk t̃_k] / (t̃_0^2 + t̃^2).
Note that the determinant of R is 1. Thus, the Zeeman term expressed in the new basis is proportional to σ_R·g_RB⃗, with the rotated g-tensor g_R = R g̃_R. § QUANTUM CAPACITANCE FEATURES ALONG THE MAGIC MAGNETIC-FIELD DIRECTION Here, we discuss the quantum capacitance features of a spin-orbit-coupled DQD hosting a single electron in thermal equilibrium. We expect that the measurement of this quantity as a function of magnetic field direction and detuning can reveal a magic direction that belongs to a negative eigenvalue of the combined g-tensor M. This expectation is reinforced by a recent experiment <cit.>, which demonstrates that the quantum capacitance of a two-electron DQD is suppressed in the vicinity of a ground-state crossing, as compared to the case when an anticrossing is induced by spin-orbit interaction. Interestingly, the simple model we present here predicts an enhancement of the quantum capacitance at a crossing point. We point out mechanisms that are probably relevant to explain this difference, but postpone a detailed analysis to future work. We start our analysis with a DQD charge qubit, disregarding spin for simplicity. The Hamiltonian in the left-right basis reads
H = ( 0      Δ/2
      Δ/2    ϵ ),
where ϵ is the on-site energy detuning between the two dots and Δ is the interdot tunneling matrix element. At finite temperature, the quantum capacitance contributions of the ground state and the excited state are respectively given by
C^g_Q = (e^2 Δ^2/2)/(ϵ^2 + Δ^2)^{3/2} × exp(-β E_g)/Z,
C^e_Q = -(e^2 Δ^2/2)/(ϵ^2 + Δ^2)^{3/2} × exp(-β E_e)/Z,
where E_g and E_e are the two energy eigenvalues, Z is the canonical partition function, β = 1/(k_B T), and e is the electron charge. The total quantum capacitance C_Q is the sum of these two contributions. (Note that in an experiment, the measurement of this quantum capacitance is also influenced by the lever arms between the gate electrodes and the quantum dots.) As a function of detuning ϵ, the total quantum capacitance shows a peak at ϵ = 0, with peak width ∝ Δ and peak height
C^max_Q ≡ max_ϵ C_Q(ϵ,Δ,T) = [e^2/(2Δ)] tanh[Δ/(2 k_B T)].
Remarkably, this function decreases monotonically as the anticrossing size Δ increases. In the limit of a crossing, i.e., for Δ→0, from Eq. (<ref>) we obtain the following expression for the peak height: C^max_Q = e^2/(4 k_B T).
The peak width converges to zero in this Δ→ 0 limit. Importantly, a quantum capacitance measurement using a radiofrequency probe signal <cit.> might yield a result very different from C_Q, especially in the limit of Δ→ 0, as we discuss below. Using the approach described above, we computed the thermal-equilibrium quantum capacitance C_Q(ϵ,B⃗, T) of a single-electron DQD, for the g-tensor example we have analysed in Fig. <ref>. For detuning values where the ground state participates in crossings [e.g., Fig. <ref>(d)] or anticrossings [e.g., Fig. <ref>(b,c)], the detuning dependence of the thermal quantum capacitance develops a peak with a height essentially described by Eq. (<ref>). Figure <ref> shows the height of this capacitance peak, C_Q^max = max_ϵ C_Q(ϵ,B⃗,T), as a function of the magnetic-field direction, at a fixed magnetic-field strength (see caption for parameters). This spherical plot exhibits a maximum of C_Q^max along the magic magnetic-field direction corresponding to the negative eigenvalue. (Note that our point grid on the spherical surface intentionally avoids the magic direction itself to avoid the Δ→ 0 limit.) In principle, the pronounced feature observed in Fig. <ref> would be useful to identify magic directions corresponding to a negative eigenvalue, using relatively simple thermal-equilibrium capacitance measurements<cit.>. However, in practice, the theoretical model we have applied here probably needs to be refined, the required refinements depending on the hierarchy of frequency and energy scales of the experiment. A recent experimental result <cit.> that anticipates this has found a suppression of quantum capacitance associated with spectral crossings, in contrast to the theory outlined here which predicts an enhancement. Further scales (beyond anticrossing size Δ and thermal energy k_B T) that probably enter such a refined analysis include the amplitude and frequency of the radiofrequency probe field used to measure the quantum capacitance. In fact, a radiofrequency probe field of sufficient strength and frequency induces overdrive effects <cit.>, e.g., Landau-Zener transitions between the two levels. As a consequence, the measured charge response and the apparent quantum capacitance will deviate from the prediction of the simple thermal equilibrium picture we used above. In particular, Landau-Zener transitions are efficient when the radiofrequency probe signal drives the DQD charge through a small anticrossing corresponding to the Δ→ 0 limit discussed above. Quantitatively, such a scenario is described by, e.g., the conditions ϵ, Δ≪ A, Δ^2 ≲ A ω, where A is the amplitude of the on-site energy oscillations induced by the probe field, and ω is the frequency of the probe field. The resulting diabatic dynamics is expected to lead to an apparent quenching of the quantum capacitance <cit.>, potentially explaining the findings of Ref. Han2023. Furthermore, the strength of charge noise causing detuning jitter, as well as the finite resolution of the detuning mesh used in the experiment, can also play a qualitative role in such a quantum capacitance measurement. We postpone the detailed analysis of such refined models to future work.
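For completeness, the thermal-equilibrium formulas of this appendix can be evaluated with a few lines of Python. In the sketch below, e, k_B and the lever arm are set to unity and the values of Δ and T are arbitrary choices; the script reproduces the peak-height expression C^max_Q = [e^2/(2Δ)] tanh[Δ/(2 k_B T)] and its Δ → 0 limit. It does not include the radiofrequency overdrive effects discussed above, which is precisely why it should not be expected to reproduce the suppression reported in Ref. Han2023.

import numpy as np

def cq_thermal(eps, delta, temp):
    """Thermal-equilibrium quantum capacitance of the two-level charge qubit
    (e = k_B = 1 and unit lever arm; these normalizations are assumptions)."""
    gap = np.sqrt(eps**2 + delta**2)
    e_g, e_e = (eps - gap) / 2.0, (eps + gap) / 2.0
    weights = np.exp(-np.array([e_g, e_e]) / temp)
    p_g, p_e = weights / weights.sum()
    curv = (delta**2 / 2.0) / gap**3      # |d^2 E / d eps^2| of either branch
    return curv * (p_g - p_e)

delta, temp = 1.0, 0.25
eps_grid = np.linspace(-8.0, 8.0, 20001)
peak = max(cq_thermal(e, delta, temp) for e in eps_grid)
print("numerical peak   :", peak)
print("closed form      :", (1.0 / (2.0 * delta)) * np.tanh(delta / (2.0 * temp)))
print("Delta -> 0 limit :", 1.0 / (4.0 * temp))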
http://arxiv.org/abs/2307.01294v1
20230703185959
Nonlinear Redshift-Space Distortions on the Full Sky
[ "Lawrence Dam", "Camille Bonvin" ]
astro-ph.CO
[ "astro-ph.CO", "gr-qc" ]
lawrence.dam@unige.ch Département de Physique Théorique and Centre for Astroparticle Physics, Université de Genève, 24 quai Ernest-Ansermet, CH-1211 Genève 4, Switzerland
camille.bonvin@unige.ch Département de Physique Théorique and Centre for Astroparticle Physics, Université de Genève, 24 quai Ernest-Ansermet, CH-1211 Genève 4, Switzerland
We derive an analytic expression for the two-point correlation function in redshift space which (i) is nonlinear; (ii) is valid on the full sky, i.e. the distant-observer limit is not assumed; (iii) can account for the effect of magnification and evolution bias due to a non-uniform selection function; and (iv) respects the fact that observations are made on the past lightcone, so naturally yields unequal-time correlations. Our model is based on an exact treatment of the streaming model in the wide-angle regime. Within this general regime, we find that the redshift-space correlation function is essentially determined by a geometric average of its real-space counterpart. We show that the linear expression for the galaxy overdensity, accurate to subleading order, can be recovered from our nonlinear framework. This work is particularly relevant for the modeling of odd multipoles of the correlation function at small separations and low redshifts, where wide-angle effects, selection effects, and nonlinearities are expected to be equally important.
Nonlinear Redshift-Space Distortions on the Full Sky
Lawrence Dam and Camille Bonvin
August 1, 2023
====================================================
§ INTRODUCTION Redshift-space distortions (RSD) have been identified as a key observable to test the laws of gravity and probe the validity of the ΛCDM model <cit.>. Typically one treats RSD in the flat-sky regime, or distant-observer limit. In this regime, the redshift-space correlation function takes a simple form with a multipole structure consisting of a monopole, quadrupole and hexadecapole <cit.>. However, this approximation is only valid over a limited range of separations and opening angles. At small separations, nonlinearities become relevant and need to be included in the modeling; whereas at large separations and opening angles, the flat-sky approximation breaks down and wide-angle effects need to be accounted for. These two types of corrections are usually treated separately: either one models the linear correlation function with wide-angle corrections, or one models the flat-sky correlation function in the nonlinear regime. In most cases, these separate approaches are enough to provide a precise description of the signal. Besides a desire for a general model, there are two situations where both wide-angle corrections and nonlinear effects might become relevant over the same range of scales. The first concerns measurements of the correlation function at very low redshift, such as those expected from the Dark Energy Spectroscopic Instrument (DESI) <cit.>. In particular, the Bright Galaxy Survey sample of DESI has a very high number density of galaxies at low redshift (with median z≈0.2), over 14,000 deg^2 <cit.>. At these redshifts, nonlinear evolution might be expected to be relevant up to relatively large separations, while wide-angle effects are expected to be important down to relatively small separations <cit.>.
These effects are indeed governed by the ratio of pair separation s to distance d, which quickly becomes non-negligible for small d. A nonlinear model on the full sky may therefore be needed for this type of survey. The second situation concerns the measurement of relativistic effects <cit.>, where wide-angle effects and nonlinearities are both important over the same range of scales. Relativistic effects have been shown to contribute to the correlation function by generating odd multipoles (a dipole and an octupole) in the correlation of two populations of galaxies <cit.>. Both in the linear regime <cit.> and in the perturbative nonlinear regime, wide-angle effects are roughly of the same order of magnitude as relativistic effects. This is because relativistic effects scale as ℋ/k×RSD, while wide-angle effects scale as s/d×RSD <cit.>. These two types of effects are therefore roughly of the same order of magnitude at all scales, since s/d∼ℋ/k.[This is of course a crude comparison, since on the one hand the ratio of and d varies with redshift, and on the other hand RSD and relativistic effects are also redshift-dependent. However, it shows that wide-angle effects and relativistic effects have a similar scaling and cannot be treated separately, even for large k.] As a consequence, if one wants, for example, to measure through the dipole the relativistic effects in the nonlinear regime, it is necessary to model at the same time wide-angle effects in this regime. A number of works have studied various aspects of the problem. Castorina and White <cit.> calculated the impact of wide-angle corrections on the even multipoles, modelled using the resummed approach to Lagrangian perturbation theory (LPT) <cit.>. Their work showed that linear theory is adequate to describe wide-angle corrections for the even multipoles, except around the baryon acoustic peak where non-perturbative corrections are known to be important <cit.>. However, their model misses a contribution related to the (uniform) selection function. This was pointed out by Taruya et al. <cit.>, who presented a model similar to that of Castorina and White, but without this deficiency (see also Refs. <cit.> for subsequent work including the gravitational redshift). A comparison with simulations showed that the contribution from the selection function is important for an accurate prediction of the dipole moment (though not for the even multipoles). Both of these works did not consider contributions from a non-uniform radial selection function. It is however known that in the linear regime a non-uniform selection function generates contributions from the magnification bias, which are of the same order of magnitude as wide-angle effects <cit.>, and may even dominate the signal for some choices of populations <cit.>. Concerning the second situation, Beutler and di Dio <cit.> proposed a method to compute the relativistic power spectrum, including selection effects and wide-angle effects in perturbation theory. They derived an expression for the dipole, including contributions up to third order in perturbation theory, which agrees well with numerical simulations up to k_ max≃ 0.4 h^-1Mpc. More recently, Noorikuhani and Scoccimarro <cit.> calculated the impact of relativistic effects and wide-angle corrections on the galaxy power spectrum and bispectrum. Their approach was to model these Fourier statistics in the usual way—i.e. 
work in the distant-observer limit and use one-loop perturbation theory—but supplement with the leading-order relativistic and wide-angle contributions. This hybrid approach was justified on the basis that the nonlinearities were found not to mix significantly with the relativistic and wide-angle effects. This paper begins a study of these two situations from an altogether different approach. Here we will largely focus on the first situation, exhibiting a novel approach to the streaming model <cit.>, a nonlinear model of the RSD correlation function; a forthcoming work will be dedicated to a complete model for the second situation. We will thus show how the streaming model can be exactly extended to the wide-angle regime, taking advantage of the simple geometry of the problem in configuration space. This model is similar in some respects to that of Taruya et al. <cit.> but differs importantly in others. In particular, here we allow for the more realistic case of a non-uniform selection function, which leads to further distortions through the magnification and evolution bias. In addition, here we derive in full generality the relation between the matter density field in redshift space and in real space, independent of the details of how such fields might evolve or might be biased in relation to the galaxy field. (The dynamics and galaxy bias can be specified, for example, using the `convolution LPT' prescription <cit.>, as has proven a powerful method.) Based on a more general treatment of the redshift mapping and number conservation, we will further show that the streaming model can also accommodate selection, galaxy evolution and relativistic effects—indeed, almost all subleading effects at 𝒪(/k). These effects, as mentioned, are of the same order as the wide-angle effects so in principle should also be taken into account. With the streaming model, these effects are logically separated and enter in resummed form, thereby offering a compact way of capturing the large number of terms that contribute to the overdensity at subleading order (i.e. when expressed through a perturbative expansion). Additionally, its modular form lends itself well to the problem of modeling at the same time the three different kinds of sources of nonlinearity that need to be considered in a realistic model—namely, dynamics, galaxy bias, and the redshift mapping (wide-angle effects in our model are exact to all orders in s/d). The outline of this paper is as follows. In Section <ref> we extend the nonlinear approach to RSD <cit.> to the wide-angle regime, deriving a non-perturbative expression for the wide-angle correlation function in redshift space. In Section <ref> we extend the derivation to construct a more realistic model of the correlation function which takes into account selection effects, as well as the fact that observations are made on the lightcone. We then perform a perturbative expansion of our model in Section <ref> and show that well-known results from linear theory are recovered, including of many relativistic effects. In Section <ref> we present the full-sky generalisation of the Gaussian streaming model, and show that it is consistent with the expected form in the distant-observer limit. In Section <ref> we calculate the linear theory multipoles including wide-angle contributions, and show that they are consistent with expressions found in the literature. Our conclusions follow in Section <ref>. Several appendices describe the details of our calculations. 
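To give a rough sense of the numbers involved in the first situation discussed above, the short Python sketch below evaluates the ratio s/d that controls the wide-angle corrections, for a few separations and redshifts representative of a low-redshift sample. The flat ΛCDM background with Ω_m = 0.3 and H_0 = 70 km s^-1 Mpc^-1 is an illustrative assumption, not a parameter choice made in this work.

import numpy as np

c_km_s, H0, Om = 299792.458, 70.0, 0.3   # assumed background cosmology

def comoving_distance(z, nstep=20000):
    """d(z) = c * integral of dz'/H(z') in Mpc, by trapezoidal integration."""
    zz = np.linspace(0.0, z, nstep)
    Hz = H0 * np.sqrt(Om * (1.0 + zz)**3 + (1.0 - Om))
    return np.trapz(c_km_s / Hz, zz)

for z in (0.1, 0.2, 0.5):
    d = comoving_distance(z)
    ratios = ", ".join("s=%4.0f Mpc: s/d=%.3f" % (s, s / d) for s in (20.0, 50.0, 100.0))
    print("z = %.1f, d = %6.1f Mpc | %s" % (z, d, ratios))

At z ≈ 0.2, the median redshift of the DESI Bright Galaxy Survey, the ratio already exceeds ten percent for separations of order 100 Mpc, illustrating the point made above that the wide-angle corrections quickly become non-negligible at small d.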
§ NONLINEAR MODELING IN THE WIDE-ANGLE REGIME This section is principally devoted to a study of the relation between a galaxy at its true (comoving) position and its observed (comoving) position , ()=+·(̆) , as concerns the correlation function in redshift space. Here =̆^-1$̌ (which has units of length),$̌ is the peculiar velocity and is the conformal Hubble parameter. This mapping of course leads to the well-known Kaiser effect, typically modelled in the distant-observer limit in which one assumes that distant objects have identical line of sight . This approximation is valid for small opening angles between any two lines of sight in a galaxy sample. Here we present an exact treatment of the general case in which lines of sight are allowed to vary across the full sky without restriction to small angles. To highlight the key trick in this paper and make clear the geometric interpretation, we will first present the calculation of the full-sky correlation function without any complicating factors such as selection effects. We will also focus on equal-time correlations and suppress the time dependence in the number density, velocity, etc; we will restore it in Section <ref> when we come to consider unequal-time correlations on the lightcone and related projection effects. The basis of our approach is the number conservation of objects in real and redshift space: n_s()^3=n()^3 , where n() and n_s() are the comoving number densities in real and redshift space, respectively. In integral form, we have equivalently n_s() =∫^3 n()(-()) . This expression in fact holds for general mappings ()—not just for Eq. (<ref>)—including those that also contain transverse displacements. It also holds in the regime of multiple streams, i.e. when more than one point in real space formally maps to a single point in redshift space (when () has singular Jacobian). Now, since we work on the full sky and since the mapping only affects the radial positions, it is natural to switch to a spherical coordinate system. Thus let χ=|| be the (observed) radial distance in redshift space and χ'=|| the radial distance in real space. Writing Eq. (<ref>) in spherical coordinates, separating the Dirac delta function into a radial piece and an angular piece, and inserting n_s()=n̅_s[1+δ_s()] and n()=n̅[1+δ()] (here n̅_s and n̅ denote the mean number densities in redshift and real space, respectively), we have 1+δ_s() =∫^∞_0χ'χ'^2∫^2 [1+δ()]1/χ^2(χ-χ'-·(̆))(-) , where =/|| is the line of sight, and we have furthermore used that, in the absence of selection or evolution effects, the mean densities in real and redshift space are equal, n̅=n̅_s; see Appendix <ref> for justification. Parametrising the positions as =χ and =χ', and doing the trivial angular integral, we get 1+δ_s(χ) =1/χ^2∫^∞_0χ'χ'^2 [1+δ(χ')](χ-χ'-·(̆χ')) =1/χ^2∫^∞_0χ'χ'^2 [1+δ(χ')] ∫^∞_-∞ k/2π ^- k(χ-χ')^ k u_(χ') , where in the second line the Dirac delta function is given as its Fourier representation, writing u_=·$̆ for the radial component of the velocity. As we will show in Section <ref>, this equation recovers at linear order the familiar Kaiser term, including the subdominant inverse-distance term. With Eq. 
(<ref>) it is straightforward to obtain the correlation functionξ_s=⟨δ_s(_1)δ_s(_2)⟩: 1+ξ_s(χ_1,χ_2,_1·_2) =1/χ_1^2χ_2^2∫χ_1' χ_1'^2∫χ_2' χ_2'^2∫^2κ/(2π)^2 ^-κ·(χ-χ')⟨[1+δ(χ_1'_1)][1+δ(χ_2'_2)]^κ·w⟩ =1/χ_1^2χ_2^2∫χ_1' χ_1'^2∫χ_2' χ_2'^2 [1+ξ(r)] ∫^2κ/(2π)^2 ^-κ·(χ-χ')^κ·w , wherer=(χ_1'^2+χ_2'^2-2χ_1'χ_2'cosϑ)^1/2is the separation between the two galaxies in real space,cosϑ=_1·_2is the cosine of the opening angle, and we have defined the following two-component vectors:κ=(k_1,k_2),χ=(χ_1,χ_2),χ'=(χ'_1,χ'_2), andw≡(u_(_1),u_(_2)) =(_1·(̆χ_1'_1),_2·(̆χ_2'_2)). Furthermore, in the second line we have identified the moment generator^κ·w, where in this work a subscriptδdenotes the density-weighted ensemble average O≡⟨[1+δ(_1)][1+δ(_2)] O⟩ / ⟨[1+δ(_1)][1+δ(_2)]⟩ . There is a more intuitive way of expressing Eq. (<ref>). Recognising that theκ-integral in Eq. (<ref>) (the inverse Fourier transform of the generating function) defines the joint probability distribution of radial displacements, p(χ-χ';χ'_1·_2) ≡∫^2κ/(2π)^2 ^-κ·(χ-χ') Z(=κ;(̊χ'),_1·_2) , Z(;,̊_1·_2)≡^·w , we can write 1+ξ_s(χ_1,χ_2,_1·_2) =1/χ_1^2χ_2^2∫^∞_0χ_1'χ_1'^2∫^∞_0χ_2'χ_2'^2[1+ξ(r)] p(χ-χ';χ'_1·_2) . This formula is the full-sky generalisation of the (distant-observer) streaming model <cit.>, which is given by a single line-of-sight integral. The probability distribution (<ref>) is scale-dependent: it depends not only onχ-χ'but also onχ'itself through the moments ofw(by way ofZ).[More precisely, these velocity moments generally depend on the separation r and the projections ̊̂·_1 and ̊̂·_2, which are geometrically related to χ_1', χ_2' and _1·_2.] The existence of coherent flows is the origin of this scale dependence, without whichpis a proper probability distribution. This dependence onχ'has the effect that as we integrate overχ_1'andχ_2'we pass through a two-parameter family of probability distributions, each with different mean, covariance, etc—there is not a single fixed distribution. There is also a dependence ofpon the opening angle_1·_2but, unlike the dependence onχ', is known a priori (as indicated by the conditional). A useful if heuristic way to view Eq. (<ref>) is as the expectation of1+ξ(r)when averaged over all real-space triangles that can be formed from an opening angle_1·_2, e.g. by varying the adjacent side lengthsχ_1'andχ_2'. Schematically, ξ_s = ⟨ξ⟩_Δ , for⟨·⟩_Δsome average over triangles. More precisely, we have a probability space of real-space triangles, parametrised relative to the fixed redshift-space triangle byχ_1-χ_1'andχ_2-χ_2'(see Fig. <ref>). These radial displacements are correlated since they are the result of Doppler shifts produced by the radial velocitiesu_(χ_1')andu_(χ_2'), which are themselves correlated. Since velocity correlations depend on the separation=̊_1-_2=χ_1'_1-χ_2'_2, not all triangles in Eq. (<ref>) contribute with the same probability. In particular, triangles in real space that are far from the redshift-space configuration will contribute negligibly, since no large-scale correlated flow is likely to arise that can map these configurations into each other; conversely, configurations that are close to each other will contribute significantly to the integral. How close will depend on the characteristic separation along each line of sight as determined by the means⟨ u_(_1)⟩_δand⟨ u_(_2)⟩_δ. We emphasise that no dynamical assumptions have been made in obtaining Eq. (<ref>); it is an exact result based on the formal relation (<ref>) between the observed and underlying density fields. 
[As with other nonlinear treatments of RSD (e.g. Ref. <cit.>), our model is `exact' only to the extent that the redshift mapping (<ref>) is exact. But this mapping cannot be said to be exact as it is based on a linear approximation of the full relation between and (even if the perturbations are themselves fully nonlinear); see Section <ref>. ] Furthermore, we have made no attempt to account for the galaxy bias, since including it in this framework is straightforward <cit.>—e.g. by replacing1+δwith1+bδ(in linear theory), or, more generally, some functional ofδ. Irrespective of tracer (galaxies, halos, dark matter particles, etc), the relation between the observed and underlying fields remains the same. Finally, since the line-of-sight integrals in Eq. (<ref>) are over non-oscillatory real functions, evaluating them numerically is in principle straightforward once the real-space correlation function and probability distribution are specified. (In Section <ref> we explicitly show the form of these integrals in the case of the Gaussian streaming model.) § A MODEL INCLUDING LIGHTCONE, SELECTION AND EVOLUTION EFFECTS Going beyond the distant-observer limit, wide-angle effects are among a number of effects that need to be considered all at once. We first give a physical explanation of these additional effects in Sections <ref> and <ref>, and in Section <ref> we derive the full model including all effects. §.§ Extending the redshift mapping to the lightcone Observations are made on the past lightcone but this is not reflected in the mapping (<ref>) nor the correlation function (<ref>) derived from it. In particular, the mapping (<ref>) does not take into account the fact that perturbations to the redshift also induce a displacement in the lookback time, thus changing the apparent position on the lightcone. We can see this by reconsidering the problem of mapping galaxies in a redshift survey. [See Ref. <cit.> for a discussion of the general problem in terms of photon geodesics.] The basic task is to assign Cartesian (comoving) coordinatesusing redshifts and angular positions. For a galaxy with measured redshiftzobserved in the direction, we assign to it the coordinates=χ(z), where the conversion from redshift to comoving radial distance is given byχ(z)=∫^z_0 z'/H(z'). This is the observed position. (Here we assume perfect knowledge of the underlying background cosmology, and no angular deflections so that=.) The (unknown) true position is=χ', whereχ'≡χ(z̅),z̅=z-δ zis the background redshift andδ zthe redshift perturbation. This is the position that would be inferred had the redshift not suffered a Doppler shift. The mapping (<ref>) is obtained by linearizing=χ(z̅+δ z)about the true position=χ(z̅): ≃χ(z̅)+δ z χ/ z|_z=z̅ =+·(̆τ',) , where in the second equality we have used thatδ z=(1+z̅)·̌, obtained from the relation1+z=(1+z̅)(1+·̌)for the Doppler shift. But redshift is not only an indicator of distance; it is also an indicator of time, with galaxies at larger redshifts associated with larger lookback times; see Fig. <ref>. So in addition to assigning spatial coordinates, we also assign a time coordinateτto each galaxy <cit.>. More precisely, following a galaxy photon along the line of sightback to the observed redshiftz, we assign τ=τ_0-χ(z) , the time at which the photon was apparently emitted. Hereτ_0is the present conformal time. Likewise, we have for the real timeτ'=τ_0-χ'. The relation betweenτ'andτis then given by linearizingτ(z)=τ(z̅+δ z)aboutτ(z̅)≡τ'. 
At linear order we haveτ≃τ'-u_(τ',), which together with Eq. (<ref>) constitutes a map(τ',)↦(τ,)on the lightcone. Note thatτ'is in fact degenerate withsinceτ'=τ_0-χ'andχ'=||, and that the displacement on the lightcone is null:-(τ-τ')^2+(χ-χ')^2=0. In the rest of this section we will work with this spacetime mapping. As we will see in Section <ref>, any evolution in the number density between the surface of constantτ(constant observed redshiftz) and the surface of constantτ'(constant background redshiftz̅) gives rise to an apparent density fluctuation. These distortions are among some of the many contributions to the full expression for the overdensity derived using relativistic perturbation theory. While subdominant to the Kaiser effect, these projection effects are of the same order as wide-angle effects so in principle should also be included. §.§ Selection effects In addition to projection effects related to the lightcone, we also need to take into account the selection effects. These give rise to fluctuations of order/kwhich, although subdominant to the usual RSD, are of the same size as the wide-angle corrections so cannot generally be ignored. §.§.§ Flux limit Since surveys only observe above a certain flux limitF_*, not all sources in the sky will be bright enough to be detected. This is seen in the observed mean number density, or selection functionn̅_s(χ,F>F_*), which tends to fall off with distanceχ. In general, we do not haven̅_s=n̅(wheren̅is the selection function in real space), since some sources that would otherwise not be detectable in real space, can be seen in redshift-space due to magnification effects (and vice-versa). The difference between the two effectively generates an additional density fluctuation. Note that the selection function provides a complete description of the selection effect in the nonlinear regime; however in linear theory the relevant quantity is the linear response of the selection function to a change in the flux limit, i.e. the slope ofn̅_swith respect to the threshold: s_*≡∂logn̅_s/∂ m_* =-2/5∂lnn̅_s/∂ln F_* =-2/5∂lnn̅_s/∂ln L_* , wherem_*=-2.5log F_*+const.is the magnitude limit andF_*(L_*) is the flux (luminosity) limit of the survey. This parameter (not to be confused with the redshift coordinates) is known as the `magnification bias' and is survey and population dependent. §.§.§ Galaxy evolution Galaxies can merge with each other or be created altogether. This was not reflected in the model (<ref>) which assumed a constant number of galaxies (n̅_s=n̅=const). Since the time evolution of the mean comoving number densityn̅(τ)depends upon the uncertain details of galaxy formation and evolution, it is conventionally parametrised by the `evolution bias' f_evol≡∂lnn̅/∂ln a ; n̅(τ) =F_evol(τ)n̅(τ_0) , F_evol(τ) ≡exp(-∫^1_a(τ) a'/a' f_evol(a')) . With no evolution,f_evol=0andF_evol=1for all time. In general,f_evolis tracer dependent and a function of the flux cut. Note that the effect of galaxy evolution on the apparent number density may be considered an example of a projection effect, in that the lookback time (<ref>) of a galaxy viewed in real space is different to the lookback time of the same galaxy but viewed in redshift space. §.§ Derivation of the general model We now construct a model of the redshift-space correlation function valid on the full sky and in the nonlinear regime, building into it the lookback time (<ref>), as well as the flux cut and galaxy evolution. 
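Before assembling the full model it may help to make the two survey-dependent parameters just introduced concrete. The following minimal sketch estimates the magnification bias s_* by finite-differencing a cumulative number density with respect to the luminosity threshold, and evaluates F_evol for a constant evolution bias; the Schechter-like luminosity function and all numbers in it (α, L_*, the value of f_evol) are assumptions made purely for illustration and are not taken from this work.
```python
import numpy as np
from scipy.integrate import quad

# Assumed toy Schechter luminosity function (illustration only, not from the text)
alpha, Lstar, phistar = -1.2, 1.0, 1.0

def lum_fn(L):
    return phistar * (L / Lstar)**alpha * np.exp(-L / Lstar) / Lstar

def n_cum(Lmin):
    """Mean comoving number density of galaxies brighter than Lmin."""
    val, _ = quad(lum_fn, Lmin, 50.0 * Lstar)
    return val

# Magnification bias: s_* = -(2/5) dln(n_s)/dln(F_*).  At fixed distance a change
# in the flux limit is a change in the luminosity limit, so we difference n(>L_*)
# with respect to ln L_*.
Lmin, eps = 0.1 * Lstar, 1e-3
dlnn_dlnL = (np.log(n_cum(Lmin * np.exp(eps))) -
             np.log(n_cum(Lmin * np.exp(-eps)))) / (2.0 * eps)
s_star = -0.4 * dlnn_dlnL
print(f"s_* ~ {s_star:.3f}")

# Evolution bias: for a constant f_evol the factor F_evol(tau) reduces to a**f_evol.
f_evol = 1.5                      # assumed constant value
a = np.linspace(0.5, 1.0, 6)
print("F_evol(a) =", np.round(a**f_evol, 3))
```
In an actual analysis both numbers would of course be measured from the survey itself: s_* from the response of n̄_s to the magnitude limit, and f_evol from the redshift evolution of the comoving number density.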
The calculation is essentially the same as before once we have setup the problem and introduced some definitions. Readers who are not interested in these details may skip ahead to Eq. (<ref>) and follow the discussion from there. To include a flux cut in the model we now need to consider the luminosity of each galaxy. We define the redshift-space distribution functionΦ_s(τ,,F_s), i.e. the redshift-space comoving number density of galaxies in the flux bin(F_s,F_s+ F_s). Similarly, letΦ(τ,,F)be the true distribution function, i.e. the real-space comoving number density of galaxies in the (non-redshifted) flux bin(F,F+ F). Since the mapping(τ',)↦(τ(),()), whereτ()is the lookback time (<ref>) and()is given by Eq. (<ref>), is nothing more than a reassignment of each galaxy's coordinates, the number of galaxies per flux bin is conserved: Φ_s(τ,,F_s)^3 F_s =Φ(τ',,F)^3 F =Φ(τ',,L)^3 L , whereτ'=τ_0-χ'and, in a slight abuse of notation,Φ(τ',,F)=Φ(τ',,L) L/ F. Hereτ=τ_0-χandτ'=τ_0-χ', and we recall that these are related byτ=τ'-u_. Equation (<ref>) simply reflects the fact that all galaxies observed in^3with flux betweenF_sandF_s+ F_s, physically lie in^3with intrinsic luminosity betweenLandL+ L. Among all the galaxies in the volume element^3or^3, we select only those that meet or exceed the flux thresholdF_*: Θ(F_s-F_*)Φ_s(τ,,F_s)^3 F_s =Θ(L-L_*())Φ(τ',,L)^3 L . HereΘis the Heaviside step function which enforces the threshold andL_*()=4π d_L^2() F_*is the luminosity threshold for an object at luminosity distanced_L(). Note that on both sides of Eq. (<ref>) we are imposing the same selection criterion so that the same galaxies are being selected in both real and redshift space. Since the luminosity distanced_Lis affected by inhomogeneities and depends therefore on direction, a fixed flux thresholdF_*in all directions corresponds to different luminosity thresholdsL_*()in different directions. Integrating both sides of Eq. (<ref>) yields the differential relation between number densities [cf. Eq. (<ref>)] n_s(τ,;F>F_*)^3=n(τ',; L>L_*())^3 , where n_s(τ,;F>F_*) ≡∫^∞_F_* F_s Φ_s(τ,,F_s) , n(τ',;L>L_*()) ≡∫^∞_L_*() L Φ(τ',,L) . Separating the number densities into a mean contribution and an overdensity, assuming a universal luminosity function, we obtain [1+δ_s(τ,)]^3 =n̅(τ',L>L_*())/n̅_s(τ,F>F_*) [1+δ(τ',)]^3 . The denominator on the right-hand side can be rewritten as n̅_s(τ,F>F_*)=n̅(τ, L>L̅_*(χ)) , since the mean number of galaxies atτwith flux aboveF_*corresponds to the galaxies that have mean intrinsic luminosity aboveL̅_*(χ). The fraction in Eq. (<ref>) can then be split as n̅(τ',L>L_*())/n̅(τ, L>L̅_*(χ)) =n̅(τ',L>L_*())/n̅(τ', L>L̅_*(χ'))n̅(τ', L>L̅_*(χ'))/n̅(τ, L>L̅_*(χ'))n̅(τ, L>L̅_*(χ'))/n̅(τ, L>L̅_*(χ)) , which gives rise to three contributions. Firstδ_*, defined as 1+ δ_*(τ',)=n̅(τ',L>L_*())/n̅(τ', L>L̅_*(χ')) , represents the fractional number density of galaxies atwith luminosity comprised betweenL̅_*(χ')andL_*()=L̅_*(χ')+δ L_*(χ',), where the perturbation to the luminosity thresholdδ L_*is directly related to the perturbation to the luminosity distance by δ L_*(χ',) =4π F_* [d_L^2(χ',)-d̅_L^ 2(χ')] , and is affected by the Doppler effect (among other things). HereL̅_*(χ')=4πd̅_L^ 2(χ')F_*is the threshold that would be adopted in the absence of perturbations to the luminosity distance. 
Secondδ_ evol, defined as 1+δ_ evol(τ',)=n̅(τ', L>L̅_*(χ'))/n̅(τ, L>L̅_*(χ')) , encodes the evolution of the comoving number density of galaxies, above a fixed luminosity thresholdL̅_*(χ'), between the hypersurface of constantτ(corresponding to constant observed redshiftz) and the hypersurface of constantτ'(corresponding to constant background redshiftz̅). Using Eq. (<ref>), applied atτandτ'for the same luminosity thresholdL̅_*(χ'), we obtain 1+δ_ evol(τ',)=F_ evol(τ')/F_ evol(τ) . Finallyδ_L, defined as 1+δ_L(τ',)=n̅(τ, L>L̅_*(χ'))/n̅(τ, L>L̅_*(χ)) , describes the fractional number of galaxies with luminosity betweenL̅_*(χ')andL̅_*(χ). (Hereδ_Lis not to be confused with the linear matter field.) This term accounts for the fact that, since we select galaxies above a fixed flux thresholdF_*, we do not select the same population of galaxies at each distance. Galaxies that are further away are selected with a higher luminosity than galaxies that are closer. Because of this, even if the luminosity function would be constant in time for all values ofL, there is an evolution in the mean number density. With this, Eq. (<ref>) becomes [1+δ_s(τ,)]^3=[1+δ_ tot(τ',)]^3 , where 1+δ_ tot(τ',)≡ [1+δ_ evol(τ',)][1+δ_L(τ',)][1+δ_*(τ',)][1+δ(τ',)] . Without selection and evolution effects, clearlyδ_tot=δ. An explicit expression forδ_scan now be obtained by a similar calculation to the one presented in Section <ref>. Thus, passing from differential to integral form (<ref>), changing to spherical coordinates, etc, we have 1+δ_s(τ,χ) =1/χ^2∫^∞_0χ'χ'^2[1+δ_ tot(τ_0-χ',χ')]∫ k/2π ^- k(χ-χ')exp[ k u_(τ_0-χ',χ')] . The difference between this expression and our earlier one, Eq. (<ref>), is thatδthere is replaced withδ_tothere, and secondly the line-of-sight integral is now performed on the past lightcone. Finally, since Eq. (<ref>) is of the same form as Eq. (<ref>), the calculation proceeds as before and we can write down at once the correlation function [cf. Eq. (<ref>)]: 1+ξ_s(χ_1,χ_2,_1·_2) = 1/χ_1^2χ_2^2∫^∞_0χ_1' χ_1'^2∫^∞_0χ'_2 χ_2'^2[1+ξ_ tot(χ_1',χ_2',_1·_2)] p_tot(χ-χ'_1·_2) with 1+ξ_ tot(χ_1',χ_2',_1·_2) ≡⟨[1+δ_tot(τ_0-χ_1',χ_1'_1)] [1+δ_tot(τ_0-χ_2',χ_2'_2)]⟩ , wherep_totis given by Eq. (<ref>), in which the density weighting (<ref>) is now with respect toδ_ tot(instead ofδ). In particular, notice thatξ_totno longer depends on the separationr, as was the case before, but on the full triangular configuration given by three numbers, namely the side lengthsχ_1',χ_2', and the opening angle_1·_2. As we will see in Section <ref>, this is because in a perturbative expansionδ_totcontains terms depending on the line of sight, whose presence induces an angular dependence in the correlations, breaking statistical isotropy. We can also see that the correlation function (<ref>) is a function of the (apparent) past lightcone: it manifestly expresses the unequal-time correlation of any two pointsS_1=(τ_1,_1)=(τ_0-χ_1,χ_1_1)andS_2=(τ_2,_2)=(τ_0-χ_2,χ_2_2). As with our earlier Eq. (<ref>), the formula we have just derived is still `summing over triangles', but now the `triangles' are all those configurations that can be formed on the past light cone from an opening angle_1·_2, as opposed to those formed on spatial hypersurfaces. Since these configurations are on the lightcone, they are in principle all observationally accessible, e.g. if the peculiar velocity of each galaxy was perfectly known. 
Hence the correlation function (<ref>) is determined by marginalising over all potentially observable configurations, unlike our earlier Eq. (<ref>), which is determined by marginalising over unobservable configurations. Finally,ξ_sis expressed in terms of radial distances (assuming perfect knowledge of the underlying background cosmology), but we note that it is also possible, and perhaps more desirable, to express it in terms of observed and background redshifts,zandz̅. Working in terms of redshifts and angles, the natural observable is the three-dimensional angular power spectrumC_ℓ(z,z')<cit.>, which we note can be constructed from ourξ_sif we leave arbitrary the conversion of redshift to distance. [See Refs. <cit.> for related work on connecting C_ℓ(z,z') to the idealised power spectrum P(k), corrected for unequal-time correlations and wide-angle effects.] § RECOVERING LINEAR THEORY We now verify that our nonlinear expression (<ref>) of the redshift-space overdensity recovers well-known results from linear theory. To begin, we expand the last exponential in Eq. (<ref>), keeping only up to the linear contribution inu_: 1+δ_s(τ,χ) ≃1/χ^2∫χ'χ'^2[1+δ_ tot(τ',χ')] ∫ k/2π ^- k(χ-χ')(1+ k u_(τ',χ')) =1/χ^2∫χ'χ'^2[1+δ_ tot(τ',χ')] ((χ-χ') -u_(τ',χ')/χ(χ-χ') ) , where we have written factors ofkas radial derivatives using k^- kχ=-^- kχ/χ. Note that in Eq. (<ref>) we have made explicit the dependence of the field in the position=χ'but also in timeτ'=τ_0-χ', since the density and velocity are evolving with time. Dropping the quadratic termu_δ, taking theχ-derivative outside of the integral and doing the now trivial integrals, we find (recalling thatu_≡^-1v_) 1+δ_s(τ,χ) =1+δ_ tot(τ,χ) -1/χ^2/χ(χ^2v_(τ,χ)/(τ)) =1+δ_ tot(τ,χ) -1/χ^2∂/∂χ(χ^2v_(τ,χ)/(τ)) -1/χ^2τ/χ∂/∂τ(χ^2v_(τ,χ)/(τ)) =1+δ_ tot(τ,χ) -1/∂ v_/∂χ -(2/χ+/^2) v_ +1/v̇_ , whereτ=τ_0-χand an overdot denotes partial differentiation with respect to conformal time. Note that the second and third term in the second line is equal toδ_s=δ_tot-∇·_̆(with the radial velocity field_̆≡·(̆)), a formula that is conventionally obtained by linearizing the Jacobian of the mapping (<ref>). The last two terms in the last line of Eq. (<ref>) arise because of the lookback time, and are are thus not captured by the Jacobian. The kinematic terms in Eq. (<ref>) can be understood as follows. The third term, the radial derivative of the velocity, gives the well-known RSD effect. The fourth term, proportional to2/(χ), is the wide-angle contribution from a uniform selection function, already present in the original Kaiser formula. This term is usually ignored since at large distances (compared to the separation) it is subdominant to the standard RSD term, but it is important to include in a wide-angle analysis <cit.>. Less well known are the last two terms in Eq. (<ref>); these are due to the fact that, because we are integrating along the line of sight, we are traversing a geodesic on the past lightcone, with bothandv_evolving along it <cit.>. In addition to these terms, which derive from the mapping itself, there are also the selection and evolution effects. These are contained inδ_tot, Eq. (<ref>), which at linear order readsδ_tot≃δ+δ_evol+δ_L+δ_*. We calculateδ_evol,δ_Landδ_*as follows. * For δ_ evol, expand F_evol(τ') in Eq. (<ref>) about τ=τ'+δτ: F_evol(τ')≃ F_evol(τ) - F_evol/τ δτ =F_evol(τ)(1-f_evolδτ) , where in the second equality the derivative has been evaluated using Eq. (<ref>). Inserting this into Eq. (<ref>) and using that by Eqs. 
(<ref>) and (<ref>)δτ=-δχ=-^-1v_, we find δ_evol=f_evol v_ . * For δ_L, expand n̅(τ,L>L̅_*(χ')) in Eq. (<ref>) around χ=χ'+δχ. At linear order, n̅(τ,L>L̅_*(χ')) ≃n̅(τ,L>L̅_*(χ))(1 -∂lnn̅/∂lnL̅_*lnL̅_*/lnχδχ/χ) , where all quantities on the right-hand side are evaluated at χ and the chain rule has been used on the second term. Here δχ=^-1 v_ and the χ derivative is lnL̅_*/lnχ =2lnd̅_L/lnχ =2χ(1+1/χ) , where the first equality follows because L̅_*∝d̅_L^ 2 and the second equality follows from differentiating d̅_L(χ)=(1+z)χ=χ/a[τ(χ)]. Inserting the linear expansion (<ref>) into Eq. (<ref>), we have δ_L=5 s_*(1+1/χ) v_ , where we have inserted Eq. (<ref>) for the magnification bias, replacing n̅ with n̅_s (since the difference results in a second-order correction). * For δ_*, expand n̅(τ',L>L_*()) in Eq. (<ref>) around L̅_*(χ')=L_*()-δ L_*(χ',). At linear order, n̅(τ',L>L_*())≃n̅(τ',L>L̅_*(χ'))(1+∂lnn̅/∂lnL̅_*δ L_*(χ',)/L̅_*(χ')) . The perturbation to the threshold at a fixed position in real space is δ L_*(χ',)/L̅_*(χ')=2δ d_L(χ',)/d̅_L(χ')=4 v_ , where the first equality follows from linearizing Eq. (<ref>), while in the second equality we have used the luminosity distance fluctuations due to the source velocity calculated in Ref. <cit.>. [See equation (53) (or equivalently equation (55)) in Ref. <cit.>, noting that there is equal to - here.] Inserting the linear expansion (<ref>) into Eq. (<ref>), we obtain δ_* =-10 s_* v_ , where again we have substituted in Eq. (<ref>) for the magnification bias. Finally, inserting Eqs. (<ref>), (<ref>), and (<ref>) into Eq. (<ref>) forδ_tot, we obtain δ_s(τ,χ)=δ -1/∂ v_/∂χ+1/v̇_ + (f_ evol-5s_*-/^2+5s_*-2/χ)v_ . Upon comparing this equation with the full expression obtained from relativistic calculations—e.g. equation (2.13) in Ref. <cit.>—we see that we have recovered all subleading effects at𝒪(/k), with the exception of two terms. The first is a kinematic term given simply asv_. This missing term can be traced back to the starting point of our derivation, Eq. (<ref>), which is based on the naive Euclidean volume element^3. This Newtonian derivation does not account for the fact that the hypersurface of constant time for the moving galaxies (in real space) does not coincide with the hypersurface of constant conformal timeτ. That is, these two frames are `tilted' with respect to one another, and it is by accounting for this that we recover precisely the term that we are after. From an observational point of view, this term arises because photons, followed back down the past lightcone, do not probe the rest-frame galaxy density. Based on purely kinematic considerations, these photons will intercept more galaxies moving towards them versus away from them <cit.>, so that if a galaxy is receding away from the observer then the apparent local density is enhanced relative to its intrinsic value. Technically speaking, the missing term arises through projection of the four-currentj^μ=n u^μat the source position onto the covariant three-dimensional volume element (the three-form dual to the one-form x^μ). Clearly this requires a relativistic treatment, beginning with a covariant notion of number conservation <cit.>; this is however beyond the scope of this work. The second term missing is^-1∂Ψ/∂χ, the contribution from the gravitational redshift. This can be put down to the simple fact that the standard mapping (<ref>) only accounts for the dominant Doppler shift and therefore ignores the subdominant contribution from the gravitational redshift. 
To illustrate the basic structure in a minimal model, we have neglected to include the gravitational redshift. However, adding this effect into the model is straightforward: by Eq. (<ref>) we take()→()-^-1Ψ(); see also Refs. <cit.>. A complete model including the gravitational redshift and the relativistic tilt will be presented in a forthcoming work <cit.>. § GAUSSIAN STREAMING MODEL ON THE FULL SKY The discussion up to now has been fairly general in that no assumptions have been placed on the statistics of the velocities that determinep(χ-χ')and therefore the correlation function. We now wish to specify these statistics by presenting a particular model of Eq. (<ref>), namely, the full-sky version of the Gaussian streaming model <cit.>, often used in configuration-space analyses <cit.>. We will however include the selection and evolution effects, which we recall are entirely contained inδ_tot, Eq. (<ref>). The lookback time is also included, which amounts to takingτ'→τ_0-χ'. We follow the usual procedure <cit.> for constructing such models, namely, we rewrite the generating functionZin terms of the connected moments using the cumulant generating functionW≡ln Z, then Taylor expandW, keeping only the first and second connected moments (as determines a Gaussian). In detail, by expandingW()≡ln Z()about=0we have W() =∑_n=1^∞^n/n! J_i_1⋯ J_i_n ⟨ w_i_1⋯ w_i_n⟩_δ_tot,c , ⟨ w_i_1⋯ w_i_n⟩_δ_tot,c =(-)^n∂^nln Z/∂ J_i_1⋯∂ J_i_n|_J=0 , where repeated indices are summed over, andi_1=1,2,i_2=1,2, etc. Here subscriptδ_totdenotes the density-weighted average (<ref>), subscript `c' denotes the connected part of the moment, andw_i=u_(τ_i,_i)=_i·(̆τ_i,_i). (Without loss of generality one may take the lines of sight_1and_2to lie within thexz-plane, as in Fig. <ref>.) Then in terms of the connected moments Z() =exp(∑_n=1^∞^n/n! J_i_1⋯ J_i_n ⟨ w_i_1⋯ w_i_n⟩_δ_tot,c) . These expressions are general. As mentioned, in the Gaussian streaming model we keep only the first and second connected moments, i.e. truncating the sum atn=2. This leaves the mean and covariance, μ(χ_1',χ_2',_1·_2) ≡⟨w⟩_δ_tot,c = [ ⟨ u_(χ_1')⟩_δ_tot,c; ⟨ u_(χ_2')⟩_δ_tot,c ] , C(χ_1',χ_2',_1·_2) ≡⟨ww^⟩_δ_tot,c =[ ⟨ u_(χ_1') u_(χ_1')⟩_δ_tot,c ⟨ u_(χ_1') u_(χ_2')⟩_δ_tot,c; ⟨ u_(χ_1') u_(χ_2')⟩_δ_tot,c ⟨ u_(χ_2') u_(χ_2')⟩_δ_tot,c ] . Here we have used the shorthandu_(χ_i')≡ u_(τ_0-χ_i',χ_i'_i). Note that the mean radial velocityμ, being density weighted, does not in general vanish. Keeping terms in Eq. (<ref>) up to second order inyields the generating function of a Gaussian: Z() =exp(·μ-1/2^C) . Inverse Fourier transform ofZ(), i.e. evaluating Eq. (<ref>), thus yields a two-dimensional Gaussian with meanμand covarianceC, which when inserted back into Eq. (<ref>) furnishes the wide-angle Gaussian streaming model: [Since we are using spherical coordinates, the probability distribution is perhaps better described as a Maxwell–Boltzmann distribution p∝ x^2^-x^2 (or some two-dimensional analogue thereof). Although for large variance we note that the Maxwellian is well-approximated by a Gaussian (in the one-dimensional case).] 1+ξ_s(χ_1,χ_2,_1·_2) =1/χ_1^2χ_2^2∫^∞_0χ_1' χ_1'^2∫^∞_0χ'_2 χ_2'^2 [1+ξ_tot(χ_1',χ_2',_1·_2)] ×1/2π|C|^1/2 exp( -1/2(χ-χ'-μ)^C^-1(χ-χ'-μ) ) , remembering thatμandCare functions ofχ_1',χ_2', and_1·_2. Note that this model does not assume thatδandu_are Gaussian fields, nor is it assuming that in the perturbative expansion (<ref>) the fieldsδandu_are small fluctuations. 
Rather, this model is based on the correlations being small on large scales. The Gaussian distribution arises from our having truncated the generating function at second order in=κ. Of course, extensions to Eq. (<ref>) to include higher-order, non-Gaussian statistics are also possible <cit.>. The equivalent model without selection and evolution effects is obtained by takingξ_tot→ξandδ_tot→δin the density weighting. The above model also takes into account the lookback time, which can be ignored by treating time in the usual way, i.e.as an independent variable (not degenerate with distance). Overall, the effect of these three effects changes the quantitative predictions but does not change the basic form of the model. Equation (<ref>) is the full-sky generalisation of the well-known Gaussian streaming model of the distant-observer limit: 1+ξ_s(s,μ) =∫^∞_-∞ r_ [1+ξ(r)]·1/√(2π)σ_12()̊ exp( -1/2(s_-r_-u_12()̊)^2/σ^2_12()̊) , wherer_ands_are the real- and redshift-space separations along the line of sight; andu_12()̊≡⟨Δ u_⟩_δ_tot,candσ^2_12()̊≡⟨(Δ u_)^2⟩_δ_tot,c, whereΔ u_≡·(̆_1)-·(̆_2), are the mean and dispersion of the pairwise velocity, respectively, and all quantities are evaluated at a fixed time. Although the wide-angle and distant-observer models are similar in form, it requires some work to show that Eq. (<ref>) does indeed reduce to Eq. (<ref>) in the appropriate limit. We leave the details of this calculation to Appendix <ref>. § MULTIPOLE EXPANSION IN THE WIDE-ANGLE REGIME In this section we show that Eq. (<ref>) recovers the standard linear predictions for the multipoles, including those induced when going beyond the distant-observer limit. Since the aim here is to compare our results with those in the wide-angle literature, we will ignore contributions from galaxy evolution and relativistic effects. To facilitate the calculation, recall that the correlation function on a fixed redshift slice can be expanded about the distant-observer limit as <cit.>ξ_s(s,μ,d) =∑_n=0^∞(s/d)^n ∑_ℓ=0^∞ξ^(n)_ℓ(s,d)ℒ_ℓ(μ) , i.e. in terms of a small expansion parameterϵ≡ s/d, where for closely separated lines of sight a low-order expansion is valid. Hereℒ_ℓis Legendre polynomial of theℓth degree,s=|_1-_2|is the separation,μ=cosθ(see Fig. <ref>), anddis some distance to the galaxy pair (to be made precise shortly). Note that in addition to the explicit dependence of the multipoles indviaϵ,ξ_ℓ^(n)depends also ondthrough the evolution of the density and velocity, which depend on redshift, and therefore varies with distanced. The usual Kaiser multipoles <cit.> are given by then=0multipoles: ξ^(0)_0(s) =∫k^2 k/2π^2 j_0(ks) ( P_δδ(k) -1/3(+)P_θδ(k) +1/5P_θθ(k)) , ξ^(0)_2(s) =∫k^2 k/2π^2 j_2(ks) (2/3(+)P_θδ(k) -4/7P_θθ(k)) , ξ^(0)_4(s) =∫k^2 k/2π^2 j_4(ks) 8/35P_θθ(k) , whereandare the linear galaxy bias of two tracers labelled A and B, andθ=-fδ, wherefis the growth rate. The wide-angle corrections are given by multipolesn≥1, and theℓth multipole is given by the sumξ_ℓ(s,d)≡∑_n ϵ^nξ_ℓ^(n)(s,d). Unlike in the distant-observer limit, the wide-angle contributions to the multipoles depend on how the angular separationμis defined, i.e.what we choose for the line of sight <cit.>. §.§ Mid-point parametrisation The shape and size of the multipoles depend on how we parametrise the triangle as formed by the galaxy pair with the observer. In particular, we need to fix the definition ofμ. This means choosing a line of sight, and there is no unique choice for this. 
In this work we use the line of sight defined by the mid-point parametrisation (see Fig. <ref>). This section describes this parametrisation and collects some useful formulae. In Appendix <ref> we give a formula relating multipoles in the mid-point parametrisation to those in the end-point parametrisation. First, the mid-point of the separation=_1-_2is given byd≡(_1+_2)/2. We thus have_1=d+/2and_2=d-/2. The expansion parameter in Eq. (<ref>) isϵ≡ s/d. In particular, we may aligndwith the+z-axis, i.e.d̂=. We can also without loss of generality place the triangular configuration in thexz-plane, with the first galaxy placed in the left half-plane (with negativex-coordinate) and the second galaxy placed in the right half-plane (with positivex-coordinate); see Fig. <ref>. With these choices=-√(1-μ^2)+μ, withμ≡·and,are unit vectors along thexandzaxes, respectively. Now_1=d(+ϵ/2)and_2=d(-ϵ/2), from which the unit vectors are found to be _1 ≡_1=(+ϵ/2) ∑_n=0^∞ (-ϵ/2)^n ℒ_n(μ) =-ϵ/2√(1-μ^2) +𝒪(ϵ^2) , _2 ≡_2=(-ϵ/2) ∑_n=0^∞(+ϵ/2)^n ℒ_n(μ) =+ϵ/2√(1-μ^2) +𝒪(ϵ^2) . Observe that at𝒪(ϵ), the lines of sight_1and_2are symmetric about thez-axis (equal and oppositex-components). Note the following relations when going between variables{χ_1,χ_2,cosϑ≡_1·_2}and{s,d,μ}:s=(χ^2_1+χ^2_2-2χ_1χ_2cosϑ)^1/2,d=1/2(χ^2_1+χ^2_2+2χ_1χ_2cosϑ)^1/2, andμ^2 =1/4(χ_1^2-χ_2^2)^2/[(χ_1^2+χ_2^2)^2 -(2χ_1χ_2cosϑ)^2]. These follow from the cosine rule. §.§ Linear theory We now show that Eq. (<ref>) recovers at zeroth-order (n=0) the standard Kaiser multipoles <cit.>, and at the first-order (n=1) the wide-angle corrections. As mentioned, the wide-angle corrections vanish at first-order in the auto-correlation function but not for the cross-correlation function. We will thus consider the cross-correlation between two tracer populations, described by linear biasand. We will assume no magnification and evolution bias since we have already shown in Section <ref> that we recover the correct linear expression forδ_s. The details of our computations can be found in Appendix <ref>. First we convert Eq. (<ref>) to the cross-correlation. In linear theory this is done simply by replacingδ(_1)→δ(_1)andδ(_2)→δ(_2): 1+ξ_s(s,μ,d) =1/χ_1^2χ_2^2∫χ_1' χ_1'^2∫χ_2' χ_2'^2∫^2κ/(2π)^2 ^-κ·(χ-χ')⟨(1+δ_1)(1+δ_2)^κ·w⟩ . Here we have used the shorthandsδ_1=δ(_1)andδ_2=δ(_2), and as beforew=(u_(_1),u_(_2)). For convenience we will also write U_i()̊=⟨ u_i(_1)δ(_2)⟩ and Ψ_ij()̊=⟨ u_i(_1)u_j(_2)⟩ for the velocity-density and velocity-velocity correlation functions. Here the separation=̊_1-_2is given in terms of the radial distances as(̊χ')=χ_1'_1-χ_2'_2, and in redshift space(χ)=χ_1_1-χ_2_2=(̊χ). The idea of the calculation is to expand^κ·w, keeping up to quadratic terms and dropping zero-lag terms (which are absent in the linear predictions). The integrations can then be done analytically (see Appendix <ref> for details). The result is ξ_s(χ_1,χ_2,_1·_2) =ξ(s) -∂/∂χ_1 U_in̂_1^i +∂/∂χ_2 U_in̂_2^i +∂/∂χ_1∂/∂χ_2Ψ_ijn̂_1^in̂_2^j -2/χ_1 U_in̂_1^i +2/χ_2 U_in̂_2^i +(2/χ_1∂/∂χ_2 +2/χ_2∂/∂χ_1) Ψ_ijn̂_1^in̂_2^j . This is the linear correlation function corresponding to the right-hand side of Eq. (<ref>). Hereξ,U_iandΨ_ijdepend onχ_1andχ_2through(and we remember that lines of sight are always constant with respect to their radial derivatives,∂n̂^i/∂χ=0). The first line in Eq. 
(<ref>) yields the usual Kaiser multipoles (among wide-angle corrections), while the second line consists of terms suppressed by a factor of/kwith respect to the Kaiser multipoles, but are of the same order as the wide-angle contributions. §.§.§ Distant-observer limit The multipoles of the distant-observer limit (i.e. the Kaiser multipoles) can be recovered by setting_1=_2==(0,0,1)and takingχ_1,χ_2→∞. Doing so eliminates the second line of terms in Eq. (<ref>), leaving ξ_s(s,μ) =ξ(s) -(+)∂_3 U_3(s,μ) -∂_3^2Ψ_33(s,μ) , (distant-observer limit) where derivatives are with respect tos_3, thez-component of. This equation was first derived in Ref. <cit.>. A straightforward computation of the derivatives yields ∂_3 U_3(s,μ) =∫k^2 k/2π^2 (1/3 j_0(ks) - 2/3 j_2(ks)ℒ_2(μ)) P_θδ(k) , ∂_3^2Ψ_33(s,μ) =-∫k^2 k/2π^2 (1/5 j_0(ks) -4/7 j_2(ks)ℒ_2(μ) +8/35 j_4(ks)ℒ_4(μ))P_θθ(k) , wherej_ℓis theℓth-order spherical Bessel function. From here it is not difficult to assemble the Kaiser multipoles (<ref>). §.§.§ Wide-angle corrections The wide-angle corrections enter theU_iterms at orderϵand theΨ_ijterms atϵ^2. (Note that in the auto-correlation, i.e. when=, the corrections also enterU_iatϵ^2.) Since we are interested only in the leading-order corrections (orderϵ), we need only focus on terms involvingU_iin Eq. (<ref>); the terms involvingΨ_ijare as given in the distant-observer limit so require no further calculation. For the∂_i U_jterms in the first line of Eq. (<ref>), we have -∂/∂χ_1 U_in̂_1^i +∂/∂χ_2 U_in̂_2^i =-(+)∫k^2 k/2π^2 (1/3 j_0(ks) - 2/3 j_2(ks)ℒ_2(μ)) P_θδ(k) -2/5ϵ(-)∫k^2 k/2π^2 (ℒ_1(μ)-ℒ_3(μ)) j_2(ks) P_θδ(k) , where the first integral on the right-hand side is the distant-observer contribution, and the second integral is the leading-order wide-angle correction.[This agrees with equations (52) and (53) in Ref. <cit.>; see also equation (3.19) in Ref. <cit.>.] The details of this computation can be found in Appendix <ref>. For theU_iterms in Eq. (<ref>) up to leading order inϵ, we have -2/χ_1 U_in̂_1^i +2/χ_2 U_in̂_2^i =2ϵ(-) ∫k^2 k/2π^2j_1(ks)/ksℒ_1(μ) P_θδ(k) . The leading-order wide-angle multipoles are thus ξ^(1)_1(s) =(-)∫k^2 k/2π^2 (-2/5 j_2(ks) + 2j_1(ks)/ks) P_θδ(k) , ξ^(1)_3(s) =2/5(-)∫k^2 k/2π^2 j_2(ks)P_θδ(k) . These are consistent with those given in, e.g.Refs. <cit.>. Note that when working within the end-point parametrisation the odd multipoles,ξ_1^(1)andξ_3^(1), receive additional contributions, which are of a geometric, non-cosmological nature (Appendix <ref>). Thus we have recovered the linear multipoles. § CONCLUSIONS We have described a framework to model in the nonlinear regime not only wide-angle effects but also selection and relativistic effects. Our main result is Eq. (<ref>), an expression for the redshift-space correlation function which is valid in both the nonlinear regime and on the full sky, and accounts for the survey flux limit and the population evolution of tracers. Based on this expression, we have also given the full-sky generalisation of the Gaussian streaming model, Eq. (<ref>), which we have checked reduces to the well-known flat-sky model (<ref>) in the appropriate limit. The correlation function (<ref>) takes a lensing-like form (i.e. is given by integrals along each line of sight) which can be understood probabilistically: a given two-point correlation in redshift space is determined by averaging over all the possible two-point correlations in real space that can be formed on the two lines of sight. 
Geometrically, this can be understood as a weighted sum over the space of triangular configurations in which the observer is fixed at one vertex with the galaxies at the other two (at the ends of the lines of sight). Since the opening angle is fixed, the probability space is two dimensional and given by the joint statistics of the line-of-sight components of the galaxy velocities. We note that this heuristic generalises to higher-order correlation functions (e.g. for the three-point function the sum is over tetrahedrons). We have also given a non-perturbative expression (<ref>) for the overdensity in redshift space. Performing a perturbative expansion of this expression, we showed that we are able to recover all but two terms of the well-known linear expression of the overdensity at subleading order; see Eq. (<ref>). The first term missing traces back to the fact that observations probe the number density of galaxies not in their rest frame but in a frame tilted with respect to it. This results in an additional kinematic term but requires a covariant expression of number conservation. The second term is the gravitational redshift, whose absence is due to the simple fact that we have chosen to exhibit the formalism using the familiar redshift mapping (<ref>). A model of the overdensity, complete down to𝒪(/k)effects, will be presented in a follow-up work <cit.>. Nevertheless, the expression we have derived provides a compact description of a large number of terms (RSD, magnification bias, evolution bias, projection effects related to the lightcone, etc). Furthermore, our work provides a simple quasi-Newtonian derivation to the full relativistic calculation. In summary, we have shown that the streaming model is not limited to the distant-observer limit but that it can be straightforwardly extended into the wide-angle regime and be built upon to include a number of other important effects. In a future work we will present numerical results for a realistic model including nonlinear evolution and galaxy bias, with a view towards an eventual measurement of the gravitational redshift. §.§ Acknowledgements This work is supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant agreement No. 863929; project title “Testing the law of gravity with novel large-scale structure observables". CB acknowledges support from the Swiss National Science Foundation. § ON THE MEAN DENSITY IN THE WIDE-ANGLE REGIME In Section <ref> we assumed thatn̅_s≡⟨ n_s()⟩=n̅. However, this is not assured in the wide-angle regime. This can be shown by direct calculation of the expectation of Eq. (<ref>): ⟨ n_s()⟩ =1/χ^2n̅∫^∞_0χ'χ'^2∫ k/2π ^- k(χ-χ')⟨[1+δ(χ')]^ k u_⟩ . In general, the right-hand side does not evaluate ton̅. We can see this as follows. By the cumulant expansion formula⟨^iX⟩=exp⟨^iX⟩_c, we have that⟨^ k u_⟩=^-k^2σ_u^2/2and⟨^ k u_δ⟩=0, assuming as a first approximation thatδandu_are Gaussian fields. Recognising that^-k^2σ_u^2/2is the Fourier transform of a Gaussian with mean zero and varianceσ_u^2, we have ⟨ n_s()⟩ =1/χ^2 n̅∫^∞_0χ'χ'^21/√(2π)σ_u ^-(χ-χ')^2/2σ_u^2 . That the mean density⟨ n_s()⟩is a position-dependent quantity may seem strange at first, but this is just a consequence of the loss of statistical homogeneity in the wide-angle regime, with the observer representing a preferred location in space. 
Indeed, in the distant-observer limit, where homogeneity is retained, thisχdependence drops out: ifχ≫σ_u, then the Gaussian in the integrand is sharply peaked aroundχ'=χ, so that the integral evaluates to approximatelyχ^2. Therefore⟨ n_s⟩→n̅asχ/σ_u→∞, so that it is perfectly valid to take⟨ n_s⟩=n̅in this limit. But short of this limit there are corrections, which decrease with depth. Fortunately, convergence to this limit is rapid. Quantitatively, with theΛCDM valueσ_u=5.8 h^-1Mpc(corresponding to a velocity dispersion of about300km s^-1), we find for depthsχ≥100 h^-1Mpc(orz≥0.023) that the deviations from⟨ n_s()⟩/n̅=1are≤0.3%, i.e. small in most situations of interest. There are of course corrections to these estimates from non-Gaussianity due to nonlinear gravitational evolution. These corrections are largest on small scalesk∼1/χ. On intermediate scalesk∼0.1 h Mpc^-1, where nonlinearities begin to be important, we expect perturbatively small departures from Gaussianity. This translates to non-Gaussianities becoming important at depthsχ≃60 h^-1Mpc(z∼0.01) or shallower, i.e. small or negligible by the time we reach the convergence scaleχ≃100 h^-1Mpc. This means that, unless one's sample contains very local galaxies, there seems little harm in taking⟨ n_s()⟩=n̅(though one can always include the corrections should they be wanted). § RECOVERING THE STANDARD GAUSSIAN STREAMING MODEL OF THE DISTANT-OBSERVER LIMIT In this appendix we verify that the usual distant-observer streaming model (<ref>) is recovered as a special case of the full-sky streaming model (<ref>). Clearly we must end up with one less integral, leaving an integral over the separationr_. However, this is not as straightforward as simply taking the distant-observer limit,χ_1,χ_2→∞and_1→_2. It turns out to be convenient to centre the coordinates on the redshift-space positions, with the coordinate transformationχ_1'=χ_1-q_1andχ_2'=χ_2-q_2, orχ'=χ-q. Equation (<ref>) then reads 1+ξ_s(χ_1,χ_2,_1·_2) =∫^χ_1_-∞ q_1 (1-q_1/χ_1)^2 ∫^χ_2_-∞ q_2 (1-q_2/χ_2)^2 [1+ξ(r)] ×1/2π|C()̊|^1/2exp( -1/2(-μ()̊)^C^-1()̊(-μ()̊) ) . (Here we have ignored for simplicity the selection effects and the lookback time so thatμandCdepend on a triangle configuration parametrised by$̊, i.e. we are working on a constant-time hypersurface.) In the limit χ_1,χ_2→∞ the first two factors in parentheses tend to unity (noting that at large q_1,q_2 these factors become irrelevant as the Gaussian rapidly takes the whole integrand to zero). Thus, setting these factors to unity, and doing some straightforward matrix algebra, the foregoing expression becomes 1+ξ_s =∫^∞_-∞ q_1 ∫^∞_-∞ q_2 1+ξ(r)/2πσ^2√(1-ρ^2) exp( -1/2(Δ_1-Δ_2)^2+2(1-ρ)Δ_1Δ_2/σ^2(1-ρ^2)) , where as shorthands Δ_1≡ q_1-μ_1 and Δ_2≡ q_2-μ_2, while C_11=C_22=σ^2 and C_12=ρσ^2, where σ=σ()̊ and ρ=ρ()̊ (or functions of q_1 and q_2). Defining the line-of-sight separation in real and redshift space, r_=χ_1'-χ_2' and s_=χ_1-χ_2, and the mid-points r̅_=(χ_1'+χ_2')/2 and s̅_=(χ_1+χ_2)/2, we have q_1-q_2=s_-r_ and (q_1+q_2)/2=s̅_-r̅_, which implies q_1=s̅_-r̅_+12(s_-r_) and q_2=s̅_-r̅_-12(s_-r_). (Note that s_ and s̅_ are fixed by the redshift-space configuration.) 
Making another change of coordinates, (q_1,q_2) to (r_,r̅_), we have after some more algebra 1+ξ_s(s_,s_⊥) =∫^∞_-∞ r_ 1+ξ(r)/2πσ^2√(1-ρ^2) exp( -1/2(s_-r_-u_12)^2/2σ^2(1-ρ)) ∫^∞_-∞r̅_ exp( -(s̅_-r̅_)^2/σ^2(1+ρ)) , where we recognised that μ_1-μ_2 =⟨ u_(_1)⟩_δ,c-⟨ u_(_2)⟩_δ,c≡ u_12, and noted that u_12, σ, and ρ depend on r_, but not r̅_, hence the last integral. Here we have r^2=r_^2+r_⊥^2 and r_⊥=s_⊥. Upon doing the last (Gaussian) integral over r̅_ and noting that 2σ^2(1-ρ)=2C_11-2C_12=⟨(Δ u_)^2⟩_δ,c≡σ_12^2, we hence recover Eq. (<ref>), the usual Gaussian streaming model, i.e. in the distant-observer limit. § LINEAR THEORY MULTIPOLES In this appendix we calculate the contributions to the multipoles from wide-angle effects and inverse-distance terms (from the selection function), filling in some of the details of Section <ref>. We will compute from Eq. (<ref>) the leading-order wide-angle corrections, i.e. at 𝒪(ϵ), and will consider the cross-correlation of two different tracers, described by linear bias and . For this calculation it is convenient to start with Eq. (<ref>), which for the cross-correlation function is given by simply replacing δ(_1)→δ(_1) and δ(_2)→δ(_2): 1+ξ_s(s,μ,d) =1/χ_1^2χ_2^2∫χ_1'^2χ_1'∫χ_2'^2χ_2' ∫^2κ/(2π)^2 ^-κ·(χ-χ')⟨(1+δ_1)(1+δ_2)^κ·w⟩ . Here we have used the shorthands δ_1=δ(_1) and δ_2=δ(_2), and as before w=(_1·(̆_1),_2·(̆_2)). For convenience we write U_i()̊≡⟨ u_i(_1)δ(_2)⟩ and Ψ_ij()̊=⟨ u_i(_1)u_j(_2)⟩ for the velocity–density and velocity–velocity correlation functions. In terms of the (linear) power spectra, U_i() =∫^3/(2π)^3 ^-·̨ k_i/k^2 P_θδ(k) =-∂_i∫k^2 k/2π^2 1/k^2 j_0(ks) P_θδ(k) , Ψ_ij() =∫^3/(2π)^3 ^-·̨ k_i/k^2- k_j/k^2 P_θθ(k) =-∂_i∂_j ∫k^2 k/2π^21/k^4 j_0(ks) P_θθ(k) , where θ is the velocity divergence, ∂_i=∂/∂ s_i, and we have used that u_i= k_i/k^2θ (for a potential flow). Recall that the separation =̊_1-_2 is given in terms of the radial distances as (̊χ')=χ_1'_1-χ_2'_2, and in redshift space (χ)=χ_1_1-χ_2_2=(̊χ). To evaluate Eq. (<ref>), we expand the generator and keep only up to quadratic terms: ⟨(1+δ_1)(1+δ_2)^κ·w⟩ ≃1+ξ(r)+κ_a⟨ w_a(δ_1+δ_2)⟩ -12κ_aκ_b⟨ w_a w_b⟩ =1+ξ(r)+(κ_1n̂_1^i-κ_2n̂_2^i) U_i()̊ -κ_1κ_2n̂_1^in̂_2^jΨ_ij()̊ , where we have dropped zero-lag terms since they are absent in the linear predictions. Here we have used that ⟨_̆2δ_1⟩=-⟨_̆1δ_2⟩, and ⟨_̆1δ_1⟩=⟨_̆2δ_2⟩=0, which follow from isotropy of the underlying fields. We will now evaluate Eq. (<ref>) using the expansion (<ref>). This is a two-step calculation: first evaluate the κ integral to yield a Dirac delta function, then evaluate the radial integrals. First, focus on the U_i term in Eq. (<ref>); applying the κ_i integral on this we have ∫^2κ/(2π)^2 ^-κ·(χ-χ') (κ_1_1-κ_2_2)·()̊ =()̊·(_2∂/∂χ_2 -_1∂/∂χ_1) (χ-χ') . Inserting this back into the line-of-sight integrals and doing the integration with the help of the delta functions, we obtain 1/χ_1^2χ_2^2(∂/∂χ_2_2 -∂/∂χ_1_1) ·χ_1^2χ_2^2() =1/χ_2^2∂/∂χ_2(χ_2^2)·_2 -1/χ_1^2∂/∂χ_1(χ_1^2)·_1 . Note that ∂ s/χ_1=_1·∇ s=_1· and ∂ s/χ_2=-_2·∇ s=-_2·. Next, the Ψ_ij term in Eq. (<ref>); plugging this into the κ_i integral gives -1/2∫^2κ/(2π)^2 ^-κ·(χ-χ') κ_1κ_2n̂_1^in̂_2^jΨ_ij()̊ =Ψ_ij()̊n̂_1^in̂_2^j∂/∂χ_1∂/∂χ_2(χ-χ') . Inserting this back into Eq. (<ref>) and doing the radial integrals we have 1/χ_1^2∂/∂χ_1χ_1^2 1/χ_2^2∂/∂χ_2χ_2^2 Ψ_ij()n̂_1^in̂_2^j = (∂/∂χ_1∂/∂χ_2 +2/χ_1∂/∂χ_2 +2/χ_2∂/∂χ_1 +2/χ_12/χ_2)Ψ_ij()n̂_1^in̂_2^j , where the second and third term on the right-hand side are order /k, while the last is order (/k)^2. 
Altogether we have ξ_s(χ_1,χ_2,_1·_2) =ξ(s) -1/χ_1^2∂/∂χ_1χ_1^2 U_in̂_1^i +1/χ_2^2∂/∂χ_2χ_2^2 U_in̂_2^i +1/χ_1^2∂/∂χ_1χ_1^2 1/χ_2^2∂/∂χ_2χ_2^2 Ψ_ijn̂_1^in̂_2^j , where ξ, U_i and Ψ_ij depend on χ_1 and χ_2 through , and we remember that ∂n̂^i/∂χ=0, i.e. lines of sight are always constant with respect to their radial derivatives. This equation is the wide-angle formula for the linear correlation function and as we just saw the last derivative produces an order (/k)^2 term that we will henceforth ignore. Evaluating the derivatives in Eq. (<ref>) yields Eq. (<ref>) in the main text. We now move onto computing the wide-angle corrections. For this it is convenient to switch to Cartesian coordinates, noting that for any function f(), with =_1-_2, we have by the chain rule ∂ f/∂χ_1=n̂_1^j∂_j f and ∂ f/∂χ_2=-n̂_2^j∂_j f, since ∂ s_1^j/∂χ_1=n̂_1^j and ∂ s_2^j/∂χ_2=n̂_2^j. With these, Eq. (<ref>) becomes ξ_s(χ_1,χ_2,_1·_2) =ξ(s) -(n̂_1^in̂_1^j+n̂_2^in̂_2^j)∂_i U_j -n̂_1^in̂_2^jn̂_1^kn̂_2^l∂_k∂_lΨ_ij -2/χ_1 U_in̂_1^i +2/χ_2 U_in̂_2^i +(2/χ_1n̂_1^k -2/χ_2n̂_2^k)∂_kΨ_ijn̂_1^in̂_2^j , where as a shorthand ∂_i=∂/∂ s_i. Note that when = the wide-angle corrections enter terms in the first line at second order in ϵ, and when ≠ they enter at first order in ϵ. It now remains to compute the multipoles. We will first compute the zeroth-order multipoles, i.e. the usual Kaiser multipoles, then the first-order multipoles that are associated with the wide-angle contributions. The following derivatives will be needed: 1/k^2 ∂_m∂_n j_0(ks) =-j_1(ks)/ksδ_mn + j_2(ks)ŝ_mŝ_n , 1/k^3 ∂_j∂_m∂_n j_0(ks) =j_2(ks)/ks(ŝ_jδ_mn+2 perm.) -j_3(ks)ŝ_jŝ_mŝ_n , 1/k^4 ∂_i ∂_j ∂_m ∂_n j_0(ks) =j_2(ks)/(ks)^2(δ_ijδ_mn + 2 perm.) - j_3(ks)/ks(ŝ_iŝ_jδ_mn + 5 perm.) + j_4(ks)ŝ_iŝ_jŝ_mŝ_n , where ∂_i≡∂/∂ s_i. Note that (2ℓ+1) j_ℓ(x)/x=j_ℓ-1(x)+j_ℓ+1(x). §.§ Distant-observer limit To recover the Kaiser multipoles, set _1=_2==(0,0,1). Then Eq. (<ref>) simplifies to ξ_s(χ_1,χ_2,_1·_2) =ξ(s) -(+)∂_3 U_3 -∂_3^2Ψ_33 -(2/χ_1-2/χ_2) U_3 +(2/χ_1-2/χ_2)∂_3Ψ_33 . In the distant-observer limit, we can immediately discard all terms order ϵ and higher, namely, the last two terms in Eq. (<ref>), since with χ∼ d and U∼∂Ψ∼ s they are 𝒪(ϵ). The remaining terms evaluate to Eqs. (<ref>) and (<ref>) in the main text, and from these equations it is straightforward to assemble the Kaiser multipoles (<ref>). Note that for the auto-correlation function (=), wide-angle effects enter the multipoles at ϵ^2, not order ϵ. This is only true in the mid-point (and bisector) parametrisations, however. §.§ Wide-angle contributions at leading order Referring back to Eq. (<ref>), wide-angle corrections enter the terms U and ∂ U at order ϵ. By contrast, wide-angle corrections enter the ∂Ψ and ∂^2Ψ terms at order ϵ^2, so do not need to be considered further in this leading-order calculation. The fact that the corrections are not all second order is a consequence of the bias parameters spoiling invariance of the correlations under pair interchange. We thus need only focus on the U_i terms. To develop Eq. (<ref>) further we use the leading-order expressions _1=+ϵ/2n^(1) and _2=-ϵ/2n^(1), where n^(1)=-√(1-μ^2). For the ∂_i U_j term in Eq. (<ref>) we have, with the help of Eq. (<ref>), (n̂_1^in̂_1^j+n̂_2^in̂_2^j)∂_i U_j =(+)∫k^2 k/2π^2 (1/3 j_0(ks) - 2/3 j_2(ks)ℒ_2(μ)) P_θδ(k) +2/5ϵ(-)∫k^2 k/2π^2 (-ℒ_1(μ)+ℒ_3(μ)) j_2(ks) P_θδ(k) , where the second term in this expression is the wide-angle correction (which agrees with equations 52 and 53 in Ref. 
<cit.>; see also equation 3.19 in Ref. <cit.>). For the U_i terms in Eq. (<ref>) we use Eq. (<ref>) and contract with the appropriate line of sight. The result is Eq. (<ref>). Gathering these results together, it is a straightforward exercise to construct the multipoles (<ref>). § END-POINT PARAMETRISATION The end-point parametrisation is less symmetric than the mid-point parametrisation (it induces odd multipoles) but is often preferred for practical reasons, e.g. for power-spectrum estimation <cit.>. For completeness, in this appendix we derive the relation between the multipoles in the mid-point parametrisation (used in this work) and that in the end-point parametrisation, denoted ξ_ℓ and ξ_ℓ^ep, respectively. In general, the (cosine of the) angular separation can be defined as μ=d̂·. In the mid-point parametrisation d̂=, while in the end-point parametrisation d̂=_1 (or alternatively d̂=_2); see Fig. <ref>. Based on a trigonometric analysis of Fig. <ref>, we find that the separations are related by μ=μ_ep+ϵ/2(μ'^2-1)+𝒪(ϵ^2) , μ_ep=μ-ϵ/2(μ^2-1)+𝒪(ϵ^2) . The expansion parameter in the end-point parametrisation is ϵ_ep≡ s/s_1 and ϵ_ep=ϵ+𝒪(ϵ^2); since we will be working to leading order we may use ϵ_ep or ϵ interchangeably. The relation between Legendre polynomials in the mid-point and end-point parametrisations is at leading order (for ℓ≥1) ℒ_ℓ(μ) =ℒ_ℓ(μ_ep+ϵ/2(μ_ep^2-1)) ≃ℒ_ℓ(μ_ep)+ϵ/2(μ_ep^2-1) ℒ_ℓ/μ_ep =ℒ_ℓ(μ_ep) +ϵ/2ℓ(μ_epℒ_ℓ(μ_ep)-ℒ_ℓ-1(μ_ep)) , where in the last equality we have used the recursion relation (x^2-1)ℒ_ℓ/ x =ℓ(xℒ_ℓ(x)-ℒ_ℓ-1(x)). Thus, at leading order in ϵ we have ξ_s(s,μ) ≡∑_ℓξ^(0)_ℓ(s)ℒ_ℓ(μ) =∑_ℓξ^(0)_ℓ(s) [ℒ_ℓ(μ_ep) +ϵ/2ℓ(μ_epℒ_ℓ(μ_ep)-ℒ_ℓ-1(μ_ep)) ] . Therefore, the multipoles in the mid-point parametrisation are related to those in the end-point parametrisation by ξ_ℓ^ep(s) ≡2ℓ+1/2∫^1_-1μ_ep ℒ_ℓ(μ_ep) ξ_s(s,μ) =ξ^(0)_ℓ(s)+ϵ∑_ℓ'M_ℓℓ' ξ^(0)_ℓ'(s) +𝒪(ϵ^2) , where the coupling coefficients are given by M_ℓℓ'≡1/22ℓ+1/2∫^1_-1μ_ep ℒ_ℓ(μ_ep)ℓ' (μ_epℒ_ℓ'(μ_ep)-ℒ_ℓ'-1(μ_ep)) . The nonzero coefficients are M_12=-3/5, M_32=3/5, and M_34=-10/9, i.e. the only induced multipoles (at order ϵ) are for ℓ=1,3, the dipole and octupole: ξ^ep(1)_1=-3/5 ξ^(0)_2 , ξ^ep(1)_3=3/5 ξ^(0)_2-10/9 ξ^(0)_4 , i.e. there is a leakage of the even multipoles into the odd multipoles. This expression agrees with equation 4.14 in Ref. <cit.> upon inserting the Kaiser multipoles (<ref>).
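The coupling coefficients quoted above are easy to verify numerically; the short sketch below (not part of the original derivation) evaluates the M_ℓℓ' integral by Gauss–Legendre quadrature and recovers M_12 = -3/5, M_32 = 3/5 and M_34 = -10/9.
```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import eval_legendre

mu, w = leggauss(20)          # 20-point Gauss-Legendre rule on [-1, 1]

def M(l, lp):
    """Coupling coefficient M_{l,l'} defined by the mu_ep integral above."""
    integrand = eval_legendre(l, mu) * lp * (mu * eval_legendre(lp, mu)
                                             - eval_legendre(lp - 1, mu))
    return 0.5 * (2 * l + 1) / 2.0 * np.sum(w * integrand)

for (l, lp), expected in {(1, 2): -3/5, (3, 2): 3/5, (3, 4): -10/9}.items():
    print(f"M_{l}{lp} = {M(l, lp):+.6f}  (expected {expected:+.6f})")
```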
http://arxiv.org/abs/2307.01105v1
20230703153255
A phase field-based framework for electro-chemo-mechanical fracture: crack-contained electrolytes, chemical reactions and stabilisation
[ "T. Hageman", "E. Martínez-Pañeda" ]
cs.CE
[ "cs.CE", "physics.app-ph", "physics.chem-ph" ]
[mycorrespondingauthor]Corresponding author e.martinez-paneda@imperial.ac.uk Department of Civil and Environmental Engineering, Imperial College London, London SW7 2AZ, UK We present a new theoretical and computational framework for modelling electro-chemo-mechanical fracture. The model combines a phase field description of fracture with a fully coupled characterisation of electrolyte behaviour, surface chemical reactions and stress-assisted diffusion. Importantly, a new physics-based formulation is presented to describe electrolyte-containing phase field cracks, appropriately capturing the sensitivity of electrochemical transport and reaction kinetics to the crack opening height. Unlike other existing methods, this approach is shown to accurately capture the results obtained with discrete fracture simulations. The potential of the electro-chemo-mechanical model presented is demonstrated by particularising it to the analysis of hydrogen embrittlement in metallic samples exposed to aqueous electrolytes. The finite element implementation takes as nodal degrees-of-freedom the electrolyte potential, the concentrations of relevant ionic species, the surface coverage, the concentration of diluted species, the displacement field and the phase field order parameter. Particular attention is devoted to improve stability and efficiency, resulting in the development of strategies for avoiding ill-constrained degrees of freedom and lumped integration schemes that eliminate numerical oscillations. The numerical experiments conducted showcase the ability of the model to deliver assumptions-free predictions for systems involving both free-flowing and crack-contained electrolytes. The results obtained highlight the role of electrolyte behaviour in driving the cracking process, evidencing the limitations of existing models. Phase Field, Fracture mechanics, Computational electrochemistry, Hydrogen Embrittlement, Finite Element Method § INTRODUCTION Many problems of technological significance are driven by the coupling between electrochemistry and mechanics. In stress corrosion cracking, cracks nucleate and grow through a combination of mechanical loads and corrosion reactions <cit.>. In the context of Li-Ion batteries, the development of cracks due to chemical strains results in degradation of battery performance and capacity <cit.>. In metals exposed to hydrogen-containing environments, one observes a remarkable reduction in ductility and fracture resistance as a result of hydrogen ingress and the associated embrittlement <cit.>. These sets of problem are characterised by their strongly coupled nature. Take for example the case of metal embrittlement due to hydrogen uptake from aqueous electrolytes. The failure load is governed by the local magnitude of the mechanical fields and of the concentration of hydrogen dissolved in the metal, which are themselves coupled (e.g., hydrogen accumulates in regions of high hydrostatic stress) and dependent on the geometry of the crack. Moreover, hydrogen ingress into the metal is governed by the near-surface stress state, the electrochemical reaction rates at the electrolyte-metal interface, and the electrochemical behaviour of the electrolyte, with all of these items being dependent on the defect geometry while at the same time influencing the defect morphology evolution. For example, the defect dimensions (e.g., crack opening height) will have a major influence on the local chemistry and electrolyte behaviour, which in turn will affect hydrogen uptake. 
Thus, predicting electro-chemo-mechanical fracture phenomena requires developing models capable of resolving all the coupled physical processes taking place. In this work, we present a new phase field-based model for electro-chemo-mechanical fracture that incorporates all the relevant physical stages. Namely, our theoretical and computational framework handles: (i) the electrochemical behaviour of electrolytes, predicting electrolyte potential distribution and the transport of ionic species due to diffusion and migration; (ii) the chemical reactions occurring at the electrolyte-electrode interface, with the associated kinetic effects and their dependence on electrolyte and surface conditions; (iii) the ingress of diluted species into the material and its diffusion within the solid; (iv) the deformation of the solid, and its coupling with the bulk transport of diluted species; (v) the nucleation and growth of cracks, which can be facilitated by the presence of dilute species; and (vi) a suitable treatment of electrolytes within cracks and other occluded environments. Importantly, computational procedures that can significantly facilitate numerical convergence and stability are also presented . The framework and associated computational schemes are universal but, for demonstration purposes, constitutive choices are made that particularise the model to the simulation of hydrogen embrittlement in metals exposed to aqueous electrolytes. Chemo-mechanical models exist that can predict the transport of dissolved hydrogen within a metal, resolving the interplay between mechanical fields and hydrogen diffusion <cit.>. However, these models adopt as boundary condition a constant hydrogen concentration (or chemical potential) at the metal surface, an approach that requires making strong assumptions about the electrolyte conditions, and that has been shown to deliver highly inaccurate predictions <cit.>. More comprehensive, electro-chemo-mechanical models have been recently proposed that attempt at resolving the kinetics of the hydrogen evolution reaction and accurately predict hydrogen ingress by computationally resolving the electrolyte behaviour <cit.>. However, these models only deal with stationary defects. Several methodologies exist to handle propagating cracks and these have been adopted in the hydrogen embrittlement community. Cohesive zone modelling schemes <cit.> and phase field fracture models <cit.> have been especially popular. The latter are particularly promising due to their additional modelling capabilities; by indicating the presence of fractured surfaces through an indicator function, the crack path is represented as an additional field, greatly increasing the flexibility and simplicity of the computations <cit.>. As a result, this approach has gained notable popularity since its development, and has been applied to a large range of materials and damage phenomena such as ductile fracture <cit.>, metallic fatigue <cit.>, functionally graded materials <cit.>, composites <cit.>, shape memory alloys <cit.>, and iceberg calving <cit.>. While phase field fracture modelling has been widely embraced in the hydrogen embrittlement community (see, e.g. <cit.> and Refs. therein), all studies to date require a priori knowledge of the hydrogen surface concentration for a given environment. The development of a fully coupled electro-chemo-mechanical framework would eliminate assumptions and deliver predictions purely as a function of the environment, the material and the loading conditions. 
However, this requires tackling multiple computational challenges. When developing a fully predictive framework for electro-chemo-mechanical fracture, one aspect that requires careful consideration is the treatment of the aqueous electrolyte solution inside of cracked domains. For example, electrolytes acidify in occluded geometries such as pits and cracks, where pH values can change by 80% depending on the defect geometry, which significantly enhances hydrogen uptake <cit.>. The need to accurately estimate crack openings poses a challenge for smeared modelling approaches such as phase field fracture as the crack is not explicitly represented. Here, one can take inspiration from the work conducted on the area of hydraulic fracture. While the highly conductive fractures assumption is occasionally used when simulating pressurised cracks <cit.>, a more common strategy is to base the diffusivity of the fluid on the opening of the cracks, prescribing fluid fluxes based on simplified relations <cit.>. These formulations reconstruct the crack opening height to impose a physically realistic fluid flow profile, and thereby increase the accuracy of the overall simulations. While not directly applicable to electrochemical transport within cracks, the manner in which the coupling between the displacement of the solid material and the state of the fluid within cracks is introduced could act as a basis to develop a rigorous scheme for electro-chemo-mechanical simulations. Here, we present a physics-based approach that enables connecting the crack height with the electrolyte behaviour. Other computational aspects, such as the use of a lumped integration scheme for improving stabilisation and robustness, are also discussed. The remainder of this paper is arranged as follows. First, in Section <ref>, we present our electro-chemo-mechanical framework encompassing electrolyte behaviour, surface chemical reactions, hydrogen uptake and diffusion in the metal, mechanical deformation, and a phase field description of fracture with a suitable electrolyte-crack treatment. There, we also introduce our new physics-based approach for describing electrolytes contained within phase field cracks. Then, in Section <ref>, we describe the numerical implementation of our theory, emphasising the handling of the couplings, convergence criteria, the strategies adopted for the prevention of ill-constrained degrees-of-freedom, and the lumped integration scheme adopted to improve numerical stabilisation. The results obtained are given in Section <ref>. First, we examine the abilities of our new physics-based formulation for handling electrolytes within phase field cracks, comparing it with existing approaches <cit.> and with discrete fracture simulations. Then, fully coupled electro-chemo-mechanical predictions are obtained for boundary value problems of particular interest, showcasing the ability of the model to capture fracture phenomena involving free-flowing and crack-contained electrolytes. Finally, concluding remarks end the paper in Section <ref>. § A THEORY FOR ELECTRO-CHEMO-MECHANICAL FRACTURE A domain Ω is considered, consisting of a metal domain Ω_s, and an electrolyte domain Ω_e, as shown in <ref>. Cracks are present in the metal domain, with the electrolyte contained within these cracks included as an ad-hoc formulation that builds upon the definition of a domain Ω_f for the crack-electrolyte region. The displacement field in the solid is denoted by 𝐮. 
Within the metal, hydrogen is dissolved at the interstitial lattice sites, with the interstitial lattice hydrogen concentration given by C_L. The presence of fractures is described using a phase field order parameter ϕ, with ϕ=0 denoting intact material points and ϕ=1 a locally fully fractured state. The state of the electrolyte in the Ω_f and Ω_e domains is described by the variables φ and C_π, which respectively denote the electric potential of the electrolyte and the concentrations of ionic species π=H^+,OH^-,Fe^2+,FeOH^+,Na^+, Cl^-. These species are chosen as representative of a conductive electrolyte (e.g., sea water). Finally, the hydrogen coverage of the metal-electrolyte interface is given by θ_ads. Thus, the behaviour of the electro-chemo-mechanical system is described by means of 12 field quantities, which are coupled together through physical phenomena; their governing equations and associated couplings are provided below. We proceed to describe the governing equations describing behaviour in the solid domain (Section <ref>) and the electrolyte domain (Section <ref>). In addition, the treatment of the electrolyte contained within cracks is extensively discussed in Section <ref>, presenting our approach to consistently handle this important aspect of the model. §.§ Solid domain The solid domain consists of a metal deforming under the small strain assumption, such that the strain tensor ε is given by, ε = 1/2(∇^T 𝐮 + ∇𝐮) The solid can contain or develop cracks and, accordingly, the total potential energy of the solid includes stored and fracture energy contributions, such that Π = ∫_Ω_sψ dΩ_s + ∫_Γ_d G_c dΓ_d where ψ is the stored (elastic) strain energy density and G_c denotes the critical energy release rate or material toughness, which is dependent on the lattice hydrogen concentration; G_c(C_L). To describe the evolution of cracks, we adopt a smeared representation based on phase field fracture formulations <cit.>. Accordingly, a damage function d(ϕ) is defined to capture the degradation of the material and the fracture energy is regularised using a so-called crack density function, which is a function of the phase field and its gradient, γ (ϕ, ∇ϕ). Thus, a quantity □ is distributed over a region within Ω_s, such that ∫_Γ_d□ dΓ_d = ∫_Ω_sγ( ϕ, ∇ϕ) □ dΩ_s where γ( ϕ, ∇ϕ) = 1/2ℓϕ^2+ℓ/2|∇ϕ|^2 with the length scale ℓ controlling the width of this region. Using this distribution function, a regularised form of <ref> can be written as: Π = ∫_Ω_s d(ϕ)ψ_0 +γ( ϕ, ∇ϕ) G_c(C_L) dΩ_s which is then used to obtain the strong forms of momentum balance and fracture evolution of the metal as: 0 = ∇·∂Π/∂ε = ∇·(d(ϕ) 𝒟:ε) 0=∇·∂Π/∂∇ϕ+∂Π/∂ϕ = G_c(C_L) ( 1/ℓϕ - ℓ∇^2ϕ) + ∂ d(ϕ)/∂ϕψ_0 which assumes the non-fractured metal to behave as a linear-elastic solid with stiffness tensor 𝒟. To enforce the irreversibility of the phase field parameter, a history variable ℋ is introduced <cit.>, transforming <ref> into: ϕ/ℓ - ℓ∇^2ϕ = - ∂ d(ϕ)/∂ϕℋ where ℋ = ψ_0/G_c(C_L), ℋ̇>=0 Since the fracture energy is a function of the lattice concentration, it is included inside the definition of the history variable to prevent the phase field variable from decreasing when the lattice hydrogen concentration decreases. Throughout this work, a quadratic degradation function will be used, such that d(ϕ) = k_0 + (1-k_0) (1-ϕ)^2 where k_0=10^-10 is a residual stiffness, introduced to prevent the mechanical sub-problem from becoming ill-posed. Following Martínez-Pañeda et al. 
<cit.>, the degradation of the material toughness with increasing hydrogen concentration is defined as, G_c(C_L) = G_c0(1 - χ (C_L/N_L)/(C_L/N_L + exp(-Δ g_b/RT))) where G_c0 is the material toughness in the absence of hydrogen, Δ g_b is the binding energy of the critical interfaces, χ is the hydrogen degradation factor, N_L is the concentration of interstitial lattice sites, R is the gas constant, and T is the temperature. It remains to describe the stress-assisted diffusion of hydrogen within the bulk metal. To this end, a dissolved hydrogen chemical potential is defined as, μ = μ_0 + RT ln(θ_L/(1-θ_L)) - V_Hσ_H where μ_0 is the reference chemical potential, θ_L = C_L/N_L is the occupancy of interstitial lattice sites, V_H is the partial molar volume of hydrogen, and σ_H is the hydrostatic stress, which is defined as a function of the Cauchy stress tensor as σ_H=tr(d(ϕ)σ_0)/3. Dissolved hydrogen atoms can diffuse freely through the crystal lattice or be sequestered at microstructural trap sites such as carbides, grain boundaries or dislocations <cit.>. Accordingly, a concentration of hydrogen in trap sites is defined as C_T, such that the total concentration equals C=C_L+C_T. Considering that a metal can contain multiple trap types, the mass balance is given by, 0 = Ċ_L + ∑_trapsĊ_T + ∇·𝐣_L where a diffusive flux 𝐣_L=-D_L C_L/RT ∇μ is considered. Assuming equilibrium between the hydrogen in trapping sites and in interstitial sites, these two concentrations are related through <cit.>: C_T/N_T = (C_L/N_L) exp(Δ g_T/RT)/(1+(C_L/N_L) exp(Δ g_T/RT)) with trapping site concentration N_T, and binding energy of the trapping site Δ g_T. Substituting the chemical potential, <ref>, and the relation between lattice and trapped hydrogen, <ref>, into the mass balance equation <ref>, results in 0 = ( 1+∂Ċ_T/∂Ċ_L)Ċ_L - ∇·(D_L C_L/RT ∇μ) = ( 1+(N_T/N_L) exp(Δ g_b/RT)/(C_L/N_L+exp(Δ g_b/RT))^2) Ċ_L - ∇·(D_L/(1-C_L/N_L) ∇ C_L) + ∇·( D_L C_L V_H/RT ∇σ_H) where D_L is the lattice diffusion coefficient. For simplicity, we consider only one type of trapping site - grain boundaries, with binding energy Δ g_T = Δ g_b, which are also taken to be the critical interface governing the fracture resistance of the material, <ref>. §.§ Electrolyte domain Let us now consider the equations describing the behaviour of electrolytes. The ions within the electrolyte are conserved using the Nernst-Planck mass balance: 0 = Ċ_π - ∇·(D_π∇ C_π) - z_π F/RT∇·(D_π C_π∇φ) + R_π using the Faraday constant F and volume reaction rate R_π. This describes the transport of each of the π ions with diffusion coefficient D_π, driven by gradients in the concentration, and for ions with charge z_π by gradients in electric potential φ within the electrolyte. In addition, the conservation of electric current through the electroneutrality condition is used to provide the π+1 equation needed <cit.>: 0 = ∑_π z_π C_π For the reactions within the electrolyte, we include the water auto-ionisation reaction: H_2O [k_w']k_w H^+ + OH^- with reaction rates: R_H^+,w = R_OH^- = k_w C_H_2O - k_w' C_H^+ C_OH^- = k_eq(K_w - C_H^+ C_OH^-) and the hydrolysis of the metal ions: Fe^2+ + H_2O [k_fe']k_fe FeOH^+ + H^+ and FeOH^+ + H_2O [k_feoh] Fe(OH)_2 + H^+ with reaction rates: R_Fe^2+ = -k_fe C_Fe^2+ + k_fe' C_FeOH^+ C_H^+, R_FeOH^+ = k_fe C_Fe^2+ - C_FeOH^+(k_feoh + k_fe' C_H^+), and R_H^+,fe = k_fe C_Fe^2+ - C_FeOH^+(k_fe' C_H^+ - k_feoh). These reactions use forward and backward reaction constants k and k', with the hydrolysis reactions assumed to occur slowly, while the auto-ionisation reaction is assumed to always be in equilibrium. 
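For illustration, the volume reaction terms above can be evaluated with a minimal sketch such as the one below (Python). The function name and the numerical rate constants are our own placeholders, not the calibrated values used later (those are listed in Table <ref>); concentrations are taken in mol/m^3 so that K_w carries units of mol^2/m^6.

```python
def bulk_reaction_rates(C_H, C_OH, C_Fe, C_FeOH,
                        k_eq=1.0e3, K_w=1.0e-8,      # auto-ionisation penalty and equilibrium constants
                        k_fe=1.0e-1, k_fe_b=1.0e-3,  # Fe2+ hydrolysis, forward / backward
                        k_feoh=1.0e-3):              # FeOH+ -> Fe(OH)2 + H+
    """Volume reaction terms R_pi for the bulk electrolyte reactions (illustrative constants)."""
    # Water auto-ionisation, penalty form driving C_H * C_OH towards K_w
    R_w = k_eq * (K_w - C_H * C_OH)
    # Hydrolysis of dissolved iron
    R_Fe   = -k_fe * C_Fe + k_fe_b * C_FeOH * C_H
    R_FeOH =  k_fe * C_Fe - C_FeOH * (k_feoh + k_fe_b * C_H)
    R_H_fe =  k_fe * C_Fe - C_FeOH * (k_fe_b * C_H - k_feoh)
    return {"H+": R_w + R_H_fe, "OH-": R_w, "Fe2+": R_Fe, "FeOH+": R_FeOH}

# Example call with an arbitrary near-neutral, iron-containing state (mol/m^3)
print(bulk_reaction_rates(C_H=1e-4, C_OH=1e-4, C_Fe=1.0, C_FeOH=0.1))
```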
This equilibrium is enforced in <ref> by using the equilibrium constant K_w = k_w C_H_2O/k_w' = 10^-8 mol^2/m^6 and by adopting a sufficiently high penalty-like reaction constant k_eq. The reactions between the metal surface and the electrolyte are given through the hydrogen evolution reaction (composed of the Volmer, Tafel, Heyrovsky, and absorption reaction steps) and the corrosion reaction <cit.>:
Volmer (acid): H^+ + M + e^- [k_Va']k_Va MH_ads
Heyrovsky (acid): H^+ + e^- + MH_ads [k_Ha']k_Ha M + H_2
Volmer (base): H_2O + M + e^- [k_Vb']k_Vb MH_ads + OH^-
Heyrovsky (base): H_2O + e^- + MH_ads [k_Hb']k_Hb M + H_2 + OH^-
Tafel: 2 MH_ads [k_T']k_T 2M + H_2
Absorption: MH_ads [k_A']k_A MH_abs
Corrosion: Fe^2+ + 2e^- [k_c']k_c Fe
with their respective forward (ν) and backward (ν') reaction rates given by:
Volmer (acid): ν_Va = k_Va C_H^+ (1-θ_ads) exp(-α_Va ηF/RT), ν_Va' = k_Va' θ_ads exp((1-α_Va) ηF/RT)
Heyrovsky (acid): ν_Ha = k_Ha C_H^+ θ_ads exp(-α_Ha ηF/RT), ν_Ha' = k_Ha' (1-θ_ads) p_H_2 exp((1-α_Ha) ηF/RT)
Volmer (base): ν_Vb = k_Vb (1-θ_ads) exp(-α_Vb ηF/RT), ν_Vb' = k_Vb' C_OH^- θ_ads exp((1-α_Vb) ηF/RT)
Heyrovsky (base): ν_Hb = k_Hb θ_ads exp(-α_Hb ηF/RT), ν_Hb' = k_Hb' (1-θ_ads) p_H_2 C_OH^- exp((1-α_Hb) ηF/RT)
Tafel: ν_T = k_T |θ_ads| θ_ads, ν_T' = k_T' (1-θ_ads) p_H_2
Absorption: ν_A = k_A (N_L - C_L) θ_ads, ν_A' = k_A' C_L (1-θ_ads)
Corrosion: ν_c = k_c C_Fe^2+ exp(-α_c ηF/RT), ν_c' = k_c' exp((1-α_c) ηF/RT)
These rates use reaction rate constants k and k', charge transfer coefficients α, and the electric overpotential η, which is defined as the difference between the potential jump and the equilibrium potential of the specific reaction, η = E_m - φ - E_eq (using the imposed metal potential E_m and, depending on the reaction, either the equilibrium potential at reference conditions for the hydrogen reaction, E_eq,H, or that of the corrosion reaction, E_eq,Fe). Finally, to conserve the hydrogen between the electrolyte and the metal, the mass balance of the surface adsorbed hydrogen is used: N_adsθ̇_ads - (ν_Va-ν_Va') + ν_Ha + 2 ν_T + (ν_A-ν_A') - (ν_Vb-ν_Vb') + ν_Hb = 0 and at the metal-electrolyte interface, the H^+, OH^-, Fe^2+ fluxes are prescribed on the electrolyte and the lattice hydrogen flux on the metal: ν_H^+ = -(ν_Va - ν_Va') - ν_Ha, ν_OH^- = ν_Vb - ν_Vb' + ν_Hb, ν_Fe^2+ = ν_c' - ν_c, and ν_L = ν_A - ν_A'. The governing equations described in this subsection assume that the electrolyte is a well-defined and separate phase relative to the metal, and thus refer to the electrolyte domain Ω_e. However, for the crack-contained electrolyte, represented via Ω_f in <ref>, this is not the case. This will be addressed in the next subsection. §.§ Treatment of electrolyte within cracks In a smeared approach like the electro-chemo-mechanical phase field framework presented here, the metal and the electrolyte coexist when ϕ>0. This requires establishing relationships between the metal and the electrolyte as a function of the phase field, so as to capture the influence of cracking on electrolyte transport and reactions. A particularly popular approach in this regard is the distributed diffusion model developed by Wu and De Lorenzis <cit.>, which captures the enhanced electrolyte transport through cracks by enhancing diffusivity in ϕ>0 regions. Here, we present a new approach, henceforth referred to as the physics-based model, which is able to capture sensitivity to the crack height and naturally establishes a link with the discrete problem without any additional parameters. Both models are described and compared below. 
Common to both the distributed diffusion and physics-based models is the fact that the electrolyte-specific variables can become active in regions of Ω_s, depending on the evolution of the phase field. These electrolyte-specific variables are thus numerically considered in the entire domain, as is common in phase field approaches, but they only have physical meaning in material points experiencing damage, ϕ>0, where micro- and macro-cracks that can contain the electrolyte are present. §.§.§ Distributed diffusion model The distributed diffusion model by Wu and De Lorenzis <cit.> captures the enhanced transport of ions through cracks by defining an effective diffusion coefficient tensor that has two main components: D_eff,π = D_π,1 p (1-ϕ)^m + D_π,2ϕ^m The first term accounts for the characteristics of diffusion in porous materials, through a factor p and D_π,1=D_π,1I. In the materials of relevance for this study (metals), p=0. The second term accounts for the anisotropy in diffusivity that results from the presence of cracks. This is accomplished through the definition of a parameter m, to be fitted to experiments, and sufficiently large values of the diffusion coefficient matrix D_π,2. For the capacity term, it is often assumed to be independent of the fracture state when the material is porous <cit.>. However, since we are considering a non-porous material, we elect to distribute the capacity term consistently with the diffusion term. This results in the following weak form formulation of the mass balance given in <ref>: 0 = ∫_Ω_sϕ^m Ċ_πδ C - ∇·(ϕ^m D_π,2∇ C_π)δ C - z_π F/RT∇·(D_π,2 C_πϕ^m ∇φ)δ C + R_πϕ^m δ C dΩ_s + ∫_Γ_d^±ν_πδ C dΓ_d^± Finally, distributing the surface-based reactions through the interface distribution function γ (<ref>) transforms the last term of <ref> into a domain-wide integral: ∫_Γ_d^±ν_πδ C dΓ_d^± = ∫_Ω_s 2γν_πδ C dΩ_s where the factor 2 is introduced to account for the fact that the two fracture surfaces react with the electrolyte. From <ref>, the strong form for the mass balance distributed over the domain Ω_s is extracted as: 0 = ϕ^m Ċ_π - ∇·(ϕ^m D_π,2∇ C_π) - z_π F/RT∇·(D_π,2C_πϕ^m ∇φ) + ϕ^m R_π + (1/ℓϕ^2+ℓ|∇ϕ|^2) ν_π One thing to note here is that this equation becomes trivial for the case of a non-fractured domain where ϕ=0, resulting in the local solution for the concentration becoming undefined. This will be discussed in <ref>. In a similar manner, the weak form for the electroneutrality condition, <ref>, is obtained as: 0 = ϕ^m ∑_π z_π C_π Together, <ref> describe the behaviour of the electrolyte potential and ion concentrations within the domain. §.§.§ Physics-based model We have built our physics-based model by considering a fictitious domain Ω_f, which represents the electrolyte contained within a fracture with opening height h, as shown in <ref>. In this fictitious domain, the weak form of the Nernst-Planck mass balance, <ref>, is given as: 0 = ∫_Ω_fĊ_πδ C - ∇·(D_π∇ C_π)δ C - z_π F/RT∇·(D_π C_π∇φ)δ C + R_πδ C dΩ_f + ∫_Γ_d^±ν_πδ C dΓ_d^± We assume long and thin cracks, as is commonly the case in corrosive and hydrogen-containing environments, allowing the integrals over domain Ω_f to be transferred to the discrete discontinuity description of domain Ω (<ref>) through: ∫_Ω_f□ dΩ_f = ∫_Γ_d h□ dΓ_d , ∫_Ω_f∇·□ dΩ_f = ∫_Γ_d∂/∂ s(h□) dΓ_d and ∫_Γ_d^±□ dΓ_d^± = ∫_Γ_d 2□ dΓ_d using the fracture opening height h. 
This allows the weak form for the electrolyte to be defined solely on the discrete discontinuity as: 0 = ∫_Γ_d hĊ_πδ C - ∂/∂ s(h D_π∂ C_π/∂ s)δ C - z_π F/RT∂/∂ s(h D_π C_π∂φ/∂ s)δ C + h R_πδ C + 2ν_πδ C dΓ_d where the terms containing gradients are transformed into unidirectional derivatives along the discontinuity due to the assumption of negligible changes in the direction normal to it. Finally, since the complete weak form is defined on the fracture face, the phase field distribution function from <ref> can be used to distribute the weak form over the complete domain Ω, resulting in: 0 = ∫_Ω_sγ hĊ_πδ C - ∇·(γ h D_π∇ C_π)δ C - z_π F/RT∇·(γ h D_π C_π∇φ)δ C + h γ R_πδ C + 2γν_πδ C dΩ_s where the assumption of a constant field normal to the fracture is included in the diffusivity matrix D_π. This is achieved by constructing the diffusivity in crack-aligned coordinates (using a rotation matrix R) as: 𝐑D_π𝐑^T=[ D_π 0; 0 D_∞/h ] which assigns the diffusivity of the ionic species D_π to the direction tangential to the crack, while notably enhancing diffusion in the normal direction. As this high diffusivity gets multiplied by γ h, it disappears when the metal is not cracked (where γ h = 0), while it is constant (and independent of the crack opening height) when the metal is cracked, enforcing negligible concentration gradients normal to the crack. As a result, the concentration within the phase field description will approximate a one-dimensional diffusion along the crack path, consistent with the description of the Nernst-Planck equations for narrow cracks. Based on Eq. (<ref>), the mass balance for the ion species within the electrolyte is given in its strong form as: 0 = γ hĊ_π - ∇·(γ h D_π∇ C_π) - z_π F/RT∇·(γ h D_π C_π∇φ) + h γ R_π + 2γν_π Accordingly, the electroneutrality condition reads: 0 = γ h ∑_π z_π C_π While the expression for the surface adsorbed hydrogen mass balance, <ref>, is reformulated to 2γ(N_adsθ̇_ads - (ν_Va-ν_Va') + ν_Ha + 2 ν_T + (ν_A-ν_A') - (ν_Vb-ν_Vb') + ν_Hb) = 0 Here, one should note that while the surface θ_ads is defined in the entire domain, consistent with a phase field description, it only becomes physically meaningful at the electrolyte-metal interface. Thus, when no surfaces are present, the surface coverage is kept at zero, with the method used to enforce this being described in <ref>. §.§.§ Estimating the opening heights The model described in the previous section requires the crack opening height h. This opening height is obtained based on the phase field and displacements following the procedure from <cit.>. That is, for every integration point, a surface normal vector is computed as n = ∇ϕ/|∇ϕ| which produces a vector normal to the phase field representation of the crack. Using this vector, a line is constructed which crosses the full width of the phase field zone and passes through the integration point currently being considered, as shown in <ref>. For the current integration point being considered, the crack opening height is then obtained by integrating along this line as: h = ∫𝐮·∇ϕ dn In order to calculate these integrals, the displacements and phase field gradients are first calculated at all integration points within the domain. This integration point data is then combined to create scattered interpolation functions <cit.>, which given the coordinates of an arbitrary point within the domain return the displacements and gradients based on linear interpolation between the closest integration points. 
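The reconstruction just described can be sketched as follows (Python/NumPy); the interpolants for the displacement and the phase field gradient are assumed to be available as callables built from the integration-point data, and the sampling and quadrature choices are illustrative only.

```python
import numpy as np

def crack_opening_height(x_ip, interp_u, interp_gradphi, half_width, n_samples=21):
    """Estimate the crack opening height h at the integration point x_ip.

    x_ip           : (2,) coordinates of the integration point
    interp_u       : callable x -> (2,) interpolated displacement vector
    interp_gradphi : callable x -> (2,) interpolated phase field gradient
    half_width     : half-width of the phase field band crossed by the line
                     (of the order of a few times the length scale l)
    """
    grad_phi = interp_gradphi(x_ip)
    norm = np.linalg.norm(grad_phi)
    if norm < 1e-12:           # (nearly) undamaged point: no crack, no opening
        return 0.0
    n = grad_phi / norm        # crack normal, n = grad(phi) / |grad(phi)|

    # Sample u . grad(phi) along the line through x_ip in the normal direction
    s = np.linspace(-half_width, half_width, n_samples)
    f = np.array([interp_u(x_ip + si * n) @ interp_gradphi(x_ip + si * n) for si in s])

    # Trapezoidal rule for h = int u . grad(phi) dn
    ds = s[1] - s[0]
    return ds * (f.sum() - 0.5 * (f[0] + f[-1]))
```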
Using these interpolation functions, the integral from <ref> is evaluated using numerical integration, obtaining the opening height in the current integration point. This integration is then repeated for all integration points within the complete domain, and opening heights are recalculated each time the staggered solution scheme begins solving for the chemical sub-problem (see <ref>). It should be noted that, as the phase field gradients are discontinuous between neighbouring elements, the use of scattered interpolants to calculate this opening height is not exact, as opposed to directly evaluating the gradients at required locations using the finite element shape functions. However, this method does allow for unstructured meshes and three-dimensional cases to easily be considered, without the need for computationally costly element searches during the integration step. §.§.§ Model comparison Comparing the distributed diffusion model, <ref>, with the physics-based model, <ref>, shows that both can be cast into the following general form: 0 = β_cĊ_π - ∇·(R^Tβ_dR D_π∇ C_π) - z_π F/RT∇·(R^Tβ_dR D_π C_π∇φ) + β_c R_π + β_sν_π 0 = β_c∑_π z_π C_π 0 =β_s(N_adsθ̇_ads - (ν_Va-ν_Va') + ν_Ha + 2 ν_T + (ν_A-ν_A') - (ν_Vb-ν_Vb') + ν_Hb) using the rotation matrix R to align the diffusion within the fracture to its orientation. The capacity, diffusion, and surface distributors are accordingly defined as: Distributed diffusion Physics-based β_c = ϕ^m β_c = h ( 1/2ℓϕ^2+ℓ/2|∇ ϕ|^2 ) β_d = [ ϕ^m D_π,2/D_π 0; 0 0 ] β_d = ( 1/2ℓϕ^2+ℓ/2|∇ ϕ|^2 ) [ h 0; 0 D_∞ ] β_s = 1/ℓϕ^2+ℓ|∇ ϕ|^2 β_s = 1/ℓϕ^2+ℓ|∇ ϕ|^2 The main differences between both models can be readily seen upon inspection of Eqs. (<ref>)-(<ref>). First, the distributed diffusion model requires calibration of two additional parameters: the exponential factor m and the enhanced diffusion within the fracture D_π,2. These would be expected to have a sensitivity to the crack opening height h. In contrast, the description of diffusion in the direction tangential to the crack does not depend on any empirical parameters and naturally incorporates the role of h. In this regard, it should be noted that the focus on the physics-based model is not to accurately describe the kinetics of fluid flow within a propagating crack but to accurately capture electrolyte behaviour and its sensitivity to the crack geometry, as crack growth can be highly sensitive to the electrochemical conditions that arise in occluded geometries such as cracks. A second main difference between the two models lies in their description of the transport in the direction normal to the fracture, with the distributed diffusion model assuming that the presence of cracks does not contribute to this transport, while one of the assumptions of the physics-based model is the absence of any gradients normal to the fracture, which is enforced by assigning a large value to D_∞. § NUMERICAL IMPLEMENTATION The governing equations, <ref>, are solved using the finite element method. The system of equations is solved in an iteratively staggered manner. First, a solution for the phase field is obtained by solving <ref>. Next, the displacements are updated by solving <ref>. Then, a Newton-Raphson solver is used to iteratively solve <ref> in a concurrent fashion, so as to update the electrolyte potential, the concentrations of ionic species, the surface coverage, and the hydrogen lattice concentration. 
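The overall solution strategy can be summarised with the schematic driver below (Python). The sub-solvers are passed in as callables and stand for the assembly and solution steps detailed in the remainder of this section; the names, arguments and tolerance are illustrative, mirroring the energy-based criterion introduced later.

```python
def staggered_time_increment(state, dt, solvers, max_global_iters=50, tol=1e-6):
    """One time increment of the staggered electro-chemo-mechanical scheme.

    'state' bundles all nodal fields (u, phi, C_L, theta_ads, electrolyte
    potential and ionic concentrations); 'solvers' is a dict of callables:
    'phase_field', 'momentum', 'opening_heights', 'electrochemistry', 'error'.
    """
    for _ in range(max_global_iters):
        # (1) Phase field: linear solve using the current history field
        state = solvers["phase_field"](state)
        # (2) Displacements: linear solve using the just-updated phase field
        state = solvers["momentum"](state)
        # (3) Crack opening heights: non-local mapping, performed once per
        #     electro-chemical solve since u and phi are now frozen
        state = solvers["opening_heights"](state)
        # (4) Electro-chemical block: Newton-Raphson over electrolyte potential,
        #     ionic concentrations, surface coverage and lattice hydrogen
        state = solvers["electrochemistry"](state, dt)
        # (5) Global convergence check over all residuals
        if solvers["error"](state) < tol:
            return state
    raise RuntimeError("Staggered scheme did not converge within the allowed iterations")
```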
Once all the fields have been updated, the convergence of the total system of equations is evaluated, with global iterations taking place until convergence is reached. The solution process is summarised in <ref>. By using this staggered scheme, we avoid the well-known convergence difficulties that arise when simultaneously solving for the phase field and displacement <cit.>. Additionally, the non-local mapping for the fracture height, <ref>, only needs to be conducted once per electro-chemical solution step since the displacements and phase field parameter are constant during its solution process. §.§ Spatial and temporal discretisation The temporal discretisation of the governing equations is performed using a backward Euler scheme. For the spatial discretisation, quadratic elements are used to discretise the degrees of freedom as: 𝐮 = ∑𝐍_u^el𝐮^el, ϕ = ∑𝐍_ϕ^el𝛟^el, C_L = ∑𝐍_L^el𝐂_L^el, φ = ∑𝐍_φ^el𝛗^el, θ_ads = ∑𝐍_θ^el𝛉^el, C_π = ∑𝐍_C^el𝐂_π^el where the superscript el denotes the element-wise nodal value vectors. One thing to note about the use of these quadratic elements is the requirement in <ref> of second-order derivatives to calculate the hydrostatic stress gradient. These gradients are poorly defined using quadratic C^0 inter-element continuous shape functions. While this could be resolved using elements with a higher inter-element continuity, for instance using NURBS <cit.>, T-splines <cit.>, or Hermitian polynomials <cit.>, no issues were encountered during simulations provided a sufficiently fine mesh was used to discretise the displacement. As phase field methods already impose requirements on the maximum element size, the meshes adopted were sufficiently fine to accurately characterise the hydrostatic stress gradients. §.§ Residuals and stiffness matrices We proceed to formulate the residuals and stiffness matrices for each of the governing fields and associated balance equations. §.§.§ Phase field evolution sub-problem The first step of the staggered solution scheme is solving the phase field evolution, <ref>. This equation is given in discretised weak form as: 𝐟_ϕ^t+Δ t = ∫_Ω_s 1/ℓ 𝐍_ϕ^T𝐍_ϕ𝛟^t+Δ t + ℓ(∇𝐍_ϕ)^T∇𝐍_ϕ𝛟^t+Δ t - 2 (1-k_0)𝐍_ϕ^T (1-𝐍_ϕ𝛟^t+Δ t) ℋ^t+Δ t dΩ_s - ∫_Γℓ𝐍_ϕ^T(∇𝐍_ϕ𝛟^t+Δ t)·𝐧 dΓ where the last term, the boundary condition ∇ϕ·𝐧, is set equal to zero hereafter. The discretised history variable is defined as: ℋ^t+Δ t = max( ℋ^t, (1/2 (𝐮^t+Δ t)^T B_u^T D B_u 𝐮^t+Δ t)/(G_c0( 1-χ(𝐍_L𝐂_L^t+Δ t/N_L)/(𝐍_L𝐂_L^t+Δ t/N_L+exp(-Δ g_b/RT)))) ) which uses the displacement-to-strain mapping matrix B_u (ε = B_u𝐮) and the plane-strain linear-elastic stiffness matrix D. This also approximates the irreversibility condition through enforcing an increasing value for this history parameter. Since the force vector depends linearly on the phase field parameter, it is directly solved through 𝛟^t+Δ t = -K_ϕϕ^-1𝐟_ϕ+𝛟^t, using the tangent matrix: K_ϕϕ = ∫_Ω_s 1/ℓ 𝐍_ϕ^T𝐍_ϕ + ℓ(∇𝐍_ϕ)^T∇𝐍_ϕ + 2 (1-k_0)ℋ^t+Δ t𝐍_ϕ^T 𝐍_ϕ dΩ_s §.§.§ Momentum balance sub-problem The second step is solving for the displacements through the momentum balance from <ref>. The discretised weak form is given by: 𝐟_u = ∫_Ω_s(k_0+(1-k_0)(1-𝐍_ϕ𝛟^t+Δ t)^2)B_u^T DB_u𝐮^t+Δ t dΩ_s - ∫_Γ(k_0+(1-k_0)(1-𝐍_ϕ𝛟^t+Δ t)^2) 𝐍_u^T 𝐭_ext dΓ with 𝐭_ext the external boundary traction. Since this equation is linear with regards to the nodal displacements 𝐮, it is directly resolved through 𝐮^t+Δ t=-K_uu^-1𝐟_u+𝐮^t, using the tangent matrix: K_uu = ∫_Ω_s(k_0+(1-k_0)(1-𝐍_ϕ𝛟^t+Δ t)^2)B_u^T DB_u dΩ_s Since the phase field is resolved first, and its updated value is used to compute the displacements, this step provides stresses and displacements that are compatible with the current state of the phase field. 
This is in contrast to a scheme where the displacements are determined first, and then used to update the phase field variable. As the electrochemical system resolved during the next step is strongly dependent on the hydrostatic stress gradient and the displacement field (via the crack opening height), this solution sequence was seen to be beneficial for the overall convergence. §.§.§ Electrochemical sub-problem The last solution step resolves the electrochemical sub-problem. This is defined through the discretised weak form of the interstitial lattice hydrogen mass balance, <ref>, given by: 𝐟_L^t+Δ t = ∫_Ω_s1/Δ t( 1 + N_T/N_Lexp(-Δ g_b/RT)/(𝐍_L𝐂_L^t+Δ t/N_L+exp(-Δ g_b/RT))^2) 𝐍_L^T 𝐍_L(𝐂_L^t+Δ t-𝐂_L^t) dΩ_s + ∫_Ω_sD_L/1-𝐍_L𝐂_L^t+Δ t(∇𝐍_L)^T∇𝐍_L𝐂_L^t+Δ t -D_LV_H/RT(∇𝐍_L)^T 𝐍_L𝐂_L^t+Δ t𝐁_u^*𝐮^t+Δ t dΩ_s - ∫_Γ𝐍_L^T J_ext dΓ + ∑_nds_s L_ss 2 (ν_A - ν_A') + ∑_nds_Γ L_eint(ν_A - ν_A') using the displacement to gradient of hydrostatic stress mapping matrix ∇σ_h=B_u^*𝐮. A lumped integration scheme is used for the last term, the reaction rates of the absorption reaction <cit.>. More details relating to this lumped scheme, its impact on stability and its interaction with the distribution functions for the crack-contained electrolyte are given in <ref>. In addition to the interstitial lattice hydrogen mass balance, the surface adsorbed hydrogen mass balance, <ref>, ionic species mass balances, <ref>, and the electroneutrality condition, <ref>, are also resolved within this solution step. The weak form of this surface mass balance is given by: 𝐟_θ^t+Δ t = ∫_Ω_f 2 β_s N_ads/Δ t𝐍_θ^T𝐍_θ(^t+Δ t-^t) dΩ_f + ∫_Γ N_ads/Δ t𝐍_θ^T𝐍_θ(^t+Δ t-^t) dΓ - ∑_nds_s 2 L_ss(ν_Va-ν_Va'-ν_Ha-2ν_T - ν_A + ν_A'+ ν_Vb - ν_Vb'-ν_Hb) - ∑_nds_Γ 2 L_eint(ν_Va-ν_Va'-ν_Ha-2ν_T - ν_A + ν_A'+ ν_Vb - ν_Vb'-ν_Hb) where the lumped integration is performed over the nodes within the solid domain, nds_s, and over the nodes at the metal-electrolyte interface, nds_Γ. The ion concentration mass balances are given by: 𝐟_cπ^t+Δ t = ∫_Ω_sβ_c𝐍_C^T𝐍_C(𝐂_π^t+Δ t-𝐂_π^t ) + D_π(∇𝐍_C)^T R^Tβ_dR∇𝐍_C𝐂_π^t+Δ t dΩ_s + ∫_Ω_sz_π F D_π/RT(∇𝐍_C)^T R^T β_dR𝐍_C𝐂_π^t+Δ t∇𝐍_φ^t+Δ t dΩ_s + ∫_Ω_e𝐍_C^T𝐍_C(𝐂_π^t+Δ t-𝐂_π^t ) + D_π(∇𝐍_C)^T ∇𝐍_C𝐂_π^t+Δ t dΩ_e + ∫_Ω_ez_π F D_π/RT(∇𝐍_C)^T 𝐍_C𝐂_π^t+Δ t∇𝐍_φ^t+Δ t dΩ_e + ∫_Γ J_ext,π dΓ + ∑_nds_s L_sv R_π + 2L_ssν_π + ∑_nds_e L_ev R_π + ∑_nds_Γ L_eintν_π and the electroneutrality condition is given by: 𝐟_φ^t+Δ t = ∫_Ω_s∑_πβ_c z_π𝐍_φ^T𝐍_C𝐂_π^t+Δ t dΩ_s + ∫_Ω_e∑_π z_π𝐍_φ^T𝐍_C𝐂_π^t+Δ t dΩ_e Since this system of equations is nonlinear, a Newton-Raphson scheme is used within this step to solve <ref> concurrently. 
This scheme is defined as: [ K_LL K_Lθ 0 0; K_θL K_θθ K_θC K_θφ; 0 K_Cθ K_CC K_Cφ; 0 0 K_φC 0 ][ d𝐂_L; d; d𝐂_π; d ] = -[ 𝐟_L^t+Δ t; 𝐟_θ^t+Δ t; 𝐟_cπ^t+Δ t; 𝐟_φ^t+Δ t ] with the sub-matrices being given by: K_LL = ∫_Ω_s 𝐍_L^T 𝐍_L/Δt ( 1 + N_T/N_L exp(-Δg_b/RT)/(𝐍_L𝐂_L^t+Δt/N_L+exp(-Δg_b/RT))^2 - 2 N_T/N_L^2 exp(-Δg_b/RT) 𝐍_L(𝐂_L^t+Δt-𝐂_L^t)/(𝐍_L𝐂_L^t+Δt/N_L+exp(-Δg_b/RT))^3 ) dΩ_s + ∫_Ω_s D_L/1-𝐍_L𝐂_L^t+Δt(∇𝐍_L)^T∇𝐍_L + D_L/N_L(1-𝐍_L𝐂_L^t+Δt)^2(∇𝐍_L)^T∇𝐍_L𝐂_L^t+Δt 𝐍_L dΩ_s - ∫_Ω_s D_L V_H/RT (∇𝐍_L)^T 𝐍_L𝐁_u^*𝐮^t+Δt dΩ_s + ∑_nds_s 2 L_ss (∂ν_A/∂C_L - ∂ν_A'/∂C_L) I_LL +∑_nds_Γ L_eint (∂ν_A/∂C_L - ∂ν_A'/∂C_L) I_LL K_Lθ = ∑_nds_s 2 L_ss (∂ν_A/∂θ - ∂ν_A'/∂θ) I_Lθ + ∑_nds_Γ L_eint (∂ν_A/∂θ - ∂ν_A'/∂θ) I_Lθ K_θL = ∑_nds_s 2 L_ss ( ∂ν_A/∂C_L - ∂ν_A'/∂C_L ) I_θL + ∑_nds_Γ L_eint ( ∂ν_A/∂C_L - ∂ν_A'/∂C_L ) I_θL K_θθ = ∫_Ω_s 2 (β_s+ϵ) N_ads/Δt 𝐍_θ^T𝐍_θ dΩ_s + ∫_Γ N_ads/Δt 𝐍_θ^T𝐍_θ dΓ -∑_nds_s 2 L_ss (∂ν_Va/∂θ_ads-∂ν_Va'/∂θ_ads-∂ν_Ha/∂θ_ads-2∂ν_T/∂θ_ads - ∂ν_A/∂θ_ads + ∂ν_A'/∂θ_ads+ ∂ν_Vb/∂θ_ads - ∂ν_Vb'/∂θ_ads-∂ν_Hb/∂θ_ads )I_θθ -∑_nds_Γ L_eint (∂ν_Va/∂θ_ads-∂ν_Va'/∂θ_ads-∂ν_Ha/∂θ_ads-2∂ν_T/∂θ_ads - ∂ν_A/∂θ_ads + ∂ν_A'/∂θ_ads+ ∂ν_Vb/∂θ_ads - ∂ν_Vb'/∂θ_ads-∂ν_Hb/∂θ_ads )I_θθ K_θC = - ∑_nds_s 2 L_ss (∂ν_Va/∂C_π-∂ν_Va'/∂C_π-∂ν_Ha/∂C_π + ∂ν_Vb/∂C_π - ∂ν_Vb'/∂C_π-∂ν_Hb/∂C_π )I_θC - ∑_nds_Γ L_eint (∂ν_Va/∂C_π-∂ν_Va'/∂C_π-∂ν_Ha/∂C_π + ∂ν_Vb/∂C_π - ∂ν_Vb'/∂C_π-∂ν_Hb/∂C_π )I_θC K_θφ = - ∑_nds_s 2 L_ss (∂ν_Va/∂φ-∂ν_Va'/∂φ-∂ν_Ha/∂φ + ∂ν_Vb/∂φ - ∂ν_Vb'/∂φ-∂ν_Hb/∂φ ) I_θφ - ∑_nds_Γ L_eint (∂ν_Va/∂φ-∂ν_Va'/∂φ-∂ν_Ha/∂φ + ∂ν_Vb/∂φ - ∂ν_Vb'/∂φ-∂ν_Hb/∂φ ) I_θφ K_C θ = ∑_nds_s 2L_ss ∂ν_π/∂θ I_θ+ ∑_nds_Γ L_eint ∂ν_π/∂θ I_θ K_CC = ∫_Ω_s (ϵ+β_c) 𝐍_C^T𝐍_C + D_π(∇𝐍_C)^T (R^Tβ_dR+ϵI)∇𝐍_C dΩ_s + ∫_Ω_s z_πF D_π/RT (∇𝐍_C)^T (R^T β_d R+ϵI) 𝐍_C ∇ 𝐍_φ^t+Δt dΩ_s + ∫_Ω_e 𝐍_C^T𝐍_C + D_π(∇𝐍_C)^T ∇𝐍_C dΩ_e + ∫_Ω_e z_πF D_π/RT (∇𝐍_C)^T 𝐍_C ∇ 𝐍_φ^t+Δt dΩ_e + ∑_nds_s L_sv ∂R_π/∂C_π + 2L_ss ∂ν_π/∂C_π I_CC + ∑_nds_e L_ev ∂R_π/∂C_π I_CC + ∑_nds_Γ L_ls ∂ν_π/∂C_π I_CC K_C φ = ∫_Ω_s z_πF D_π/RT (∇𝐍_C)^T (R^T β_d R+ϵI) 𝐍_C𝐂_π^t+Δt ∇ 𝐍_φ dΩ_s + ∑_nds_s 2L_ss ∂ν_π/∂φ + ∫_Ω_e z_πF D_π/RT (∇𝐍_C)^T 𝐍_C𝐂_π^t+Δt ∇ 𝐍_φ dΩ_e + ∑_nds_γ L_eint ∂ν_π/∂φ K_φC = ∫_Ω_s ∑_π(β_c+ϵ) z_π𝐍_φ^T𝐍_C dΩ_s + ∫_Ω_e ∑_πz_π𝐍_φ^T𝐍_C dΩ_e These tangent matrices use the allocation matrices I_xy, to allocate the lumped integration terms to the correct locations within the stiffness matrix, as given by the set of degrees of freedom (x,y). The matrices related to capacity and diffusion terms also contain an offset ϵ, whose presence is explained in <ref>. The system from <ref> is iterated until a converged solution is achieved, using an energy based criterion: E_it = E_it^*/E_0<10^-6 with E^*_it = [𝐟_L^T 𝐟_θ^T 𝐟_cπ^T 𝐟_φ^T ]_it[d𝐂_L; d; d𝐂_π; d]_it Upon convergence, the errors within the phase field evolution and momentum balance are calculated and compared to the criterion E_it^*<10^-6. If this is fulfilled, the simulation proceeds to the next time increment. If the error is exceeded, another staggered iteration is performed solving the phase field, displacements and electrochemical systems. §.§ Stabilising effect of lumped integration One issue when simulating electro-chemical systems using finite elements is the large range of reaction rates present. As these rates depend strongly on the environment, often varying by many orders of magnitude at different locations within the same simulation, it is often not feasible to enforce these reactions to be a priori in instant equilibrium. 
Furthermore, enforcing a direct equilibrium between reactions is often accomplished by eliminating them from the governing equations of the system, greatly complicating the addition of other reactions involving the eliminated species. However, as a result of the (potentially) very high reaction rates, a stiff system of differential equations is created. Solving this system of equations poses difficulties, with ill-conditioned tangent matrices and results that often exhibit non-physical oscillations. One manner in which these difficulties can be tackled is by using lumped integration for the problematic reaction terms <cit.>. Originally developed to resolve issues with traction oscillations due to contact conditions when using interface elements <cit.>, lumped integration performs the integration of transfer terms (such as electro-chemical reactions) on a node-by-node basis in a consistent manner. For instance, considering the hydrogen absorption term within <ref>: 𝐟_𝐋^abs = ∫_Ω_s 2 β_s 𝐍_C^T (k_A(N_L - 𝐍_L 𝐂_L) 𝐍_θ - k_A'(1-𝐍_θ )𝐍_L 𝐂_L) dΩ_s 𝐟_θ^abs = - ∫_Ω_s 2 β_s 𝐍_θ^T (k_A(N_L - 𝐍_L 𝐂_L) 𝐍_θ - k_A'(1-𝐍_θ) 𝐍_L 𝐂_L) dΩ_s Using a standard Gauss integration scheme, these integrals are directly evaluated through a sum over their integration points. In contrast, when using a lumped integration scheme, consistent weights for these surface reactions are first determined as: 𝐋_ss = ∫_Ω_sβ_s𝐍^T dΩ_s = ∑_el_s∑_ip w_ipβ_s(ϕ_ip) 𝐍^T where the lumped integration weights associated with each node are obtained using a standard Gauss integration scheme, as a sum over elements and integration points. Having calculated consistent weights for each node, the lumped integration of <ref> is performed as a sum over all nodes: 𝐟_L^abs = ∑_n=nds_s 2𝐋_ss^n (k_A(N_L - 𝐂_L^n) ^n - k_A'(1-^n) 𝐂_L^n) 𝐢_L^n 𝐟_θ^abs = ∑_n=nds_s 2𝐋_ss^n (k_A(N_L - 𝐂_L^n) ^n - k_A'(1-^n) 𝐂_L^n) 𝐢_θ^n using the superscript n to indicate the nodal values, and 𝐢_L^n to denote the row of the force vector corresponding to the correct degree of freedom and node n. To illustrate the effect of this lumped integration scheme on the systems tangent matrices, we shall calculate these using Gauss and lumped integration (setting k_A=k_A'=3, N_L=1, β_s=1). For brevity, quadratic line elements are used here, while quadratic quad and triangular elements are used within the actual implementation. Setting 𝐂_L==0 results in the following tangent matrices: K_Gauss = [ 0.6 0.3 0.1 -0.6 -0.3 -0.1; 0.3 0.4 0.3 -0.3 -0.4 -0.3; 0.1 0.3 0.6 -0.1 -0.3 -0.6; -0.6 -0.3 -0.1 0.6 0.3 0.1; -0.3 -0.4 -0.3 0.3 0.4 0.3; -0.1 -0.3 -0.6 0.1 0.3 0.6 ] K_Lumped = [ 1 0 0 -1 0 0; 0 1 0 0 -1 0; 0 0 1 0 0 -1; -1 0 0 1 0 0; 0 -1 0 0 1 0; 0 0 -1 0 0 1 ] As is evident from the first three elements of the two matrices, the tangent matrix obtained using Gauss integration allows interactions between neighbouring nodes, transferring species to different locations via chemical reactions. In contrast, the lumped matrix solely allows reactions to occur between degrees of freedom co-located in the same node. The effect of this is also seen by looking at the eigenmodes described by these matrices, with the non-zero eigenmodes shown in <ref>. The two lowest eigenmodes when using a Gauss scheme correspond to transfer of chemical species between neighbouring nodes without any change in the total amount of these species. It is only with the addition of the third and highest eigenmode that chemical reactions become possible. 
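These statements can be checked directly from the two 6×6 matrices given above (a short NumPy verification; the matrices are copied verbatim from the example with k_A = k_A' = 3, N_L = 1, β_s = 1):

```python
import numpy as np

# Absorption-reaction tangent blocks for the two quadratic line element example
A = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])
I = np.eye(3)

K_gauss  = np.block([[A, -A], [-A, A]])   # Gauss integration
K_lumped = np.block([[I, -I], [-I, I]])   # lumped integration

for name, K in (("Gauss ", K_gauss), ("Lumped", K_lumped)):
    w, _ = np.linalg.eigh(K)
    print(name, "non-zero eigenvalues:", np.round(w[np.abs(w) > 1e-12], 3))

# Gauss  non-zero eigenvalues: [0.2 1.  2. ]  -> two low 'transfer' modes plus one reaction mode
# Lumped non-zero eigenvalues: [2.  2.  2. ]  -> three identical, purely nodal reaction modes
```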
In contrast, the lumped tangent matrix obtains three equal eigenmodes, corresponding to reactions between degrees of freedom in the same node. In a similar manner, the nodal integration weights for the volume reaction terms used within the phase field electrolyte description are given by: 𝐋_sv = ∫_Ω_sβ_c𝐍_L dΩ_s and the lumped weights used for the free electrolyte and metal-electrolyte interface reactions are given by: 𝐋_ev = ∫_Ω_e𝐍_C dΩ_e 𝐋_eint = ∫_Γ𝐍_L dΓ This lumped integration is applied to all reaction terms. While some reactions are not dominant and would not cause any numerical difficulties, this is highly dependent on the local conditions. Furthermore, as the lumped integration scheme is consistent with the weak form, using a lumped integration instead of a Gauss integration scheme has little or no effect on the obtained solution <cit.>. §.§ Prevention of ill-constrained degrees of freedom One issue while solving <ref> or <ref> is that, for elements where ϕ≈ 0, the multiplication with either the phase field parameter or the surface distribution function can cause unconstrained degrees of freedom. To remedy this, while not altering the obtained solution for the well-defined degrees of freedom, inconsistent tangent matrices are constructed using altered distribution functions. These distribution functions include a small offset ϵ to prevent the system from becoming ill-defined, for instance defining the contribution to the internal force vector of the ion capacity term as: 𝐟_cπ^capacity = ∫_Ω_s h ( 1/2ℓϕ^2+ℓ/2|∇ϕ|^2 ) 1/Δ t𝐍_C'𝐍_C(𝐂_π^t+Δ t - 𝐂_π^t) dΩ_s while the tangential term contributing to the tangential matrix of the system is defined as: K_ππ^capacity = ∫_Ω_s(ϵ + h ( 1/2ℓϕ^2+ℓ/2|∇ϕ|^2 )) 1/Δ t𝐍_C'𝐍_C dΩ_s This allows the electro-chemical degrees of freedom to remain constrained. Since the offset is solely introduced in the tangent matrix, it does not alter the converged solution state, instead only altering the rate at which this converged state is obtained. Small values of ϵ prevent the system from becoming ill-constrained, but significantly alter the conditioning number of the matrix due to the many orders of magnitude difference between the terms within and outside the cracks. Increasing the value of ϵ improves this matrix conditioning and enhances the stability of the solver, at the cost of requiring more iterations to obtain a well-converged solution. A value of ϵ = 10^-12 is used throughout this paper. §.§ Initialisation of phase field parameter and history field For the initialisation at the start of simulations, we set the displacements, electrolyte potential, surface coverage, and interstitial lattice, Fe^2+ and FeOH^+ concentrations to zero. The H^+ and OH^- concentrations are initialised as equal to the imposed boundary pH, and the Na^+ and Cl^- concentrations are set equal to the boundary values. At the start of the simulations, an initial fracture is assumed to be already present. While it is common for this fracture to be represented geometrically <cit.>, we choose to include it by setting initial values for the phase field and history variable based on the distance dx from the preferred initial fracture: ϕ^init = exp(-|dx|/ℓ) ℋ^init = 1/ℓ 𝐍_ϕ^init + ℓ(∇𝐍_ϕ^init)^T(∇𝐍_ϕ^init)/k_0-2(1-k_0)(1-𝐍_ϕ^init) which is based on the one-dimensional solution of the phase field function. 
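A minimal sketch of this initialisation for a straight, horizontal edge crack is given below (Python/NumPy); the signed-distance construction and the array layout are our own illustrative choices, and the corresponding ℋ^init follows from the expression above, evaluated at the integration points.

```python
import numpy as np

def initial_phase_field(nodes, crack_tip, ell):
    """Initial phase field for a horizontal edge crack ending at 'crack_tip'.

    nodes     : (n, 2) nodal coordinates
    crack_tip : (x_tip, y_tip); the crack occupies y = y_tip, x <= x_tip
    ell       : phase field length scale
    """
    x_tip, y_tip = crack_tip
    # Distance dx from each node to the intended initial crack
    dx = np.where(nodes[:, 0] <= x_tip,
                  np.abs(nodes[:, 1] - y_tip),                         # next to the crack flanks
                  np.hypot(nodes[:, 0] - x_tip, nodes[:, 1] - y_tip))  # beyond the crack tip
    # One-dimensional phase field profile, phi_init = exp(-|dx| / ell)
    return np.exp(-dx / ell)
```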
While this does not provide an exact solution for higher dimensional cases, it is sufficient to trigger the localisation of the phase field required to obtain a fracture consistent with the preferred initial crack after the first time increment. The main advantage of including the initial fracture through the phase field is the automatic inclusion of the electrolyte within the initial crack, whereas had this been represented geometrically, an additional set of equations would have been required. § RESULTS The accuracy and applicability of the described model are demonstrated through a set of case studies. First, we study electrolyte behaviour in a stationary crack, so as to benchmark our physics-based treatment of electrolytes within cracks against discrete simulations and other existing phase field-based models (see <ref>). Then, in <ref>, we simulate the propagation of cracks exposed to a hydrogen-containing electrolyte, to showcase the main predictive capabilities of the model. Finally, in <ref>, we extend our analysis to a case study containing both free and crack-contained electrolytes. These three case studies all use the metal properties given in <ref>, the electrolyte properties given in <ref>, and the reaction rate constants listed in <ref>, with this set of properties corresponding to an iron-based metal in contact with seawater. §.§ Benchmark case study: handling electrolyte-containing cracks We first investigate the capabilities of the physics-based model presented in Section <ref>. To this end, we consider a boundary value problem containing two metallic regions divided by an electrolyte. This benchmark geometry allows us to compare the predictions of our physics-based model with the results obtained using: (i) the distributed diffusion model <cit.>, and (ii) a discrete simulation where the electrolyte is considered a separate domain. As shown in <ref>, the metal domain is constrained at the bottom and subjected to an applied vertical displacement U_ext at the top edge, which results in the creation of a thin electrolyte layer of height h=U_ext. The explicit interface simulations directly simulate this domain using the method described in Ref. <cit.>. For the phase field simulations, the electrolyte layer is replaced by the initial presence of the phase field variable, <ref>, which is initialised using <ref>. As the crack is stationary, the magnitude of G_c0 is taken to be sufficiently high (2 · 10^10 J/m^2) to prevent the phase field from spreading past the region initialised through the history field. On the left side of the domain, constant concentrations and zero electrolyte potential are imposed. The metal has dimensions H=50 mm, L=5 mm, and is discretised using quadratic Lagrangian elements of size 0.2 mm ×0.2 mm. The temporal discretisation is performed using a backward Euler method with an initial time increment of 30 s, increasing by 5% each time step to simulate a total duration of 200 hours. Simulations are performed for the following magnitudes of the applied displacement (and thus the electrolyte height): U_ext=10^-4, 10^-3, 10^-2 and 10^-1 mm. In the phase field-based simulations (physics-based and distributed diffusion), results are obtained for the following choices of phase field length scale ℓ = 0.5, 0.75, 1, and 1.25 mm. The contours of interstitial lattice hydrogen concentration C_L calculated for the case of a phase field length scale ℓ=1 mm are shown in <ref>. 
It can be readily seen that the physics-based phase field formulation and the discrete simulations are in perfect agreement, showing a strong sensitivity to crack opening height (imposed through U_ext). For small opening heights only a limited amount of hydrogen enters the metal, whereas for wider cracks significantly more hydrogen ingress takes place. The sensitivity to the crack geometry is more pronounced for small crack openings, with the hydrogen uptake predictions eventually saturating as U_ext increases, suggesting that there is an upper limit after which the opening height becomes less dominant in the hydrogen absorption process. This upper limit appears to correspond to the result obtained with the distributed diffusion model, which is unable to capture the smaller hydrogen uptake associated with smaller crack openings. The electrolyte pH predictions, shown in <ref>, provide a similar qualitative picture. Here, the phase field-based predictions are again based on the choice of ℓ=1 mm, and pH contours are given over a height equal to U_ext (the discrete electrolyte height, with figures being scaled for visibility purposes). The results are shown for a time of 200 hours. Again, the distributed diffusion model delivers crack height-insensitive results that appear to coincide with those associated with large crack openings. In contrast, the physics-based model obtains pH distributions similar to those of the discrete fracture simulations. Since the pH within the fracture directly influences the surface adsorbed hydrogen, an accurate estimation is paramount. Further results (not shown here) indicate that the agreement between the physics-based and discrete simulations also extends to the prediction of the concentration of other ionic species and the spatial distribution in electrolyte potential. In contrast, the distributed diffusion model is limited to characterising the environments intrinsic to high crack opening heights. One behaviour that the physics-based model is unable to capture is the two-dimensional distribution of the pH obtained for the U_ext=10^-2 mm simulations in the discrete and distributed diffusion models. These two models show a slight rise of the pH near the metal-electrolyte interface with a lower pH in the centre of the crack. In contrast, the physics-based model is built in such a way so as to enforce a zero concentration gradient in the direction normal to the crack. As a result, it is expected that the physics-based model starts to deviate from the direct simulation and distributed diffusion model for large opening heights, h>>𝒪(1 mm), where these two-dimensional effects dominate. To quantify the behaviour of the system over time, we use the volume-averaged interstitial lattice hydrogen concentration, C_L=∫ C_LdΩ / ∫ 1 dΩ. The evolution of this average hydrogen concentration is shown in <ref> for all combinations of fracture model, applied displacement, and phase field length scale considered. For low imposed displacements, the physics-based model obtains a near perfect match with the discrete simulation result, independently of the phase field length scale adopted. As the crack opening height increases, the results start showing some sensitivity to the choice of phase field length scale, with smaller values of ℓ providing the most accurate results in terms of hydrogen uptake. However, even for the largest imposed displacement, the physics-based model reproduces the temporal behaviour correctly. 
In contrast, the distributed diffusion model overestimates the total hydrogen entry for all cases, with its results being independent of the imposed displacement and having a similar dependence on the length scale as the physics-based model. It can thus be concluded that the physics-based model presented here is a suitable strategy to endow phase field models with the ability of accurately predicting the electrolyte-crack interplay, capturing the sensitivity to the crack geometry. §.§ Electrolyte-driven crack propagation The second case study aims at assessing the ability of the model in predicting the growth of cracks that contain aqueous electrolytes. To this end, we consider a square domain of dimensions 10 mm ×10 mm with an initial crack of length 5 mm, as shown in <ref>. This is a paradigmatic benchmark in the phase field fracture community <cit.>. The square domain is discretised using a uniform mesh with the element dimensions being 0.1 mm×0.05 mm. A constant external displacement U_ext=0.01 mm is imposed on the top edge, with this displacement being insufficient to cause the crack to propagate by itself. Over time, hydrogen is absorbed within the metal, reducing the material toughness and allowing the crack to propagate. The combination of imposed displacement and fracture energy has been selected such that no significant propagation occurs in the absence of hydrogen, while modest amounts of hydrogen ingress cause the domain to fully fracture. To track the evolution of these fractures, the total crack length is estimated based on the phase field distribution function, <ref>, such that: a = ∫_Ωϕ^2/2ℓ+ℓ/2|∇ϕ|^2 dΩ While this does not provide the exact length over which the crack has propagated, it provides a good indication of the rate at which it evolves. Both the physics-based model and the distributed diffusion model are used to simulate the case, using phase field length scales ℓ = 0.125, 0.25, 0.375 and 0.5 mm. Due to the difficulties of discretising the interior of a moving crack, no discrete fracture simulations were performed. The results obtained are shown in <ref>, in terms of the evolution in time of the volume-averaged interstitial hydrogen concentration and of the crack length. As shown in Fig. <ref>a, and in agreement with expectations, the distributed diffusion model shows a larger hydrogen uptake initially, compared to the physics-based model. As a result of this higher uptake, the crack propagates sooner for the distributed diffusion model simulations, see <ref>. Since the displacement on the top surface is constant throughout the simulation, this crack develops solely due to the role of hydrogen in reducing the fracture resistance of the material. In contrast to the static crack case from <ref>, this case shows a strong length-scale dependence for both the physics-based and distributed-diffusion results. The results show some sensitivity to the choice of phase field length scale, for both the distributed diffusion and physics-based models, with larger values of ℓ leading to earlier failures. This can be rationalised as follows. First, note that the choice of phase field length scale determines the strength of the material, as evident from the critical stress obtained for a one-dimensional solution of the phase-field problem, σ_c = 9/16 √(EG_c/(6ℓ)) <cit.>. 
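To give a feel for the numbers involved, this expression can be evaluated directly; the short sketch below uses illustrative values of E = 200 GPa and G_c0 = 2 kJ/m^2 (not the calibrated parameters of Table <ref>) for the smallest and largest length scales considered here.

```python
import numpy as np

E    = 200e9   # Young's modulus [Pa]               (illustrative value only)
G_c0 = 2.0e3   # toughness without hydrogen [J/m^2] (illustrative value only)

for ell in (0.125e-3, 0.5e-3):                      # phase field length scales [m]
    sigma_c = (9.0 / 16.0) * np.sqrt(E * G_c0 / (6.0 * ell))
    print(f"l = {ell * 1e3:.3f} mm  ->  sigma_c = {sigma_c / 1e6:.0f} MPa")

# With these values, increasing l from 0.125 mm to 0.5 mm halves sigma_c
# (roughly 411 MPa vs 205 MPa), capping the attainable hydrostatic stress
# and hence the local hydrogen accumulation.
```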
Although the boundary value problem under consideration involves a long crack (and thus toughness-dominated behaviour is expected <cit.>), the magnitude of the strength imposes an upper limit on the hydrostatic stress levels that can be attained, and these govern hydrogen uptake. For example, under steady state conditions, the lattice hydrogen concentration reads, C_L = C_0exp( V_Hσ_H/RT) where C_0 is the reference, far-field hydrogen lattice concentration. The interplay between the material strength and the hydrogen localisation is shown in <ref>, where contours of lattice hydrogen concentration are shown for two values of the phase field length scale, after a time of 240 hours. The results show how decreasing the magnitude of ℓ (i.e., increasing σ_c) results in higher levels of interstitial hydrogen. Notably, this length scale dependence becomes more pronounced after the onset of crack growth. The damaged region is larger for higher ℓ values, providing a larger region of exposure to the hydrogen-containing electrolyte. This can be seen in Fig. <ref>a, where the differences between the predictions obtained with different ℓ values are seen to increase with time. These results confirm the ability of the proposed scheme to capture the influence of hydrogen uptake on propagating cracks. §.§ Coupling free-flowing and crack-contained electrolytes The last case study addresses the most general scenario, one where there is a separate, free-flowing electrolyte domain, in addition to a solid domain and an electrolyte-containing crack domain. A sketch of the boundary value problem under consideration is shown in <ref>. The boundary value problem involves a metal of size L× H_m = 4× 1 cm, containing a pit in the centre with a radius of r_pit=2 mm, and an initial crack of length a_0=1 mm. On top of this metal, an electrolyte layer of height H_e=5 mm is present. This electrolyte is simulated directly through the Nernst-Planck equations; <ref>. It should be noted that, while the fracture-contained electrolyte takes the displacements of the metal into account through the crack opening height, the free electrolyte does not include any effects resulting from geometry changes. On the right side of the metal, a constant displacement U_ext=0.02 mm is imposed. To demonstrate the ability of the model in capturing the sensitivity to the external environment, simulations are conducted for a wide range of applied metal potentials, going from E_m=-0.5 V_SHE to E_m=0.3 V_SHE. Lower potentials, typical of cathodic protection conditions, strongly accelerate hydrogen reactions, while higher potentials enhance the corrosion rate. The domain is discretised using quadratic triangular elements, using small elements with characteristic size of 0.1 mm near the expected crack path, and larger elements up to 1 mm for the electrolyte and metal away from the crack. This results in a total of 37,000 nodes, with a total of 368,000 degrees of freedom. Representative results for the spatial distribution of the interstitial hydrogen concentration are shown in <ref>. The results are given for a time of t=10 hours and two choices of applied potential (E_m=-0.4 V_SHE and E_m=0.2 V_SHE). While for negative potentials the hydrogen is absorbed from both the exterior and crack surfaces, the majority of the hydrogen for the positive potential simulation enters through the crack. These results can be rationalised by inspecting the pH contours obtained, which are shown in <ref>. 
Here, it is worth noting that although the pH is calculated in the entire domain, its physical meaning is limited to electrolyte-containing regions and thus results are only shown for regions where ϕ>0.1. The calculations show that the pH is dominated by the hydrogen evolution reactions for negative metal potentials (<ref>a), causing the electrolyte to become highly basic within the defect. In contrast, the accelerated corrosion occurring for positive metal potentials lowers the pH within the pit and crack regions relative to the exterior surface. As a result of this low pH, hydrogen uptake is enhanced within the crack for the applied potential E_m=0.2 V_SHE, while a smaller sensitivity to the existence of defects is observed for E_m=-0.4 V_SHE. The volume-averaged lattice hydrogen concentration C_L and the associated loss of load carrying capacity are given in <ref>, as a function of time. Results are given for a wide range of applied potentials so as to showcase the ability of the model in predicting the sensitivity to the environment of the hydrogen uptake and the failure time. As shown in <ref>, the initial external load is not sufficient to cause an immediate fracture of the metal and thus crack growth requires the accumulation of sufficient hydrogen in the crack tip region. For the E_m=-0.2 V_SHE and E_m=-0.1 V_SHE cases, no crack propagation occurs within 100 days and the average interstitial lattice hydrogen (<ref>) starts to plateau, indicating that under these circumstances (environment, material, applied load) no fracture propagation due to hydrogen embrittlement will ever occur. For all other metal potentials, the hydrogen absorption is sufficient to cause a crack to develop, reducing the external traction to zero. Comparing Figs. <ref>a and <ref>b, one can see the role of localised hydrogen uptake in driving the cracking process. The cases dominated by corrosion (high E_m) show a lower average hydrogen concentration at failure, as for these cases the majority of the hydrogen is absorbed into the metal through the crack walls near the fracture tip. In contrast, the cases with a negative metal potential absorb the majority of hydrogen from the exterior surface, away from the crack tip and stress concentrations, and thus require this hydrogen to diffuse towards the crack. This results in a higher average hydrogen concentration at the point of failure. It can be seen that the most aggressive environment corresponds to the one with the lowest applied potential (E_m=-0.5 V_SHE), as the hydrogen reactions are greatly enhanced. However, the interplay between applied potential and hydrogen embrittlement susceptibility is not straightforward. As discussed in the context of Figs. <ref> and <ref>, the enhanced corrosion process associated with positive applied potentials can lead to a localised reduction in pH, and thus an increase in hydrogen uptake, which can overcompensate the reduction in hydrogen uptake associated with the deceleration in hydrogen reaction rates. The results obtained not only showcase the ability of the model to shed light on the complex interplay between the environment and electro-chemo-mechanical failures but also demonstrate its potential in delivering predictions over technologically-relevant scales, despite the large number of degrees-of-freedom involved (12 per node). § CONCLUSIONS We have presented a new phase field-based theoretical and computational framework for simulating electro-chemo-mechanical fracture. 
For the first time, the modelling framework combines: (i) an electrochemical description of electrolyte behaviour, capable of handling an arbitrary number of ionic species and changes in electrolyte potential, (ii) surface reaction modelling at the electrolyte-electrode interface, (iii) species absorption and subsequent stress-driven bulk diffusion within the electrode metal, and (iv) a phase field description of fracture that incorporates toughness degradation due to the presence of aggressive species. Moreover, we present a novel formulation to represent the electrolyte contained within cracks within the context of phase field fracture models. This formulation is based upon the governing equations for the electrolyte, mapping from an electrolyte represented in a discrete manner to a smeared representation of the electrolyte. This approach is compared to the widely used distributed diffusion model, showing that both can be described through similar schemes, only altering the capacity, surface, and diffusion distribution functions. The theoretical framework was implemented using the finite element method. The coupled electrical-chemical-deformation-fracture problem was solved in a staggered manner, with the primary fields (nodal degrees-of-freedom) being: (i) the electrolyte potential, (ii) the concentrations of relevant ionic species, (iii) the interface coverage of absorbable species, (iv) the concentration of diluted species in the bulk metal, (v) the displacement field, and (vi) the phase field order parameter. Given the number of fields involved, special emphasis is placed on improving stability and efficiency. Among others, we introduce a lumped integration scheme that greatly reduces oscillations and enables adopting large time increments without convergence problems. Also, strategies are adopted to prevent ill-constrained degrees-of-freedom. To demonstrate the potential of our computational framework, we particularise our generalised model to the analysis of metallic fracture due to the uptake of hydrogen from aqueous electrolytes, a technologically-relevant problem that is pervasive across the defence, transport, construction and energy sectors. Several boundary value problems are addressed to showcase the ability of the model to adequately simulate the behaviour of electrolytes contained within cracks and to capture the interplay between fracture and electro-chemo-mechanical phenomena. Key findings include: * The physics-based formulation presented to describe electrolytes within cracks is shown to capture the sensitivity to crack opening height, unlike other existing models. Predictions of hydrogen uptake and ionic species distribution show an excellent agreement with discrete fracture simulations. Moreover, the predictions of this physics-based model display a negligible sensitivity to the choice of phase field length scale ℓ for stationary cracks, with some sensitivity being observed in propagating cracks due to the relation between ℓ and the material strength. * The model is shown to be capable of adequately predicting the interplay between the environment, the material properties and the applied load for both crack-contained electrolytes and the more general case of free-flowing electrolytes. Widely observed experimental trends can now be rationalised in terms of changes in electrolyte behaviour, hydrogen uptake and toughness degradation. 
* The analysis of defect-containing metals exposed to free-flowing, hydrogen-containing electrolytes reveals that high applied potentials, which favour corrosive reactions relative to the hydrogen evolution reaction, can result in early failures due to local acidification of the electrolyte solution in the defect region. § ACKNOWLEDGMENTS Financial support through grant EP/V009680/1 ("NEXTGEM") from the UK Engineering and Physical Sciences Research Council (EPSRC) is gratefully acknowledged. Tim Hageman additionally acknowledges support through the research fellowship scheme of the Royal Commission for the Exhibition of 1851, and Emilio Martínez-Pañeda additionally acknowledges financial support from UKRI's Future Leaders Fellowship programme [grant MR/V024124/1]. The authors also acknowledge computational resources and support provided by the Imperial College Research Computing Service (http://doi.org/10.14469/hpc/2232). § DATA AVAILABILITY The code used to produce the results presented in this paper, together with documentation detailing its use, is made freely available at <www.imperial.ac.uk/mechanics-materials/codes> and <www.empaneda.com>. Documentation is also provided, along with example files that enable reproduction of the results shown in <ref>.
http://arxiv.org/abs/2307.00984v1
20230703130317
Predicting beauty, liking, and aesthetic quality: A comparative analysis of image databases for visual aesthetics research
[ "Ralf Bartho", "Katja Thoemmes", "Christoph Redies" ]
cs.CV
[ "cs.CV", "cs.DB" ]
Ralf Bartho, Katja Thoemmes, Christoph Redies (corresponding author: christoph.redies@med.uni-jena.de), Experimental Aesthetics Group, Institute of Anatomy I, University of Jena School of Medicine, Jena, Germany. Predicting beauty, liking, and aesthetic quality: A comparative analysis of image databases for visual aesthetics research. August 1, 2023. In the fields of Experimental and Computational Aesthetics, numerous image datasets have been created over the last two decades. In the present work, we provide a comparative overview of twelve image datasets that include aesthetic ratings (beauty, liking or aesthetic quality) and investigate the reproducibility of results across different datasets. Specifically, we examine how consistently the ratings can be predicted by using either (A) a set of 20 previously studied statistical image properties, or (B) the layers of a convolutional neural network developed for object recognition. Our findings reveal substantial variation in the predictability of aesthetic ratings across the different datasets. However, consistent similarities were found for datasets containing either photographs or paintings, suggesting different relevant features in the aesthetic evaluation of these two image genres. To our surprise, statistical image properties and the convolutional neural network predict aesthetic ratings with similar accuracy, highlighting a significant overlap in the image information captured by the two methods. Nevertheless, the discrepancies between the datasets call into question the generalizability of previous research findings on single datasets. Our study underscores the importance of considering multiple datasets to improve the validity and generalizability of research results in the fields of experimental and computational aesthetics. § INTRODUCTION Computer vision research has benefited from the availability of large, high-quality (peer-reviewed) image datasets. Arguably, the most influential dataset in computer vision has been ImageNet <cit.>, which was developed for visual recognition tasks. Currently, the complete ImageNet dataset contains more than 14 million images with a large variety of annotations. It is easily accessible and freely available for research. The fact that many research groups worked on the same dataset allowed for good comparability of results, a well-connected research field, high visibility, and rapid propagation of new methods, which together led to significant breakthroughs <cit.>. As obvious as the benefits of a universally accepted dataset may be, defining and creating such a dataset for aesthetic research is difficult. Compared to the classification of objects, aesthetic evaluation is a far more complex process, which is influenced not only by image features, but also by cognitive, emotional, and contextual factors (see , or , for models of aesthetic perception). This complexity may be the reason why aesthetic research, despite many attempts, has not yet produced a universally accepted reference dataset such as ImageNet. Instead, the situation is characterized by the coexistence of many diverse datasets. Experimental Aesthetics and Computational Aesthetics are two major subfields of empirical aesthetics research (for a review, see ). Experimental Aesthetics uses psychological research paradigms to investigate principles that underlie aesthetic experience.
One of its goals is to identify objective properties of visual stimuli that contribute to subjective aesthetic evaluations in the beholder. In this field, researchers often use comparatively small and specialized datasets (e.g. ). Datasets often comprise hand-picked, high-resolution photographs or scans of paintings, and the aesthetic ratings are collected in laboratory experiments. Computational Aesthetics uses research paradigms from computer science. Its aim is to increase the accuracy of predictions for aesthetic ratings. In this field, large and diverse datasets have been preferred (e.g. ). The images are often scraped from internet photo platforms in medium to low resolution. Most of the images are photographs and the aesthetic ratings are collected through crowd sourcing. While the two fields of research are approaching very similar questions from different perspectives, mutually beneficial exchange between them is rather scarce. Studies in both Experimental Aesthetics and Computational Aesthetics are usually based on specifically selected (or even purpose-built) datasets, adversely affecting both comparability and generalizability of results. The datasets used in Experimental Aesthetics and Computational Aesthetics thus differ greatly in terms of number, origin and type of images. Moreover, the datasets were rated according to different aesthetic rating types, such as beauty, liking or aesthetic quality. With this in mind, the goal of this study is not to evaluate the quality of existing datasets or to recommend the use of one dataset over another. Rather, we want to gain insight into the extent to which research results can be replicated across different datasets. The heterogeneity of the datasets used in aesthetic research also extends to the image features (or Statistical Image Properties; SIPs). Not only has a vast number of image features been studied, but the methods used to calculate the same or similar features may also differ between studies. For example, methods for measuring visual complexity <cit.> have been based on JPEG compression effectiveness <cit.>, edge density <cit.> or luminance and color gradients <cit.>. <cit.> examined different complexity measures and found correlations between measures ranging from ρ=0.60 to ρ=0.82, raising the question of whether studies that use these different complexity measures really analyze the same visual phenomenon. Such variability is even found for very basic SIPs. For example, <cit.> compute Aspect Ratio as height by width, while <cit.> and <cit.> calculate it as width by height. In addition to classic hand-crafted image features, the use of learned features from neural networks has also increased in both Experimental Aesthetics and Computational Aesthetics. Convolutional Neural Networks (CNNs; ) have led to considerable advances and now represent state-of-the-art methods in almost all classic areas of computer vision (for example, object recognition, scene segmentation, and facial recognition; ). In aesthetic research, they have also been receiving increasing attention. Standard CNNs trained for object recognition can be used to predict aesthetic judgments or to discriminate art from non-art, even though they have never been trained for these tasks <cit.>. All of these approaches have in common that they extract learned features from internal representations of neural networks.
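Before turning to the interpretability of such learned features, the earlier point about diverging complexity measures can be made concrete. The following Python sketch is purely illustrative (the Canny sigma and JPEG quality settings are our own assumptions and do not correspond to any of the cited implementations); it computes two common complexity proxies, edge density and JPEG compression ratio, and their rank correlation across a set of images.

import io
import numpy as np
from PIL import Image
from scipy.stats import spearmanr
from skimage import feature

def edge_density(gray):
    # Fraction of pixels marked as edges by a Canny detector (sigma is an assumption).
    return feature.canny(gray, sigma=2.0).mean()

def jpeg_complexity(gray):
    # Compressed file size relative to the number of pixels; images that compress
    # poorly are treated as more complex.
    img = Image.fromarray((np.clip(gray, 0, 1) * 255).astype(np.uint8))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=75)
    return buf.tell() / gray.size

def compare_complexity_measures(gray_images):
    # gray_images: iterable of 2D float arrays in [0, 1].
    ed = [edge_density(g) for g in gray_images]
    jc = [jpeg_complexity(g) for g in gray_images]
    rho, p_value = spearmanr(ed, jc)
    return rho, p_value

If two such measures captured exactly the same visual property, their rank correlation would approach 1; the intermediate correlations reported in the literature suggest that they overlap only in part.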
Despite the fact that neural networks are powerful and versatile tools, the complex information encoded in their layers of several thousand 'neurons' cannot be readily interpreted. By contrast, SIPs that reflect the distribution of color or luminance gradients have a relatively clear relationship to the function of the human visual system. A recent study by <cit.> found that low-level (objective) SIPs such as hue or saturation are more strongly represented in the lower layers of CNNs, while high-level (subjective) image features such as concreteness or dynamics tend to be found in the upper layers. To what extent the information captured by hand-crafted SIPs and the features learned by neural networks overlap remains to be elucidated. Furthermore, the methods for measuring the quality of predictions of aesthetic ratings are also not consistent across aesthetic research. Work in Computational Aesthetics often binarizes ratings and relies on classification models. These models, such as Support Vector Machines (SVM) or neural networks, are comparatively complex, have many degrees of freedom, require hyperparameter optimization and carry a high risk of overfitting. Usually, overfitting can be suppressed by additional regularization methods, which further increase the complexity of the classification models. In contrast, work in Experimental Aesthetics often models the data by using (multiple) linear regression models, which have comparatively few degrees of freedom, are easy to interpret and replicate, and do not require hyperparameter fine-tuning. However, they have a much lower prediction accuracy. Taken together, the field of visual aesthetics is challenged by considerable heterogeneity in (1) available image datasets with corresponding aesthetic annotations, (2) the computation of image features, and (3) modeling methods to predict aesthetic ratings by image features. This diversity leads to relatively poor comparability of research results. As a first step to find more common ground, we compare different aesthetic datasets in the present study in terms of aesthetic ratings, image features, and modeling. In the first part of the present paper, we provide an overview of the image datasets used in aesthetic research. In our study, we include twelve datasets that contain at least 500 images with corresponding aesthetic annotations for one or more aesthetic rating types (liking, beauty, aesthetic quality) and compare them based on objective image features (SIPs). To determine the reproducibility of results across the datasets, we assess to what extent the aesthetic ratings in each dataset can be predicted (A) by a set of 20 SIPs, (B) by the features in the different layers of a VGG19 convolutional neural network <cit.>, and (C) by a combination of both. The 20 SIPs and the VGG19 features have been studied previously with respect to aesthetic ratings (see Section <ref>). Note that we do not intend to obtain state-of-the-art prediction accuracy in this context. Consequently, to keep the measurement of the prediction accuracy as simple and reproducible as possible, we use regression models whenever possible. In the second part of the present paper, we shed some light on the meaning of the VGG19 layers by mapping SIPs onto them. We examine the extent to which the information captured by the 20 SIPs and the layers of the VGG19 overlaps. Our results will help to understand what aspects of aesthetic perception CNNs capture internally.
Last but not least, we hypothesize why CNNs can predict aesthetic ratings even though they have never been trained for this task. § MATERIAL & METHODS §.§ Image Datasets In order to collect as many published image datasets with aesthetic ratings as possible, we performed an extensive literature search in the fields of Computational Aesthetics and Experimental Aesthetics and related subject areas. The literature search was supplemented by pertinent information from the homepage of the Grappa project at the University of Leuven[<www.grappaproject.eu/>], and the review by <cit.>. These two sources already listed six and seven datasets, respectively, of the twelve datasets used in the present study (Table 1). In order to be included in our study, a dataset had to meet two criteria: firstly, it had to contain a minimum of 500 images, and secondly, the rating types had to be closely associated with aesthetic preference, such as liking, aesthetic quality, or beauty. Some datasets that met these criteria were not included in the present study because they were not publicly available and their authors did not respond to our enquiries. Figure <ref> shows the five images with the highest and lowest ratings, respectively, for nine of the selected datasets. For copyright reasons, images of the AVA, EVA and Photo.net datasets cannot be shown. In the following sections, we will point out some of the characteristics of the twelve datasets (Table 1). Besides the respective aesthetic ratings, most datasets contain rich additional meta-information, which is not listed here because it is not directly relevant to our research question. §.§.§ Paintings Datasets The JenAesthetics dataset <cit.> is one of three datasets analyzed here that contain only paintings. The images originate from the Google Art Project, which provided open access to images of paintings from various public museums. When the 1629 images from the Google Art Project were selected, the main focus was on good preservation of the paintings, high resolution and good image quality. JA contains colored oil paintings from different periods of Western art. The vast majority of the paintings is representational, with few abstract artworks only. In a lab study, both beauty (JA[beauty]) and aesthetic quality (JA[aesth]) were surveyed on a continuous scale using a sliding bar. The Vienna Art Picture System (VAPS; ) is another dataset of paintings, which is similar to the JA dataset. It features high-resolution images of well-preserved artworks from different art periods of Western art. Furthermore, 180 of the 999 images are abstract paintings. Ratings of liking were collected for this dataset. The WikiArt Emotions Dataset (WikiEmotions; ) is the third paintings dataset that we examined. As the name already indicates, it is a subset of the WikiArt dataset[<www.wikiart.org>] and contains images from different art periods. Liking ratings were obtained with the crowd sourcing tool CrowdFlower. Almost half of the images in the WikiEmotions dataset are abstract paintings. This is a much larger fraction than in the JA and VAPS datasets. §.§.§ AI-Generated Datasets The ArtPics dataset <cit.> is comparatively small and differs from the other datasets in that it is confined to AI-generated images, which show objects, plants, or animals in the center of the image on a large white background (Figure <ref>). The images were generated using Neural Style Transfer by applying the style of various paintings to the content of photographs of objects, plants, or animals. 
All images have the same relatively low resolution of 600 × 450 pixels. §.§.§ Photographs Datasets The Aesthetics and Attributes Database (AADB; ) contains images from Flickr[<www.flickr.com>]. When selecting images for this dataset, the authors removed non-photographic images and adult content. AADB has ratings for aesthetic quality. Each image was rated by a total of 5 different participants on Amazon Mechanical Turk (AMT). The rating scale extends from 0 to 5, contrary to the information in the original paper, which mentions a scale from 1 to 5. According to one of the authors (Shu Kong, personal correspondence), this typographical mistake does not affect their results because the ratings were normalized. The Aesthetic Ratings from Online Data dataset (AROD; ) contains images and aesthetic ratings retrieved from online information that was available on Flickr. The aesthetic measure is calculated as the ratio between the favs and the views that an image has received on Flickr. This ratio is mapped logarithmically to account for the large differences between the individual images in the number of favs and views. The rating scores of the AROD dataset thus differ from most of the other datasets analyzed, which mostly collect their ratings using explicit rating scales. In a user study, the authors showed a clear aesthetic preference for images with higher AROD scores compared to images with lower scores. Thus, their measure can be interpreted as a measure of aesthetic preference. The Aesthetic Visual Analysis dataset (AVA; ) is one of the oldest, largest, and most widely used datasets for visual aesthetics research, especially in the field of Computational Aesthetics. The images and aesthetic ratings of the AVA dataset originate directly from the DPChallenge[<www.dpchallenge.com>]. At this website, an online community of photography enthusiasts rates images for photographic challenges (e.g. Animal People Interaction, Chocolate, Death) on a scale from 1 to 10. Images in the AVA dataset are likely to receive high ratings when they are of particularly high aesthetic quality, meet photographic quality standards, or possibly play with the theme of the challenge in a humorous way. Thus, the reference context for the ratings is not consistent across images, which is one of the major limitations of this dataset. The Explainable Visual Aesthetics dataset (EVA; ) is a small subset of the AVA images that contains images with an AVA rating between 4 and 9 only (original AVA scale 1-10). Furthermore, AVA images that show adult content, advertisement or a lot of text were not included. An online survey was conducted to rate the aesthetic quality of the selected images. EVA is therefore one of the few image datasets, for which two different aesthetic ratings are available (AVA score and aesthetic quality, hereafter referred to as EVA score). We analyzed the EVA images using both scores. As the name suggests, the Flickr-AES <cit.> is another dataset that uses images from Flickr.com. In its design, Flickr-AES is almost identical to the AADB dataset. Aesthetic quality ratings were collected using AMT on a nearly identical scale of 1-5, with each image also rated by an average of 5 people. Of all the datasets examined, Flickr-AES images have the smallest median resolution (Image Size; Figure <ref>). In this respect, it differs strongly from the other datasets that also use Flickr images. HiddenBeauty <cit.> is one more dataset with images from Flickr.com. 
It differs from the other three Flickr datasets in its rating type (beauty) and in the use of CrowdFlower as the survey method. In terms of size, rating scale and number of raters per image, it is very similar to AADB and Flickr-AES (Table <ref>). With 900 images, the Open Affective Standardized Image Set (OASIS; ) is the smallest dataset. The dataset was originally published with valence and arousal ratings; a later study by <cit.> additionally collected ratings for beauty, which we use here. Note, however, that the images were specifically selected to evoke positive and negative emotions, which sets the OASIS dataset apart from the others. As in the ArtPics dataset, all images in the OASIS dataset have exactly the same resolution of 500 × 400 pixels, which is comparatively low considering the resolution of modern desktop screens. The online platform Photo.net was one of the first websites to provide a large number of images along with user ratings <cit.>. The aesthetic quality of images in Photo.net was rated directly by the community of users on a scale of 1 to 7. Unlike AVA ratings, which are based on diverse photo challenges, and AROD ratings, which are calculated indirectly from favs and views, Photo.net ratings are relatively similar to surveyed ratings, even though they come from an online platform. There is very little overlap between the individual datasets in the images they contain, with the exception of the EVA dataset, which is a subset of the AVA dataset. Across the four datasets derived from Flickr, we identified an exceedingly small number (<20) of images that are present in more than one of the datasets. A similar picture is obtained for the three datasets that contain traditional Western art (JA, VAPS, and WikiEmo). JA and VAPS share 61 images, JA and WikiEmo 77 images, and VAPS and WikiEmo 65 images. However, only a fraction of these images are exact duplicates, and images of a given painting can differ substantially in saturation, contrast or resolution. The datasets analyzed differ widely in the number of images they contain (Table <ref>). For example, the AROD dataset is 380 times larger than the VAPS dataset. These large differences could possibly introduce a strong bias in the results. To avoid effects of sample size, we carried out our analyses with a randomly selected subset of 500 images per dataset. For the large datasets in particular, a sample of 500 images might not be representative. However, it is not our intention to obtain representative results for the individual datasets, but to test the generalizability of the research results for different datasets. We thus treated each of the subsets as an independent dataset with aesthetic ratings, resulting in twelve comparable datasets. The random selection of images from each dataset ensures that the subsets include a wide range of image types and aesthetic responses. Because the JA and EVA datasets each have two different aesthetic ratings, a total of fourteen different ratings are studied. Note that the same sample of 500 images is used for each of the two JA ratings and the two EVA ratings, respectively. The number of ratings per image also differs widely between the datasets (Table <ref>). It ranges from five ratings per image for the AADB, Flickr-AES and HiddenBeauty datasets to an average of 210 ratings per image for the AVA dataset. Note that for the AROD dataset, the number 6868 denotes the number of views per image on Flickr.
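As a concrete illustration of this subsampling step, a reproducible 500-image subset with rescaled ratings could be drawn as in the following minimal sketch (the tabular layout and the column names 'image' and 'rating' are assumptions made for illustration, not the authors' actual data format).

import pandas as pd

def make_subset(df, n=500, seed=0):
    # Draw a fixed-size, reproducible random subset and min-max scale the ratings,
    # so that subsets from different datasets are directly comparable.
    sub = df.sample(n=min(n, len(df)), random_state=seed).copy()
    r = sub["rating"].astype(float)
    sub["rating_scaled"] = (r - r.min()) / (r.max() - r.min())
    return sub.reset_index(drop=True)

# Hypothetical usage:
# subsets = {name: make_subset(pd.read_csv(f"{name}_ratings.csv"))
#            for name in ["AADB", "AVA", "VAPS"]}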
The low number of ratings in some databases raises questions about how representative they are, especially if rating procedures differ between datasets. <cit.> report significant rater agreement (i.e. correlations) when drawing pairwise comparisons of rankings from their five raters. However, agreement among raters (i.e. shared taste) depends largely on the individual respondents as well as heterogeneity in the image material <cit.>. For aesthetic ratings based on small sample sizes, reliability is thus very questionable. The correlation between the EVA aesthetic score and the AVA score for all images in the EVA dataset underlines this problem. The Pearson's correlation coefficient of r = .49 (p < 0.05) is relatively low for two aesthetic ratings for the same images. This discrepancy may be due to the different survey procedures used, i.e., photographic challenges for the AVA scores versus an online experiment with explicit rating scales for all EVA scores. Overall, when comparing different datasets, one must consider the methodological differences in data collection and the possible lack of overlap in what appear to be superficially similar aesthetic ratings. §.§ Statistical Image Properties - handcrafted image features Statistical image properties (SIPs) are objective image properties that are calculated from physical image information according to a fixed mathematical procedure and ideally capture aspects related to human vision. Examples are hue and saturation of color, or complexity and spatial distribution of luminance gradients. In Computational Aesthetics, SIPs are often referred to as handcrafted image features. Unfortunately, easy-to-use scripts or code for calculating the SIPs have not always been published alongside the scientific publications that introduce them. Therefore, it can be difficult for researchers to handle the number and variations of published SIPs. As a consequence, SIPs are not always used consistently across research groups. On this background, the 20 SIPs studied in the present paper were selected based on the following three criteria: 1) The 20 SIPs capture a wide range of visual features; 2) the SIPs have been studied in previous aesthetic research projects; and 3) the algorithms for the calculation of the SIPs have been documented in detail or we had access to the original code for the calculations. In the following paragraphs, we will briefly describe the SIPs used in the present work. For more detailed information, the reader is referred to the cited original literature (for a general review, see <cit.>). Color Measures (Hue, Saturation, Luminance, Lab(a), Lab(b), Color Entropy). Color is a central feature of many artworks. We calculate SIPs that capture color information in HSV and CIELAB color space. Specifically, we compute the mean Hue and the mean Saturation in HSV space, as well as the mean values for all LAB channels (henceforth abbreviated Luminance, Lab(a), Lab(b); note that Luminance is also referred to as intensity in other work). As a complement to the established mean color values, we calculate the Shannon entropy of the hue channel to capture the colorfulness of an image (Color Entropy; or HSV[H] entropy see <cit.>). This measure assumes high values if an image displays many color hues that are distributed with about equal intensity across the entire range of hues, regardless of which colors these are in detail. An image with only one color hue would have a very low Shannon entropy in the hue channel. 
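A minimal sketch of how these color SIPs can be computed is given below (an illustrative reimplementation using OpenCV conventions, not the original code; note that OpenCV stores hue in the range 0-179 and the CIELAB channels as 8-bit values, so absolute numbers may differ from other implementations).

import cv2
import numpy as np

def color_sips(bgr_image):
    # Mean Hue and Saturation (HSV), mean L, a, b (CIELAB) and the Shannon
    # entropy of the hue channel as a measure of colorfulness (Color Entropy).
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    hue, sat, _ = cv2.split(hsv)
    L, a, b = cv2.split(lab)

    hist, _ = np.histogram(hue, bins=180, range=(0, 180))
    p = hist / hist.sum()
    p = p[p > 0]
    color_entropy = float(-np.sum(p * np.log2(p)))

    return {"Hue": float(hue.mean()), "Saturation": float(sat.mean()),
            "Luminance": float(L.mean()), "Lab(a)": float(a.mean()),
            "Lab(b)": float(b.mean()), "Color Entropy": color_entropy}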
All of these color SIPs have been previously studied in aesthetic research <cit.>. Aspect Ratio and Image Size. In the present work, Image Size and Aspect Ratio are calculated as follows: Aspect Ratio = image width / image height, and Image Size = image width + image height. Although both measures are comparatively easy to calculate, some remarks are still necessary. First, the method of calculating the Aspect Ratio is not consistent in the literature. For example, <cit.> calculate Aspect Ratio as height by width while <cit.> and <cit.> calculate it as width by height. In the present work, we use the latter method, because it is also the convention used to specify display formats. However, the different calculation methods have only a limited effect on the results, since both values correlate strongly with each other. Second, the image size of the images in the original dataset is not necessarily the same as the size at which the ratings were collected. For some of the surveys, the images were scaled. In addition, the maximum resolution of some PC displays limits the possible image resolution. Therefore, we determined Image Size after scaling images to their actual display size, where indicated. If no display size was specified, the original image size was used, assuming display on a 1920 × 1200 pixel screen without scaling. Images exceeding this resolution were scaled down (keeping the Aspect Ratio) to fit the display size. Third, we follow <cit.> for the calculation of Image Size as the sum of image height and image width. Contrast and Luminance Entropy. Contrast is a widely studied feature in aesthetic research and there are many different methods to calculate it. It is unclear to what extent these different methods capture the same visually perceivable image property <cit.>. In the present work, Contrast is defined as the root mean square (rms) contrast <cit.>, which is the standard deviation of the L channel of the CIELAB color space. We also calculate the Shannon entropy <cit.> of the L channel of the CIELAB color space. Since different entropy measures are calculated in the present work, we refer to this entropy measure as Luminance Entropy. In other publications <cit.>, it is often referred to simply as entropy or Shannon entropy. Edge-Orientation Entropy. Second-Order Edge-Orientation Entropy is used to measure how independently (randomly) edge orientations are distributed across an image <cit.>. To obtain this measure, the orientation of each edge element is related to the orientation of all other edge elements in the same image by pairwise comparison. An image whose edges all have the same orientation and are distributed over the image at regular intervals would have a very low Edge-Orientation Entropy. An image with edge elements that have random orientations and are randomly distributed over the image would have maximal Edge-Orientation Entropy. In this case, the orientations of the edge elements would be maximally independent of each other across the image. PHOG Measures (Self-Similarity, Complexity and Anisotropy). Self-Similarity, Complexity and Anisotropy measures are assessed using the (Pyramid of) Histograms of Orientation Gradients ([P]HOG) method, which was originally developed for object recognition and image categorization <cit.>. For details on the computation of Self-Similarity, Complexity, and Anisotropy, see the appendix in <cit.>.
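Before the PHOG-based measures are described in more detail, the simpler SIPs defined above can be written down compactly. The following sketch is an illustrative reimplementation under the stated definitions (rms contrast as the standard deviation of the CIELAB L channel, Luminance Entropy as the Shannon entropy of that channel); it is not the code used in the original studies.

import cv2
import numpy as np

def basic_sips(bgr_image):
    height, width = bgr_image.shape[:2]
    L = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)[:, :, 0]

    hist, _ = np.histogram(L, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]

    return {"Aspect Ratio": width / height,            # width by height, as above
            "Image Size": width + height,              # sum of width and height
            "Contrast": float(L.std()),                # rms contrast
            "Luminance Entropy": float(-np.sum(p * np.log2(p)))}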
In brief, Self-Similarity captures how similar the histograms of gradient orientations are in a pyramid of subregions of an image compared to the histogram of the entire image or other subregions. High values for Self-Similarity indicate that the subregions are more similar to the entire image. Anisotropy measures how different the strengths of the gradients are across orientations in an image. Lower anisotropy indicates that the strength of the oriented gradients is more uniform across orientations. Higher anisotropy means that oriented gradient strength differs more strongly. Complexity is calculated as the mean gradient strength throughout an image. Higher complexity indicates a stronger mean gradient. Fourier Slope and Fourier Sigma. Fourier Slope and Fourier Sigma are based on the Fourier power spectrum of the gray-scale version of an image. Roughly speaking, the Fourier Slope indicates the relative strength of high spatial frequencies versus low spatial frequencies. The Fourier Sigma indicates how linearly the log-log plot of the Fourier spectrum decreases with increasing spatial frequency. Higher values for Fourier Sigma correspond to larger deviations from a linear course. For a more detailed description of these SIPs, see <cit.>. Symmetry-lr and Symmetry-ud. <cit.> developed a symmetry measure that is based on the first layer of CNN filters from a pre-trained AlexNet <cit.>. Since these filters capture both color-opponent features, luminance edges, and texture information, the symmetry measures computed from these filters more closely match the human perception of symmetry than earlier measures based on the symmetry of gray-scale pixels. For the present work, left/right symmetry (Symmetry-lr) and up/down symmetry (Symmetry-ud) were calculated with this method. For a broader overview of the importance and previous results on symmetry in aesthetics research, see <cit.>. Sparseness and Variability of Low-Level CNN Features. <cit.> used the first convolutional layer of a pre-trained AlexNet to also measure Sparseness/Richness and Variability of the feature responses. A max-pooling operation was applied to each map of the filter responses of the 96 filters in the first CNN layer. Sparseness is calculated as the median of the variances of each max-pooling map. Variability is the variance over all entries of all max-pooling maps. Note that in the original paper by <cit.>, Sparseness of SIPs was referred to as the inverse of Richness. In the present work, we decided to use the term Sparseness because the calculated variance relates directly to it (and not to its inverse value). For a visualization of Sparseness, see the boxplots in Figure <ref> for the JA dataset (traditional oil paintings; low Sparseness) compared to the ArtPics dataset (style-transferred objects on large white background; high Sparseness). §.§ Image Features Learned by the VGG19 Network With respect to their predictive power for aesthetic ratings, we compared the above-mentioned handcrafted features to image features learned by a CNN. In contrast to the SIPs described above, the learned features are usually high-dimensional and not easy to relate to known aspects of human vision. Because of their great predictive power, however, understanding and deciphering them promises significant advances in basic aesthetic research. The VGG19 is one of the most widely used CNN architectures. For example, it is the basis of many neural style transfer algorithms <cit.>. 
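As an illustration of how activations can be read out from such a network, the following PyTorch sketch (the tooling and preprocessing choices are our own assumptions, not a description of the authors' implementation) collects one spatially pooled feature vector per convolutional layer of a pre-trained VGG19; these vectors can then be reduced, for example by PCA, before being used as predictors.

import torch
from torchvision import models, transforms

vgg_features = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def conv_layer_features(pil_image):
    # Returns a list of 16 vectors, one per convolutional layer (taken after the
    # corresponding ReLU), obtained by global average pooling of the feature maps.
    x = preprocess(pil_image).unsqueeze(0)
    vectors = []
    with torch.no_grad():
        for module in vgg_features:
            x = module(x)
            if isinstance(module, torch.nn.ReLU):
                vectors.append(x.mean(dim=(2, 3)).squeeze(0).numpy())
    return vectors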
Based on its previous successful application in aesthetics research, we decided to use the VGG19 for the present study. VGG19 denotes the version of the VGG network with 16 convolutional and 3 fully connected layers. Like the 20 SIPs described above, we will examine the learned features of the 16 convolutional layers in terms of their predictive power with respect to the aesthetic ratings for each of the twelve datasets. We will then ask to what extent the image information captured by the SIPs and by the VGG19 overlap. Details of the procedure are described in the following section. §.§ Statistical Methods Our analysis is based on subsets of the original databases with 500 randomly selected images each. An identical image number ensures good comparability of the twelve datasets when investigating the predictive power of the SIPs and the VGG19 features for the aesthetic ratings. Because the number of SIPs is 20, we decided to study the same number of the VGG19 features to ensure comparability of the results. To this aim, we use principal component analysis (PCA) to reduce the feature vectors of each VGG19 layer to the first 20 PCA components with the strongest variance. All subsequent analyses are based on 20 predictive variables, i.e. either the 20 SIPs or the 20 PCA components of the respective VGG19 layer. The only exceptions are the OASIS and ArtPics datasets, which, as mentioned earlier, contain images with fixed resolution, so the values for the SIPs Aspect Ratio and Image Size do not differ for the images. For these two datasets only, Image Size and Aspect Ratio are not included as predictor variables. Because Experimental Aesthetics and Computational Aesthetics do not necessarily employ the same research paradigms (e.g., for significance testing, measuring predictive accuracy, or statistical tests), we use statistical methods in the present work that are standard in either field. As the regression method to model the data, we use linear regression almost exclusively. As a measure of predictive accuracy, we calculate adjusted R^2 values, which are closely related to the root-mean-square error that is commonly employed in Computational Aesthetics. Because both the SIPs and the ratings are not necessarily normally distributed (see Figure <ref>), we use Spearman's rank correlation coefficients to measure the relation between individual SIPs and the aesthetic ratings. To measure the overall effect size of variables in multiple linear regression models, we calculate standardized β values. We also face the problem of possible multicollinearity and of other types of interferences between the 20 predictor variables. As <cit.> showed, multicollinearity between predictor variables should not necessarily be eliminated because, in some cases, it even has desirable effects. However, multicollinearity can also lead to a deterioration in predictive accuracy. In Experimental Aesthetics, the selection of the best predictor variables is usually accomplished by verifying that the variables play a consistent and interpretable role in the models and pass certain significance tests. In Computational Aesthetics, the focus is instead on generalizability of the statistical models. Here, the selection of predictor variables is often based on their predictive accuracy on a test dataset. Significance and generalizability are not necessarily interchangeable, and some statistical methods satisfy only one of the two requirements. 
We therefore developed a methodology for selecting predictor variables for linear regression models that is a combination of cross-validation <cit.> and forward feature selection <cit.>. Cross-validation is an established method in Computational Aesthetics, but it is rarely used in Experimental Aesthetics. We apply a 100-fold repeated 2-fold cross-validation in this work. The measured adjusted R^2 value is the mean of these 100 repetitions. This choice is motivated by the comparatively small and sometimes very heterogeneous samples of 500 images per dataset. The forward feature selection method used in the present paper starts with an empty set of variables. First, a regression model is formed for each of the 20 variables and cross-validation is used to find the model with the highest adjusted R^2 value from these 20 variables. The variable corresponding to this model represents the first variable of the final set of variables. Regression models are then built again for the remaining 19 variables, with the models now consisting of two predictive variables, the best-of-20 variable selected in the previous step, and one variable from each of the remaining 19. Cross-validation is again used to determine the adjusted R^2 value for each of the 19 possible combinations. The variable corresponding to the best regression model is then added as a second variable to the final set of variables. Iteratively, additional variables are added to this set from the remaining variables not yet included until the adjusted R^2 value of the model no longer improves. This final set of variables then forms the final regression model, whose adjusted R^2 value is determined with cross-validation. Only for the final set of variables, the standardized β values are determined. Note that the same combination of regression model, forward feature selection, and cross-validation is applied for the 20 SIPs, the 20 VGG19 PCA components and the combination of both. The combination of cross-validation and forward feature selection merges paradigms from Experimental Aesthetics and Computational Aesthetics; it yields robust models with good predictive accuracy and penalizes redundant predictor variables. As a result, the remaining (reduced) subset of the original 20 predictor variables are highly significant in the regression model, which is considered essential in Experimental Aesthetics (but not so much in Computational Aesthetics). We also compare the 20 SIPs and the 20 VGG19 PCA components with respect to their ability to classify the content of images. For this comparison, we use a Support Vector Machine (SVM) classifier with an RBF kernel and, as in linear regression, the combination of cross-validation and forward feature selection. § RESULTS This section is organized as follows: First, we provide a descriptive statistical analysis of all twelve datasets. Second, we report the Spearman coefficients of correlation between each SIP and the respective aesthetic ratings for each dataset, followed by the standardized β values for multiple regression models based on all 20 SIPs together. We then compare the adjusted R^2 values of these regression models with the adjusted R^2 values of the VGG19 layer features and the combination of both. Finally, we give details on whether and where the SIPs are represented in the layers of the VGG19. §.§ Descriptive Dataset Statistics For each of the twelve datasets, Figure <ref>A shows boxplots of the 20 SIPs in alphabetical order. 
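As a brief methodological aside, the combined forward feature selection and repeated 2-fold cross-validation described in the previous section can be sketched in a few lines of Python (an illustrative reimplementation scored by the mean adjusted R^2; the exact implementation used in the study may differ):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import RepeatedKFold

def mean_adjusted_r2(X, y, n_repeats=100, seed=0):
    # Mean adjusted R^2 over repeated 2-fold cross-validation.
    rkf = RepeatedKFold(n_splits=2, n_repeats=n_repeats, random_state=seed)
    scores = []
    for train, test in rkf.split(X):
        r2 = r2_score(y[test], LinearRegression().fit(X[train], y[train]).predict(X[test]))
        n, p = len(test), X.shape[1]
        scores.append(1.0 - (1.0 - r2) * (n - 1) / (n - p - 1))
    return float(np.mean(scores))

def forward_selection(X, y):
    # Add predictors one at a time; stop when the adjusted R^2 no longer improves.
    selected, remaining, best = [], list(range(X.shape[1])), -np.inf
    while remaining:
        score, j = max((mean_adjusted_r2(X[:, selected + [j]], y), j) for j in remaining)
        if score <= best:
            break
        selected.append(j)
        remaining.remove(j)
        best = score
    return selected, best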
For almost all SIPs, the datasets show roughly similar patterns. An exception is Image Size, which exhibits the largest differences between the datasets. This heterogeneity reflects differences in the design and in technical limitations between the datasets. The pattern of SIPs in some of the datasets is strikingly similar. For example, the SIPs of the EVA and AVA datasets are almost identical, possibly because the EVA dataset is a subset of the AVA images. Moreover, the two datasets of traditional Western paintings, JA and VAPS, also show strong similarities (low Contrast, high Edge Entropy, low Fourier Sigma, and low Sparseness). The third dataset of paintings, WikiEmotions, stands out from the other two painting datasets by its lower Edge Entropy and significantly higher variance in Image Size. As mentioned earlier, the AADB, Flickr-AES and HiddenBeauty datasets are conceptually very similar. With the exception of Aspect Ratio, Edge Entropy and Image Size, the three datasets also show very similar distributions of their SIPs. The dataset that is most dissimilar from the others is the ArtPics dataset. It differs in almost all SIPs. This disparity is likely due to the uniformly white background in all images of the ArtPics dataset (Figure <ref>), which is not seen in the other datasets. Figure <ref>B illustrates the distributions of the scaled aesthetic ratings for each dataset. The distributions are based on the random selection of 500 images per dataset and are thus very similar to the distributions of the respective complete datasets that were reported in the original papers, indicating that the subsets are largely representative (data not shown). A few datasets have a skewed, non-symmetric distribution of the ratings. The EVA, HiddenBeauty, JA(aesth), OASIS, Photo.net, and WikiEmotions datasets show a moderately to strongly left-skewed distribution of ratings, while the ArtPics dataset is the only one with a clearly right-skewed distribution. With the exception of the AROD, Flickr-AES, JA(beauty), and OASIS datasets, most datasets do not cover the full width of their rating scales and show a very peaked distribution of the ratings. §.§ Correlations between SIPs and ratings We calculated the Spearman coefficients ρ for the correlation between the aesthetic ratings and each of the 20 SIPs for each dataset. Figure <ref> shows all correlations and reveals that the SIPs differ widely in the number and strength of the correlations for each dataset. All SIPs show significant correlations with a number of datasets, ranging from three datasets for Lab[b] to ten datasets for Fourier Sigma. Most SIPs show positive correlations with the ratings in some datasets and negative correlations in others. There are only three SIPs (Fourier Slope, Aspect Ratio and Image Size) that are always positively correlated with the ratings. Overall, the correlations are weak to moderate, with a range between ρ = -0.34 (Luminance in WikiEmotions) and ρ = 0.39 (Image Size in WikiEmotions). Focusing on the datasets, a similarly heterogeneous picture emerges. In the AADB dataset, 14 out of 20 SIPs have a significant correlation with the human ratings, while in the AVA dataset only four SIPs correlate significantly. The pattern of correlations is rather heterogeneous even across datasets of similar image type, image origin, rating metric or survey question (Table <ref>). For example, the datasets that contain images of paintings only (JA[aesth], JA[beauty], VAPS, and WikiEmotions) differ with respect to their patterns of correlations.
Likewise, the datasets with Flickr images (AADB, AROD, Flickr-AES, HiddenBeauty) are very similar conceptually, but they do not show a common pattern of correlations either. Thus, the results of one dataset do not necessarily match those of another dataset, even if it is of a similar type. Although the datasets do not show a consistent correlation pattern overall, there are some similarities and differences between datasets containing photos vs. traditional art when doing within- and across-genre comparisons. Therefore, we analyzed the similarity of the correlation patterns in more depth. For the correlation coefficients of each row in Figure <ref>, we determined the Euclidean distance to those of each other row. This gives us a rough estimate of the similarity of the correlation patterns, with smaller distance values indicating greater similarity between the datasets. Figure <ref> shows all pairwise distances. Among four traditional art datasets, the patterns of correlations between the SIPs and the aesthetic ratings exhibit relatively high similarities between the two JA datasets and with the VAPS dataset (blue color in Figure <ref>). By contrast, the fourth dataset (WikiEmotions) displays low similarity in its correlation patterns with all other datasets (brown color in Figure <ref>). In contrast, within the nine photo datasets (independent of their source), there is generally high similarity, except for AADB. Comparing the art and photo datasets, it is evident that dissimilarities prevail. Nonetheless, the Flickr-AES photo dataset shows some similarity with three of the four art datasets (JA[aesth], JA[beauty], and VAPS). Interestingly, despite the ArtPics dataset (which contains AI-generated images) standing out in descriptive terms (as seen in Figure <ref>), it still displays similarities with most of the photo datasets but not so much with the art datasets. §.§ Multiple Linear Regression To eliminate possible redundancies in the predictive models due to cross-correlations between the SIPs (Suppl. Figure 1), we carried out a multiple linear regression analysis and obtained standardized β coefficients. As described in Section <ref>, we chose a combination of cross-validation and forward feature selection to eliminate redundant and poor predictor variables that do not increase the explained variance (adjusted R^2) of the linear models. Figure <ref> shows the standardized β coefficients for the SIPs selected for the final reduced model for each dataset. The colored entries indicate the SIPs that were selected as the best predictor variables during forward feature selection, all of them having a significant (p < 0.05) effect on the respective aesthetic ratings when the other variables were controlled for. There are large differences between the datasets regarding the standardized β values. The number of selected SIPs per dataset varies between three (AROD, AVA, EVA[AVAscore]) and eight (Flick-AES), suggesting that a relatively small number of the original 20 SIPs is sufficient to predict the respective aesthetic ratings in all datasets. Nevertheless, the set of selected SIPs differs between the datasets. Even when looking at conceptually similar datasets (datasets with the same image type, image origin, rating metric or survey question, Table <ref>), no clear overall pattern is apparent. 
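The standardized β coefficients reported here can be obtained by z-scoring the selected predictors and the ratings before fitting; a minimal sketch, assuming scikit-learn (variable names are illustrative):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

def standardized_betas(X_selected: np.ndarray, y: np.ndarray) -> np.ndarray:
    # z-score the selected SIPs and the ratings, then fit ordinary least squares;
    # the fitted coefficients are the standardized beta values.
    Xz = StandardScaler().fit_transform(X_selected)
    yz = (y - y.mean()) / y.std()
    return LinearRegression().fit(Xz, yz).coef_
```

Because both sides are z-scored, a coefficient of 0.2 means that a one-standard-deviation increase in that SIP is associated with an increase of 0.2 standard deviations in the predicted rating, holding the other selected SIPs constant.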
Interestingly, for some datasets (AROD, AVA, EVA[AVAscore], HiddenBeauty, Photo.net), the SIPs capturing color information were not selected for the best models; in these cases, the best models were built with SIPs predominantly based on grayscale versions of the images. With regard to the frequency of the SIPs in the different models, some SIPs (Symmetry-lr , Lab[b], and Sparseness) were never selected and are therefore not shown in Figure <ref>; other SIPs were selected only once (Lab[a], Hue, and Variability). By contrast, Fourier Slope was selected for ten out of fourteen datasets. Almost half of the SIPs occur in the models with both positive and negative standardized β coefficients. For five SIPs (Aspect Ratio, Image Size, Luminance Entropy, Fourier Slope, Symmetry-ud) β coefficients were either all positive or all negative, respectively; these variables were also selected in at least two datasets. Notably, the β coefficients for Fourier Slope always assume positive values. The last column in Figure <ref> lists the adjusted R^2 values for each linear model based on the forward feature selection and cross-validation. The differences between the adjusted R^2 values for the different datasets are large, with R^2 values ranging from 0.04 for the AROD dataset to 0.24 for the Flickr-AES dataset. When we clustered the datasets according to their image origin, image type, survey method, survey question, rating scale or number of Ratings per image (see Table <ref>), we did not observe marked similarities within the clusters with regard to the adjusted R^2 values. For example, the range of the adjusted R^2 values is 0.08 to 0.21 for the datasets containing paintings (JA[aesth] + JA[beauty], VAPS, and WikiEmotions), and 0.04 to 0.24 for the datasets that are derived from Flickr (AADB, AROD, Flickr-AES, and HiddenBeauty). §.§ Predictive accuracy: Handcrafted features versus learned features To directly compare the predictive power of the SIPs and the VGG19 features for aesthetic ratings, we compute adjusted R^2 values also for each layer of the VGG19 for each dataset, as described in Section (<ref>). Results are listed in Table <ref>. Remarkably, the upper layers of VGG19 yield almost the same adjusted R^2 values as the 20 handcrafted SIPs. The only exceptions are the AVA, Flickr-AES and EVA(AVAscores) datasets, where SIPs predict better, and the ArtPics dataset, where layer 13 of VGG19 achieves a better prediction. In general, the adjusted R^2 values tend to increase throughout the layers of the VGG19, reaching a maximum (bold numbers in Table <ref>) between layer 11 and layer 15 and decreasing again afterwards. This pattern is surprisingly consistent for all datasets, regardless of their image type, image origin, survey question, survey method, and rating scale. Since the SIPs and the feature vectors of the VGG19 can predict the aesthetic ratings with similar accuracy, the question is whether they do so by using the same image information. We addressed this question by comparing the regression models using the 20 SIPs, the 20 PCA components of VGG19 layer 13, and a combination of both. Note that the same feature selection procedure (forward feature selection and cross-validation) was used in all three cases. Models based on the SIPs only had an average of 4.6 variables (Figure <ref>). The regression models that consisted of both SIPs and VGG19 PCA components had an average of 8.1 variables. 
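How the layer-wise VGG19 features can be prepared for this comparison is sketched below, assuming torchvision and scikit-learn; the global average pooling used to obtain one vector per image, and all names, are illustrative choices rather than the study's exact pipeline:

```python
import numpy as np
import torch
from sklearn.decomposition import PCA
from torchvision import models

def vgg19_layer_vectors(images: torch.Tensor):
    # images: (N, 3, 224, 224) float tensor, ImageNet-normalized.
    # Returns one feature matrix per convolutional layer, obtained here by
    # global average pooling of the feature maps (an illustrative reduction).
    vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
    vectors, x = [], images
    with torch.no_grad():
        for module in vgg:
            x = module(x)
            if isinstance(module, torch.nn.Conv2d):
                vectors.append(x.mean(dim=(2, 3)).cpu().numpy())
    return vectors  # 16 matrices of shape (N, channels), one per conv layer

def layer_pca_components(layer_matrix: np.ndarray, n_components: int = 20) -> np.ndarray:
    # Reduce each layer's feature matrix to 20 PCA components, as in the text.
    return PCA(n_components=n_components).fit_transform(layer_matrix)
```

Each of the resulting 20-component matrices can then be passed through the same forward-selection/cross-validation routine as the SIPs to obtain the per-layer adjusted R^2 values.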
Table <ref> compares the adjusted R^2 values achieved with the values of either the SIPs or the VGG19 features alone and the combined regression models. On average, SIPs explain 12.14% of the variance (adjusted R^2) in aesthetic ratings. VGG19 layer 13 explains on average 11.14%. When combining the SIPs and VGG19 layer 13, the average adjusted R^2 increases to 17.07% explained variance. The effect is not purely additive, so there are some shared features, but the increase clearly suggests that the SIPs and the VGG19 layer 13 features also explain some unique variance. To substantiate this assumption, we next investigated whether and where SIPs are represented in the layers of VGG19. §.§ Mapping SIPs onto VGG19 feature layers To study the relation between the SIPs and the VGG19 feature layers, we used the 20 PCA components of the respective convolutional layer of VGG19 to predict the individual SIPs by using multiple linear regression. Again, we applied forward feature selection and cross-validation, as described in Section <ref>, and determined the adjusted R^2 value of the best model. To describe the results, we split the SIPs into three groups that each share a similar pattern of predictability by the different layers of the VGG19 (Figure <ref>A-C). Group A includes SIPs that are best predicted by the first layers of VGG19. It includes most of the SIPs that capture low-level color information, such as Hue or Saturation. In this group (Figure <ref>A), all SIPs show a marked drop of the adjusted R^2 values in the fifth layer. The adjusted R^2 values decrease steadily with increasing layer number and finally reach their minimum at the highest layer. Figure <ref>B shows a second group of SIPs (Group B), which consists of the three PHOG measures (Complexity, Self-Similarity and Anisotropy) as well as Sparseness and Contrast. These SIPs start with relatively high adjusted R^2 values in the first layer, then increase slightly to a plateau. Around layer 13, the values start to drop sharply and reach their minimum at the highest convolutional layer (L16). Group C consists of the remaining SIPs and follows an inverted u-shape, starting with very low adjusted R^2 values in the first layers, then rising steeply to reach their maximum in the middle layers (L3-L13; Figure <ref>C). Towards the last convolutional layer (L16), the adjusted R^2 values decrease strongly again. Nevertheless, most of these SIPs reach their maximum just before layer 13. Interestingly, Group C contains almost all of the best predictor variables such as Symmetry-ud, Aspect Ratio, Fourier Slope and Image Size (Table <ref>). In contrast to the SIPs of Group A and B, the SIPs of Group C do not reach adjusted R^2 values above 0.60 in any layer and are therefore comparatively poorly represented in the VGG19 layers. At least for Image Size this is not surprising, since all images are scaled to a square resolution of 244×244 pixel as input for the VGG19. From these results, we conclude that the majority of the SIPs can be predicted well by the VGG19 layers, although the prediction accuracy varies strongly between the layers. Furthermore, the VGG19 layers that best predict the aesthetic ratings (around layer 13; Table <ref>) differ from the layers that best represent the SIPs (Figure <ref>). Together, these results suggest that the good predictive power of the higher VGG19 layers is not based entirely on the image information reflected by the SIPs. 
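The content-classification comparison, set up in the Methods and reported next, replaces the regression by an RBF-kernel SVM on the same 20 components per layer; a minimal sketch, assuming scikit-learn (forward feature selection is omitted here for brevity):

```python
import numpy as np
from sklearn.model_selection import RepeatedKFold
from sklearn.svm import SVC

def classification_accuracy(components, labels, n_repeats=100, seed=0):
    # Mean accuracy of an RBF-kernel SVM over 100 repetitions of 2-fold
    # cross-validation, mirroring the regression protocol used above.
    cv = RepeatedKFold(n_splits=2, n_repeats=n_repeats, random_state=seed)
    accuracies = []
    for train, test in cv.split(components):
        clf = SVC(kernel="rbf").fit(components[train], labels[train])
        accuracies.append(clf.score(components[test], labels[test]))
    return float(np.mean(accuracies))
```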
As these higher layers were originally trained for object recognition, we additionally analyzed the ability of the VGG19 to classify content annotations for two selected datasets. In Figure <ref>D, we show the classification accuracy of each VGG19 layer for the content annotations of two datasets: the JA dataset, which is an example of a dataset of traditional art, and the HiddenBeauty dataset, which is an example of a dataset of photographs. The classification values are based on the same 20 PCA components for each layer as in the regression analysis. Again, 100-fold repeated 2-fold cross-validation was applied. Not surprisingly, the classification accuracy increases throughout the layers and reaches a maximum at the highest layer for both datasets. For comparison, the stippled lines in Figure <ref>D indicate the classification accuracy for the content annotations based on the 20 SIPs, which is much lower than that of the VGG19 layers. § DISCUSSION We compare twelve different image datasets of artworks or photographs. Human participants rated the images in previous studies according to a variety of aesthetic criteria, such as beauty, liking, and aesthetic quality (Table <ref>). In the present study, we ask how well these ratings can be predicted based on statistical image properties (SIPs) that have previously been used in visual aesthetic research (for details, see Materials & Methods section). In terms of predictive power, we compare the SIPs to the features of a convolutional neural network (CNN) that has been trained to recognize visual objects <cit.>. At first sight, our results suggest that the image datasets differ widely not only in their pictorial content, artistic style, method of production and aesthetic annotations (Table <ref>), but also in how well single or multiple SIPs or CNN layers can predict the aesthetic ratings (Figures <ref>, <ref>; Table <ref>). Despite this large variability, a closer look reveals some common patterns and consistencies, which will be discussed in the following sections. §.§ SIPs predict aesthetic ratings As can be expected, the median values for the different SIPs resemble each other in some datasets more than in others (Figure <ref>). Similarities seem to coincide with specific conceptual correspondences between the datasets. For example, the AVA and EVA datasets share a common origin (DPChallenge), and the JenAesthetics and VAPS datasets both represent traditional Western paintings. These similarities suggest that the SIPs capture at least some of the image characteristics of the datasets at the descriptive level. To examine these similarities in more detail, we correlated the 20 SIPs with the aesthetic ratings for each dataset. We started the analysis by carrying out single regression analyses and calculating Spearman coefficients ρ for the relation between all SIPs and all datasets (Figure <ref>). The number of significant correlations varies to a large degree across the SIPs and the datasets, even for related datasets. Nevertheless, we had the impression that at least some of the datasets shared similar patterns of correlations. To quantify the (dis-)similarity of the SIP patterns in each pair of datasets, we calculated Euclidean distances between the ρ values for all possible pairs of datasets (Figure <ref>). For the painting datasets, the obtained results indicate similarities in the correlation patterns between the JenAesthetics and VAPS datasets, while the third dataset (WikiEmotions) is more dissimilar.
Thus, even within the painting datasets, the similarity of the correlation patterns is not entirely consistent. Whether these differences are possibly due to the higher proportion of abstract images in the WikiEmotions dataset remains to be investigated. For the nine datasets of photographs, the correlation patterns are more similar to each other in general (with the exception of the AADB dataset) but dissimilar to the painting datasets. Thus, paintings and photographs tend to represent distinct categories of images with respect to how image properties mediate aesthetic ratings by human beholders. In addition, some of the 20 SIPs correlated with each other to varying degrees (Suppl. Figure 1), suggesting that simple regression models with all SIPs can be partially redundant. We therefore carried out multiple linear regression with forward feature selection and cross-validation to reduce the number of predictive SIPs and their redundancies. We also studied the predictive power of the reduced models by calculating adjusted coefficients of determination (R^2 values). Again, results for the different datasets vary extensively with respect to the explained variance (between 4% and 24%), and the number and effect size (standardized β values) of the predictive SIPs in the reduced models (Figure <ref>). However, we noted the following consistencies. As expected, the SIPs that are good predictors in the multiple linear regression models (Figure <ref>) generally also match the SIPs that correlate directly with the ratings for a given dataset (Figure <ref>). A striking example is Fourier Slope, which is a predictive variable in the reduced models for ten out of fourteen datasets. Moreover, the standardized β coefficients for Fourier Slope (Figure <ref>) as well as the Spearman coefficients ρ (Figure <ref>) are all positive. Fourier Slope is therefore a highly consistent predictor in a majority of datasets of both paintings and photographs. At the other extreme are SIPs that are inconsistent predictors because they contributed only once or not at all to the reduced models (Hue, Lab[a], Lab[b], Sparseness, Symmetry-lr, and Variability). However, there are two caveats. First, some SIPs are not included in the respective multiple linear models, although they show a relatively strong correlation with the respective ratings (for example, Fourier Sigma in the AVA dataset). This finding can be attributed to multicollinearity between individual SIPs. If two SIPs are highly correlated, they are not likely to be both included in the same model because the selection procedure penalizes models with redundant variables. Second, some SIPs are a significant component of the multiple linear models, although they do not show significant correlations with the ratings (e.g., Saturation and Luminance Entropy in the Flickr-AES dataset). This is possible because correlations and standardized β values are identical only for models that comprise one single predictor variable. Moreover, Friedman and Wall (2005) have shown that variables that do not correlate with the target variable can still explain some of the variance left by the other predictor variables in a multiple linear model. In summary, both the correlation coefficients ρ and the standardized β values show notable differences between the individual datasets with respect to the SIPs. Thus, it is difficult to give a short answer to the question of which SIPs are the best predictors for the aesthetic ratings.
With this uncertainty in mind, we classified the SIPs into the following four groups (Table <ref>): SIPs in Group 1 show significant correlations for only a few datasets, and thus can be considered inconsistent predictor variables in our image datasets. These SIPs are Edge Entropy, Lab(b), and Symmetry-lr. SIPs in Group 2 show significant correlations for single linear regression models for most datasets, but are almost never selected for the multiple linear models. This suggests that they have some predictive power, but other selected SIPs already cover the relevant information (see Suppl. Figure 1 for single linear correlations between the SIPs). These SIPs are Sparseness, Variability, Lab(a), Hue, and Color Entropy. Group 3 consists of SIPs that have significant correlations and are selected frequently for the multiple linear models, but correlation coefficients ρ can be positive or negative, depending on the dataset. These SIPs are Luminance, Saturation, Contrast, Fourier Sigma, Complexity, Anisotropy, Symmetry-ud and Luminance Entropy. Group 4 comprises the strongest predictor variables. They have many significant correlations, are selected for the multiple linear models and show non-alternating (exclusively positive or negative) correlation coefficients for all relevant datasets. These SIPs are Fourier Slope, Aspect Ratio and Image Size. While we consider it out of the scope of the present work to delve into a detailed discussion on possible reasons why each variable belongs to its respective group, we will offer possible explanations for the SIPs in Group 4. Aesthetic ratings increase as Image Size increases. Image Size, in turn, directly relates to the resolution of an image. Image resolution relates to image quality ratings in general, although image content and size can modify this relation <cit.>. Similarly, Aspect Ratio, which represents the width-to-height ratio of an image, exhibits a positive correlation with aesthetic ratings, with the highest mean Aspect Ratio observed in the AROD dataset (1.38) and the lowest in the VAPS dataset (1.06). The Aspect Ratio of most display devices is around 1.77 (16:9), which indicates that most images fall short of this optimal ratio. Consistently, both the Spearman correlations ρ and the standardized β values show that the aesthetic ratings increase with higher aspect ratios, as the images can fill out the display more completely. Regarding Fourier Slope, previous research <cit.> has shown that slopes between -2 and -3 are preferred by human observers in random-phase images, while ratings below and above this range decrease (inverted U-shape curve). This optimal range corresponds to the range of slope values typically found in large sets of artworks <cit.>. In the present study, the average Fourier Slope of all datasets analyzed was -3.18, with Flickr-AES having the largest mean Fourier Slope (-3.0) and HiddenBeauty having the smallest (-3.26). However, because 93% of all images had a Fourier Slope below -2.5, our images cover mostly the ascending part of the inverted U-shaped curve (slope values below -2.5). The overall positive effect of Fourier Slope is thus compatible with these earlier results. The SIPs that represent color information predominantly (Hue, Lab[b], Color Entropy, and Lab[a]) are relatively weak predictor variables and thus belong to Groups 1 and 2, with the exception of Saturation (Group 3). 
Color information is represented also by the SIPs that are based on the more global CNN features (Symmetry-ud, Symmetry-lr, Variability, and Richness) because about half of these features react in a color-specific way at the lower CNN layers <cit.>. It is out of question that color plays an important role in the aesthetic ratings, especially for paintings, but also for color photographs. Accordingly, only for five out of fourteen different aesthetic ratings, the linear models do not contain SIPs representing color. Because of the high correlations between some of the color SIPs (for example, between Color Entropy and Saturation; Suppl. Figure 1), it is difficult to pin down individual color features that mediate the aesthetic ratings in a consistent and specific way. In future studies, SIPs could be identified that more specifically cover the color information that is relevant for aesthetic ratings. We propose that SIPs that capture color information more globally, i.e. that are not based on pixel averages, may be better suited to determine the relationship between colors and aesthetic ratings. §.§ Partial overlap between SIPs and VGG19 features Previous studies demonstrated that CNNs trained for object recognition can distinguish artworks from non-art images <cit.> and predict aesthetic ratings without additional fine-tuning <cit.>. In the present study, we replicate these findings using a range of different image datasets. Specifically, we show that the upper layers of the VGG19 network can predict aesthetic ratings with almost the same accuracy as handcrafted SIPs. Moreover, we find that SIPs and VGG19 features only partially overlap in their ability to predict these ratings, suggesting that SIPs and VGG19 features each captures some unique information that is not captured by the other (Table <ref>). Our results indicate that the higher layers of VGG19 are particularly effective in classifying image content, with the 16th layer achieving the highest classification rate (Figure <ref>D). Given this result, it is not surprising that the content information in these layers also predicts the aesthetic ratings. However, we also find that the best predictions are achieved with the 13th layer, rather than the 16th layer. We offer the following hypothesis to explain this finding: by around the 13th layer, the content classification rate is already quite high, while visual image features, which are also primarily captured by the SIPs, are still strongly represented. In contrast, these features are only weakly present in the 16th layer. As a result, the highest adjusted R^2 values are obtained around the 13th VGG19 layer, where both visual image features and content information are simultaneously represented. Taken together, our findings suggest that it is not just the content of an image, but also how that content is presented, that contributes to its aesthetic appeal. §.§ Conclusions and implications for aesthetics research In summary, we analyzed twelve different datasets of paintings and photographs, comprising a total of fourteen aesthetic ratings. These datasets varied in their number, origin, and type of images, the methodology used to collect aesthetic ratings, the type of aesthetic rating, the rating scale employed, and the number of ratings per image. Our results show that the datasets also differ widely in the pattern of image properties that predict their aesthetic ratings. 
By quantifying the similarities of the correlation patterns, we nevertheless demonstrate that the art datasets and photography datasets, respectively, are more similar to datasets of their own kind (Figure <ref>). Moreover, some SIPs (Figure <ref>; Groups 3 and 4 in Table <ref>) are more consistent predictors across datasets than others (Groups 1 and 2 in Table <ref>). Despite these overall consistencies, we find that even datasets that are conceptually almost identical, differ in the precise set of SIPs that best predict ratings and in the overall adjusted R^2 values. Our results give rise to the general question of how reliable and reproducible aesthetic ratings are when they were obtained on the basis of a single dataset. They highlight the possible effect of details in the collection of rating data, for example, in the selection of the stimuli, the demographics of the participants, the rating terms, the SIPs analyzed etc. Our results suggest that even small differences in the design of the experiment can have a significant impact on the results. This sensitivity places limits on the generalizability of research findings when they are based on the analysis of a single dataset only. Moreover, our selection of fourteen datasets for the present study does not represent all published image datasets with aesthetic ratings in this highly fragmented research field. We restricted our analysis to linear regression for better comparability of the predictive variables between datasets. More flexible, non-linear statistical models may possibly lead to modifications of these results. With these caveats in mind, what implications do our results have on the study design for aesthetic research that aims to predict subjective ratings based on objective image properties? We believe that our findings should not be an incentive to restrict studies to a single datasets, but rather to include as many datasets and as many study designs as possible in the analysis. Findings that generalize well to a set of different studies can then be regarded as robust. We therefore encourage researchers to make use of the diverse existing datasets in the aesthetic community. They provide a great opportunity to put the robustness of effects to the test. § AUTHOR CONTRIBUTIONS RB designed the study, retrieved and analyzed the data, wrote the code and drafted the manuscript. CR coordinated the study. CR and KT contributed to the conception of the study and helped draft the manuscript. All authors have approved the final article. § DISCLOSURE/CONFLICT-OF-INTEREST STATEMENT The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. § FUNDING This work was supported by funds from the Institute of Anatomy I, Jena University Hospital, University of Jena, Germany. § SUPPLEMENTARY MATERIAL
http://arxiv.org/abs/2307.01492v1
20230704055554
FB-OCC: 3D Occupancy Prediction based on Forward-Backward View Transformation
[ "Zhiqi Li", "Zhiding Yu", "David Austin", "Mingsheng Fang", "Shiyi Lan", "Jan Kautz", "Jose M. Alvarez" ]
cs.CV
[ "cs.CV", "cs.RO" ]
FB-OCC: 3D Occupancy Prediction based on Forward-Backward View Transformation Zhiqi Li^1,2, Zhiding Yu^1, David Austin^1, Mingsheng Fang^2, Shiyi Lan^1, Jan Kautz^1, Jose M. Alvarez^1 ^1NVIDIA    ^2Nanjing University This technical report summarizes the winning solution for the 3D Occupancy Prediction Challenge, which was held in conjunction with the CVPR 2023 Workshop on End-to-End Autonomous Driving and the CVPR 2023 Workshop on Vision-Centric Autonomous Driving. Our proposed solution FB-OCC builds upon FB-BEV, a cutting-edge camera-based bird's-eye view perception design using forward-backward projection. On top of FB-BEV, we further study novel designs and optimization tailored to the 3D occupancy prediction task, including joint depth-semantic pre-training, joint voxel-BEV representation, model scaling up, and effective post-processing strategies. These designs and optimizations result in a state-of-the-art mIoU score of 54.19% on the nuScenes dataset, ranking 1st place in the challenge track. Code and models will be released at: <https://github.com/NVlabs/FB-BEV>. § INTRODUCTION 3D occupancy prediction, which refers to predicting the occupancy status and semantic class of every voxel in a 3D voxel space, is an important task in autonomous vehicle (AV) perception. Predicting 3D occupancy is important to the development of safe and robust self-driving systems by providing rich information to the planning stack <cit.>. The challenge track requires participants to develop occupancy prediction algorithms that solely utilize camera input during inference. In addition, the challenge permits the use of open-source datasets and models, which facilitates the exploration of data-driven algorithms and large-scale models. The challenge has a significant impact by providing a playground for the latest state-of-the-art 3D occupancy prediction algorithms in real-world scenarios. In the context of the challenge, besides our efforts in model structure design, we emphasize the importance of both model scale and model pre-training techniques. This focus stems from several motivations. First, there have been a number of bird's-eye view (BEV) perception solutions with state-of-the-art performance <cit.>. These solutions can be adapted to 3D occupancy prediction with certain modifications. However, there is still limited knowledge regarding the impact of large-scale models and pre-training on the occupancy prediction task. As will be reported in this work, large-scale models and pre-training techniques stand as crucial factors contributing to our success. § METHOD In this section, we will present our solution in detail with the following aspects covered. Section <ref> will elaborate on our model design. Section <ref> will discuss the efforts in model pre-training and scaling up. Finally, Section <ref> will outline our post-processing strategies. §.§ Model design Here, we give an introduction to FB-BEV. It is known that view transformation is a central module of camera-based 3D perception. In the literature, this module is based on two dominant view transformation strategies: forward projection (represented by Lift-Splat-Shoot <cit.>) and backward projection (represented by BEVFormer <cit.>).
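To make the distinction concrete, the following is a rough PyTorch sketch of the forward-projection idea in the spirit of Lift-Splat-Shoot; the shapes, the precomputed voxel indices, and the scatter step are illustrative assumptions and not the released FB-BEV/FB-OCC implementation:

```python
import torch

def forward_project(img_feat, depth_prob, voxel_index, num_voxels):
    """Push image features into a flattened 3D voxel grid along predicted depth.

    img_feat:    (C, H, W) per-pixel image features
    depth_prob:  (D, H, W) softmax depth distribution per pixel
    voxel_index: (D, H, W) long tensor; precomputed flat voxel id of each
                 (depth bin, pixel) sample, from camera intrinsics/extrinsics
    """
    C = img_feat.shape[0]
    # Outer product: every depth hypothesis receives a share of the pixel feature.
    lifted = depth_prob.unsqueeze(1) * img_feat.unsqueeze(0)   # (D, C, H, W)
    lifted = lifted.permute(0, 2, 3, 1).reshape(-1, C)         # (D*H*W, C)
    # Splat: sum all lifted features that fall into the same voxel.
    voxels = torch.zeros(num_voxels, C, dtype=img_feat.dtype)
    voxels.index_add_(0, voxel_index.reshape(-1), lifted)
    return voxels  # (num_voxels, C), to be reshaped to (X, Y, Z, C)
```

Backward projection works in the opposite direction: voxel or BEV queries are projected into the images and sample (or attend to) the corresponding image features, as in BEVFormer.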
FB-BEV adopts a unified design that leverages both methods, promoting the benefits from each method with improved perception results while overcoming their limitations. In the case of FB-OCC, we use forward projection to generate the initial 3D voxel representation. We then condense the 3D voxel representations into a flattened BEV feature map. The BEV feature map is treated as queries within the BEV space and attends the image encoder features to acquire dense geometry information. The fusion features of the 3D voxel representation and the optimized BEV representations are then fed into the subsequent task head. In the forward projection module, we adhere to the principles of Lift-Splat-Shoot (LSS) <cit.> to account for the uncertainty in the depth estimation of each pixel. This allows us to project the image features into the 3D space based on their corresponding depth values. In contrast to LSS, which models BEV features, we directly model 3D voxel representations to capture more detailed information in the 3D space. Additionally, we adopt BEVDepth <cit.> to utilize point clouds in generating accurate depth ground truth, which helps supervise the depth prediction of our model for improved accuracy. LSS tends to produce relatively sparse 3D representations. To tackle this issue, we incorporate a backward projection method to optimize these sparse 3D representations. Considering the computational burden, we employ BEV representation instead of 3D voxel representation at this stage. The backward projection method draws inspiration from BEVFormer <cit.>. However, unlike BEVFormer, which employs randomly initialized parameters as BEV queries, we compress the obtained 3D voxel representation into a BEV representation, thereby incorporating stronger semantic priors. Furthermore, our backward projection method leverages the depth distribution during the projection phase, enabling more precise modeling of projection relationships. Following the acquisition of the 3D voxel representations and optimized BEV representation, we combine them through the process of expanding the BEV features, resulting in the final 3D voxel representations. The voxel encoder and the occupancy prediction head, as depicted in Figure <ref> and Figure <ref>, are outlined below. To train the model, we use a distance-aware Focal loss function L_fl inspired by M2BEV <cit.>, Dice loss L_dl, affinity loss L_scal^geo and L_scal^sem from MonoScene <cit.>, lovasz-softmax loss L_ls from OpenOccupancy <cit.>. In addition, we also need a depth supervision loss L_d and a 2D semantic loss L_s, which will be introduced in the next section. §.§ Scaling up and pre-training Scaling the model size has traditionally been a convenient approach to improving model accuracy. However, in the field of 3D vision-only perception, researchers have discovered that employing a more powerful 2D backbone often leads to overfitting <cit.>. For instance, on the nuScenes 3D object detection task, the largest backbone, such as VIT-L <cit.> with approximately 300M parameters, and commonly used backbones like ConvNext-B <cit.> and VoVNet-99 <cit.> with around 100M parameters, tend to encounter this issue. To address this challenge, we explore the utilization of the 1B-parameter backbone, InternImage-H <cit.>, for multi-camera 3D perception tasks. However, directly applying this backbone would result in severe overfitting due to the limited number of samples available for training, specifically the 40K samples in the nuScenes dataset <cit.>. 
To overcome this limitation, we leverage the opportunity provided by this competition, which allows participants to utilize additional public data. By augmenting our data resources, we can train our large-scale model more effectively. Building upon the open-source InternImage-H checkpoint, we conduct model training on the Object365 dataset <cit.>, which is a vast 2D object detection dataset comprising 2 million images. This pre-training on large-scale 2D detection tasks enhances the model's semantic perception capabilities. However, there still exists a certain domain gap when applying the pre-trained model to downstream 3D perception tasks. Therefore, we further perform targeted pre-training on the model specifically for 3D perception tasks. An effective approach for pre-training is to enhance the model's geometric awareness through depth estimation tasks. Consequently, we conduct extensive pre-training on the nuScenes dataset, primarily focusing on depth estimation. It is worth noting that depth pre-training lacks semantic-level supervision. To mitigate the risk of the model becoming excessively biased towards depth information, potentially leading to the loss of semantic priors (especially given the large-scale nature of the model, which is prone to overfitting), we simultaneously aim to predict the 2D semantic segmentation labels alongside the depth prediction task, as shown in Figure <ref>. However, nuScenes does not provide semantic segmentation labels for 2D images. To address this issue, we employ the popular Segment Anything Model (SAM) <cit.> for automatic labeling. For thing categories with bounding box annotations provided by nuScenes, we utilize box prompts to generate high-quality semantic masks for each object. Unfortunately, for stuff categories such as road surfaces or buildings, bounding box annotations are not available. Nonetheless, nuScenes offers corresponding point cloud semantic segmentation labels for these categories. To generate semantic masks for the stuff categories, we project LiDAR points belonging to these categories onto the image. For each category, we randomly select three points as point prompts to generate the corresponding semantic masks, which yields satisfactory mask quality. With the 2D image semantic mask labels and the ground truth depth maps, we train the model using a joint depth estimation task and semantic segmentation task. This pre-training task closely aligns with the final occupancy prediction task, enabling the direct generation of 3D occupancy results using depth values and semantic labels. The pre-trained model serves as an improved starting point for the subsequent training of the occupancy prediction task. §.§ Post-processing §.§.§ Test-time augmentation In test-time augmentation, the input image is horizontally flipped and the 3D space is horizontally and vertically flipped during inference. This gives a total of eight prediction results for each frame. The final prediction is obtained by computing the mean of all the results. We additionally observe that occupancy prediction accuracy significantly degrades with distance. Temporal test-time augmentation (TTA) is thus used to mitigate this issue. For static voxels, we leverage the predicted voxels that are close to the ego car in previous frames to replace the voxels co-located in the current frame. §.§.§ Ensemble We perform a weighted sum of all independent results, where the weight of each voxel is determined by two multiplying factors. 
The first factor is the model weight, related to the overall mIoU of each result. The second factor is the specific category weight, related to the IoU of this voxel's category. We use NNI <cit.> to search the different weight values automatically. § EXPERIMENTS §.§ Datasets and metrics Dataset. The occupancy dataset is built based on the existing nuScenes dataset <cit.>. For each frame, they provide occupancy annotations within the range of [-40m, -40m, -1m, 40m, 40m, 5.4m], and the resolution of each voxel is 0.4m. The dataset contains 18 classes, where one indicates a free voxel that is occupied by nothing. The dataset also provides the camera mask to indicate whether the voxel is visible from any camera. Metrics. For this challenge, we mainly evaluate our models based on mIoU, which can be formulated as follows: mIoU = (1/C)∑_c=1^C TP_c/(TP_c+FP_c+FN_c), where TP_c, FP_c, and FN_c correspond to the number of true positive, false positive, and false negative predictions for class c, and C is the total number of classes. §.§ Implementation details Training Strategies. For training large-scale models, we use a batch size of 32 on 32 NVIDIA A100 GPUs, AdamW optimizer with a learning rate of 1× 10^-4 and a weight-decay of 0.05. The learning rate of the backbone is 10 times smaller. We train our models around 50 epochs for occupancy tasks. The temporal windows used by every model are determined based on the GPU memory. For the InternImage-H backbone, we use 6 previous frames. When GPU memory is sufficient, we use up to 16 historical frames. Following SOLOFusion <cit.>, we use online temporal sequences during training, which is much more efficient. Network Details. For large-scale models, the image features from the backbone are downsampled with a stride of 16. The input image scale is 640×1600. We use commonly used data augmentation strategies, including flips and rotations in both image and 3D space. The depth net predicts 80 discrete depth categories covering the depth from 2m to 42m. The resolution of generated 3D voxel features is 200×200×16. The backward projection module uses 1 layer since the input BEV queries already contain meaningful information. During the training phase, we ignore the voxels that are invisible from the cameras. §.§ Ablation study Training large-scale models requires huge computing resources. In our exploration, we first verify the effects of different models at a smaller scale. In this setting, the input scale is 256×704, the resolution is 100×100×8, and the image backbone is ResNet-50 <cit.>. We list the milestones of our exploration in Table <ref>. Version A is our vanilla baseline. In Version B, we use depth supervision following BEVDepth. For Version C, we ignore the invisible voxels from cameras during the training phase. Version D fixes several major bugs in Version C, especially regarding the abnormal IoU on the other category. For Version E, we use temporal information from the previous 16 frames. We leverage the joint depth and semantic pre-training in Version F. For Version G, we optimize the loss function by adding the Dice loss and using 3D transformation to align voxel features from different timesteps. Version H reports the test-time augmentation results of Version G. §.§ Scaling up After exploring the basic design of FB-OCC, we scale up the model size by using larger backbones and image input size, as shown in Table <ref>.
Compared to Version H, Version I uses a VoVNet-99 backbone and other modifications, including a 960×1760 image input and a voxel resolution of 0.4m. Compared to Version I, Version J leverages a ViT-L backbone and ViT-Adapter <cit.>. For our most powerful Version K, we scale up the model to over 1 billion parameters with the InternImage-H backbone. §.§ Post-processing In our final submission, we adopt our ensemble strategy with seven models for the best accuracy. The main difference between the models is the use of different backbones. By combining the results of all the models through ensemble techniques, we were able to achieve our best result on the test set with an mIoU score of 54.19%. § CONCLUSION In this report, we describe our winning solution for the 3D Occupancy Prediction Challenge in conjunction with CVPR 2023. Our solution demonstrates state-of-the-art model design that yields excellent BEV perception. It also shows the effectiveness of visual foundation models and large-scale pre-training in 3D occupancy prediction.
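For completeness, a small sketch of the evaluation metric and the per-voxel weighted ensemble described above, using NumPy; the class-index convention, the camera-mask handling, and the weight values are illustrative assumptions (the actual weights were searched with NNI):

```python
import numpy as np

def miou(pred, gt, num_semantic_classes=17):
    # Mean IoU over the semantic classes, following the formula above; assumes
    # labels 0..16 are semantic and the free class is excluded (an illustrative
    # convention).  The challenge additionally restricts evaluation to
    # camera-visible voxels; that mask is omitted here.
    ious = []
    for c in range(num_semantic_classes):
        tp = np.sum((pred == c) & (gt == c))
        fp = np.sum((pred == c) & (gt != c))
        fn = np.sum((pred != c) & (gt == c))
        if tp + fp + fn > 0:   # classes absent from both are skipped
            ious.append(tp / (tp + fp + fn))
    return float(np.mean(ious))

def weighted_ensemble(probs_list, model_weights, class_weights):
    # probs_list: one (X, Y, Z, num_classes) array of soft scores per model.
    # Every voxel score is scaled by a per-model weight and a per-class weight
    # before the argmax, mirroring the two multiplying factors described above.
    total = sum(w * probs * class_weights
                for w, probs in zip(model_weights, probs_list))
    return total.argmax(axis=-1)
```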
http://arxiv.org/abs/2307.02406v1
20230705162810
Mixing of the symmetric beta-binomial splitting process on arbitrary graphs
[ "Richard Pymar", "Nicolás Rivera" ]
math.PR
[ "math.PR", "60J27, 60K35, 82C22, 37A25" ]
We study the mixing time of the symmetric beta-binomial splitting process on finite weighted connected graphs G=(V,E,{r_e}_e∈ E) with vertex set V, edge set E and positive edge-weights r_e>0 for e∈ E. This is an interacting particle system with a fixed number of particles that updates through vertex-pairwise interactions which redistribute particles. We show that the mixing time of this process can be upper-bounded in terms of the maximal expected meeting time of two independent random walks on G. Our techniques involve using a process similar to the chameleon process invented by <cit.> to bound the mixing time of the exclusion process. § INTRODUCTION In the field of econophysics, interacting particle systems have been widely used to analyse the dynamics of wealth held by agents within a network, providing insights into the distribution and flow of money within the system <cit.>. These are typically characterised by pairwise interactions between agents (represented by vertices in a graph) resulting in a redistribution of the wealth they hold (represented by particles on the vertices). One class of such systems which has found applications in econophysics is that of reshuffling models, in which each agent in an interacting pair receives a random fraction of the total wealth they hold. In the uniform reshuffling model introduced in <cit.> and discussed rigorously in <cit.>, the random fraction is chosen uniformly. In this paper, we introduce and analyse the mixing time of the symmetric beta-binomial splitting process: a continuous-time interacting particle system on a finite connected (weighted) graph with a conservation property. Informally, the process updates by choosing randomly an edge from the graph, and redistributing the particles on the vertices of the edge according to a beta-binomial distribution. This process is a generalisation of the uniform reshuffling model, is a discrete-space version of a Gibbs sampler considered in <cit.> and is related to the binomial splitting process of <cit.> (sometimes called the binomial reshuffling model <cit.>). Our focus is to provide general upper bounds on the mixing time of the symmetric beta-binomial splitting process on any connected graph. We achieve this through use of a chameleon process, a process which so far has only been used to bound the mixing time of exclusion processes <cit.>. We demonstrate how a chameleon process can be used more generally to understand how systems of interacting particles mix; in particular we establish a connection between the maximal expected meeting time of two independent random walks and the mixing time of the beta-binomial splitting process. Despite giving the same name to this auxiliary process, our version of the chameleon process is substantially different from those used previously; in particular it is engineered to deal with multiple particles occupying a single vertex (an event which cannot happen in the exclusion process). As is typical with proofs that use a chameleon process, the results we obtain are not optimal in the sense that the multiplicative constants appearing in the statements are not optimized. On the other hand, the strength of this approach is in allowing us to prove results for arbitrary graphs with arbitrary edge weights.
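To make the informal description above concrete, a minimal simulation sketch of a single update, assuming SciPy (the data structures and names are illustrative):

```python
import numpy as np
from scipy.stats import betabinom

rng = np.random.default_rng(0)

def bbsp_step(config, edges, rates, s):
    """One update of the symmetric beta-binomial splitting process.

    config: 1-D integer array, config[v] = number of particles on vertex v
    edges:  list of (v, w) index pairs
    rates:  positive edge weights r_e
    s:      beta-binomial parameter (> 0)
    """
    probs = np.asarray(rates, dtype=float)
    probs /= probs.sum()
    v, w = edges[rng.choice(len(edges), p=probs)]      # edge e rings w.p. r_e / sum_e r_e
    total = config[v] + config[w]
    x = betabinom.rvs(total, s, s, random_state=rng)   # X ~ BetaBin(total, s, s)
    config[v], config[w] = x, total - x                # redistribute across the edge
    return config
```

In the continuous-time process these updates arrive at the points of a rate-1 Poisson process; the exponential clock is omitted here because it does not affect which update is applied.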
§.§ Model and main result The m-particle symmetric beta-binomial splitting process with parameter s>0 on a finite connected graph G=(V,E,(r_e)_e∈ E) (with vertex set V, edge set E and (r_e)_e∈ E a collection of positive edge weights) is the continuous-time Markov process (ξ_t)_t≥0 on state space Ω_G,m:={ξ∈N_0^V: ∑_v∈ Vξ(v)=m}, with infinitesimal generator ℒ^BB(G,s,m)f=∑_{v,w}∈ Er_{v,w}/∑_e∈ Er_e(𝒫_{v,w}^BB(G,s,m)-1)f, f:Ω_G,m→R, where 𝒫_{v,w}^BB(G,s,m)f(ξ):=E_ℒ[ξ'_{v,w}][f], and ξ'_{v,w} is the random variable defined as ξ'_{v,w}(u):= X u=v ξ(v)+ξ(w)-X u=w ξ(u) with X∼(ξ(u)+ξ(v),s,s). We recover the uniform reshuffling model by setting s=1. We remark that in the binomial splitting process of <cit.>, the random variable X is chosen instead according to a binomial distribution (recall we obtain a binomial with probability parameter 1/2 by sending s→∞ in the above beta-binomial). The symmetric beta-binomial splitting process (BBSP) on a connected graph with positive edge weights is irreducible on Ω_G,m and, by checking detailed balance, one can determine that the m-particle BBSP on G with parameter s (denoted BB(G,s,m)) has unique equilibrium distribution π^BB(G,s,m)(ξ)∝∏_v∈ VΓ(s+ξ(v))/ξ(v)!, ξ∈Ω_G,m. Recall that the total variation distance between two probability measures μ and ν defined on the same finite set Ω is μ-ν_TV:=∑_ω∈Ω(μ(ω)-ν(ω))_+, where for x∈R, x_+:=max{x,0}. For any irreducible Markov process (ξ_t)_t≥0 with state space Ω, and equilibrium distribution π, the ε-total variation mixing time is t_mix(ε):=inf{t≥0: max_ξ_0∈Ωℒ[ξ_t]-π_TV≤ε} for any ε∈(0,1). We write t_mix^BB(G,s,m)(ε) for the ε-total variation mixing time of BB(G,s,m). For i and j distinct vertices of G, we also write M̂_i,j(G) for the meeting time of two independent random walks started from vertices i and j, each moving as BB(G,s,1), that is, the time that the two walks are on neighbouring vertices and the edge between them rings for one of the walks. Recalling that BetaBin(1,s,s)∼Bernoulli(1/2), we see that M̂_i,j(G) does not depend on s and is just the meeting time of two independent random walks on the graph obtained from G by halving the edge weights. We assume throughout that V={1,…,n}. Our main result is as follows. Fix s∈Q positive and write s=b/a with a and b coprime. There exists a universal constant C>0 such that for any size n connected graph G with positive edge weights, and any integer m≥2, ∀ ε∈(0,1/4), t_mix^BB(G,s,m)(ε)≤ C(s)log(n+m/ε)max_i,jEM̂_i,j(G), where C(s)=Ca(p^*)^-2log(12a(p^*)^-2) log(a+b), p^*=(5/12)^2s/(6B(s,s)) for s<20, and p^*=1/6(1-20/s+1) when s≥20, with B(·,·) the beta function. Observe that p^*→1/6 as s→∞, whereas p^*→0 as s→0. The quantity 1/s can be seen as measuring the strength that particles tend to “clump together”, with the strength increasing as 1/s→∞. Thus it is not surprising to obtain an upper-bound which increases as s→0, as breaking apart clumps of particles takes longer. Our methodology does not allow us to immediately deduce results in the case of s irrational. §.§ Related work The beta-binomial splitting process is closely related to the binomial splitting process (although our methods do not obviously extend to this model). 
In <cit.>, the authors show that the binomial splitting process (as well as a more general version in which vertices have weights) exhibits total variation cutoff (abrupt convergence to equilibrium) at time 1/2t_rellog m (with t_rel the relaxation time) for graphs satisfying a finite-dimensional geometry assumption provided the number of particles m is at most order n^2 (they also obtain a pre-cutoff result without this restriction on particle numbers). For instance on the cycle their results show that the binomial splitting process mixes at time Θ(n^2log m) for m≤ n^2. On the other hand, for the beta-binomial splitting process on the cycle, our results give an upper bound of (n^2log(n+m)) (with the implicit constant depending on the parameter s). The beta-binomial splitting process has, in a certain sense, more dependency between the movement of the particles compared with the binomial splitting process, which in turn means any analysis on the mixing time is more involved. To see this, consider that in the binomial splitting process, when an edge rings each particle on the edge decides which vertex to jump to independently of the other particles; this independence is not present in the beta-binomial splitting process. There has been a flurry of activity in recent years analysing mixing times of continuous mass (rather than discrete particles) redistribution processes <cit.>. The uniform reshuffling model (when run on the complete graph) is the discrete-space version of a Gibbs sampler on the n-simplex, the mixing time of which is analysed in <cit.> and <cit.>. In <cit.>, the total variation mixing time of the Gibbs sampler is shown to be (n^2log n); the argument can be used (as noted by <cit.>) to obtain a mixing time of (n^2log n) of the uniform reshuffling model on the complete graph (in which edge weights are all 1/(n-1)), provided the number of particles m is at least n^5.5. The arguments in <cit.> improve this result when m>n^18.5, obtaining (nlog n) as the mixing time of the uniform reshuffling model on the complete graph in this regime. Our results improve the best known bound on the mixing time of the uniform reshuffling model on the complete graph to (n^2log n) for m≤ n^5.5. More generally, the symmetric beta-binomial splitting process is a discrete-space version of a Gibbs sampler on the n-simplex, in which mass is redistributed across the vertices of a ringing edge according to a symmetric beta random variable. In <cit.>, cutoff is demonstrated at time 1/π^2 n^2log n for this model on the line, provided the beta parameter (which we denote by s here) is at least 1. While our upper-bound for the discrete-space model holds also for some s∈(0,1), we are restricted to s∈Q by the nature of our analysis. A continuous-space version of the binomial splitting process is the averaging process (also known as the repeated average model), introduced by <cit.>. In this model, when an interaction occurs between two vertices, their mass is redistributed equally between them. Mixing times for this process have been studied with total variation cutoff demonstrated on the complete graph <cit.>, and on the hypercube and complete bipartite graphs <cit.>. A general lower bound for the mixing time of the averaging process on any connected graph is obtained by <cit.>. Lastly, a model similar in flavour to the beta-binomal splitting process and which also has applications in econophysics is the immediate exchange process proposed in <cit.> and its generalisation <cit.>. 
In the discrete version of the generalised immediate exchange process, when an edge updates, each vertex on the edge gives to the other vertex a random number of its particles, chosen according to a beta-binomial distribution. Again, however, our methods do not obviously extend to this model (for our methodology it is important that updates are distributionally symmetric over the vertices on a ringing edge), and obtaining bounds on the mixing time of this process appears to be an open problem. §.§ Heuristics In order to bound the total variation (TV) distance between the time-t states of two BB(G,s,m) processes started from arbitrary configurations, we use the triangle inequality to reduce the problem to bounding the TV distance between the time-t states of two BB(G,s,m) configurations which start from neighbouring configurations (<ref>), that is, configurations which differ by the action of moving a single particle (from any vertex to any other). We can then bound this latter TV distance by the TV distance of the time-t states of two processes which each evolve similarly to a BB(G,s,m) process but with the incongruent particle marked to distinguish it from the rest (<ref>). We call this process a MaBB (marked beta-binomial splitting) process. A chameleon process will then be used to bound the total variation distance between two MaBB processes (or, more precisely, between a MaBB process and one in which the marked particle is “at equilibrium” given the configuration of non-marked particles, Proposition <ref>). In the chameleon process associated with a MaBB, the non-marked particles are replaced with black particles (which are coupled to evolve identically to the non-marked particles). The purpose of the chameleon process is to provide a way to track how quickly the marked particle in the MaBB becomes mixed. We achieve this by having red particles in the chameleon process, with each additional red particle on a vertex corresponding to an increase in the probability that in the MaBB, the marked particle is on that vertex. It turns out that (see (<ref>)), if we construct the MaBB appropriately, then given we are at equilibrium and we observe the non-marked particles in a certain configuration ξ, the probability that the marked will be on vertex v is proportional to aξ(v)+b, where ξ(v) is the observed number of non-marked particles on v, and a and b are the coprime integers with s=b/a. In the chameleon process this will correspond to having aξ(v)+b red particles on v, for every v∈ V. It turns out that bounding how long it takes the chameleon process to reach an all-red state (where there are aξ(v)+b red particles on each vertex v when the black particles are in configuration ξ) when we condition on this happening before reaching a no-red state (an event we call Fill) is key to bounding the total variation distance between two MaBB processes. This calculation is carried out in Section <ref>. §.§ Outline of the rest of the paper The rest of the paper is structured as follows. In Section <ref> we identify five key properties enjoyed by the BBSP, which includes writing the equilibrium distribution (<ref>) explicitly in terms of a and b. In Section <ref>, we give the construction of the MaBB process; firstly we present the dynamics of a single step, and then we show how the MaBB can be constructed `graphically'. The chameleon process is constructed in Section <ref>. 
We again give the dynamics of a single step, before showing how the same graphical construction can be used to build the entire trajectory of the chameleon process. Properties of the chameleon process, which allow us to make the connection to the MaBB, are presented in Section <ref>. In Section <ref> we show that choosing the round length (a parameter of the chameleon process) to be on the order of the maximal expected meeting time of two independent random walks on G suffices to ensure that there are, in expectation, a significant number of pink particles created during each round. This, in turn, can be used to show that an all-red state is reached quickly, given event Fill. We complete the proof of Theorem <ref> in Section <ref>. An Appendix follows, in which we collect some of the proofs requiring lengthy case analyses. Finally, we give a possible simulation of the chameleon process over three time steps to illuminate the reader further on its evolution. § KEY PROPERTIES OF THE BETA-BINOMIAL SPLITTING PROCESS We fix s∈Q positive (with s=b/a for a and b coprime), connected graph G of size n∈N, and integer m≥2, and demonstrate five properties of BB(G,s,m) needed to prove Theorem <ref>. For e∈ E and ξ,ξ'∈Ω_G,m, we denote by P_e^BB(G,s,m)(ξ,ξ') the probability that, given the BB(G,s,m) configuration is ξ and edge e rings, the new configuration is ξ'. Further, for v∈ V, we also write C_ξ,v for the BB(G,s,m+1) configuration which satisfies C_ξ,v(u)=ξ(u)+δ_v(u), for u∈ V. BB(G,s,m) satisfies the following properties: * BB(G,s,m) is irreducible on Ω_G,m. * BB(G,s,m) is reversible with equilibrium distribution π^BB(G,s,m)(ξ)∝∏_v∈ V: ξ(v)>01/ξ(v)!∏_i=0^ξ(v)-1(ai+b), ξ∈Ω_G,m. * Updates are symmetric: if the configuration of BB(G,s,m) is ξ and edge e={v,w} rings to give new configuration ξ', then ξ'(v)d=ξ'(w). * Updates have a chance to be near even split: There exists probability p^*∈(0,1/3) such that * if the configuration of BB(G,s,m) is ξ with ξ(v)+ξ(w)≥ 2 and edge e={v,w} rings, with probability at least p^*, the new configuration ξ' has ξ'(v)∈[1/3(ξ(v)+ξ(w)),2/3(ξ(v)+ξ(w))], * if the configuration of BB(G,s,m) is ξ with ξ(v)+ξ(w)=2 and edge e={v,w} rings, the probability that both particles will be on the same vertex in the new configuration is at least 2p^*. Moreover, it suffices to take p^*=(5/12)^2s/(6B(s,s)) for s<20 and p^*=1/6(1-20/s+1) for s≥20. * The heat kernel satisfies the following identity: for any ξ,ξ'∈Ω_G,m, e={v,w}∈ E, (ξ'(v)+1)P_e^BB(G,s,m+1)(C_ξ,v,C_ξ',v)+(ξ'(w)+1)P_e^BB(G,s,m+1)(C_ξ,v,C_ξ',w) =(ξ(v)+ξ(w)+1)P_e^BB(G,s,m)(ξ,ξ'). Property <ref> is immediate. Recall the process has equilibrium distribution π^BB(G,s,m)(ξ)∝∏_v∈ VΓ(s+ξ(v))/ξ(v)!, ξ∈Ω_G,m. Since s=b/a with a and b coprime, this is of the form (<ref>): π^BB(G,s,m)(ξ) ∝∏_v∈ V: ξ(v)>01/ξ(v)!∏_i=0^ξ(v)-1(i+s) =∏_ξ∈ V: ξ(v)>01/ξ(v)!∏_i=0^ξ(v)-1(i+b/a)∝∏_v∈ V: ξ(v)>01/ξ(v)!∏_i=0^ξ(v)-1(ai+b). Thus Property <ref> holds. Property <ref> holds as a beta-binomial X with parameters (k,s,s) has the same distribution as k-X, for any k∈N. To show Property <ref> holds (with p^*=(5/12)^2s/(6B(s,s))), we first show that with positive probability, if ξ(v)+ξ(w)≥2 then X/(ξ(v)+ξ(w))∈[1/3,2/3] where X∼BetaBin(ξ(v)+ξ(w),s,s). Recall that to sample a BetaBin(ξ(v)+ξ(w), s, s), we can first sample Y∼Beta(s,s) and then given Y, sample Bin(ξ(v)+ξ(w),Y). We first observe that if s≥ 1, for such random variable Y, with probability at least (5/12)^2s/(2B(s,s)) (where B(s,t) is the beta function), Y will be in the interval [5/12,7/12]. 
This can be seen by noting that the density function of Y in the interval [5/12,7/12] is minimised on the boundary. If instead s<1, then Y will be in [5/12,7/12] with probability at least (1/2)^{2s}/(2B(s,s)). Further, if s≥20 then we can use Chebyshev's inequality to obtain P(Y∈[5/12,7/12])≥ 1-36/(2s+1)≥(1/2)(1-20/(s+1)). Fix y∈[5/12,7/12] and let Z∼Bin(ξ(v)+ξ(w),y). We observe that P(Z∈[(ξ(v)+ξ(w))/3,2(ξ(v)+ξ(w))/3]) is minimized (over ξ(v)+ξ(w)≥ 2 and y∈[5/12,7/12]) when ξ(v)+ξ(w)=4 and y=7/12, with a value of 1225/3456>1/3. Combining, we obtain that we can take p^*=(5/12)^{2s}/(6B(s,s)), and when s≥20 we can take p^*=(1/6)(1-20/(s+1)). For the second part of Property <ref>, if ξ(v)+ξ(w)=2 then the probability that both particles end up on the same vertex is 1-s/(1+2s), which is larger than 2p^* for our choice of p^*. For Property <ref>, observe that P_e^BB(G,s,m+1)(C_ξ,v,C_ξ',v) =\binom{ξ(v)+ξ(w)+1}{ξ'(v)+1}B(ξ'(v)+1+s,ξ'(w)+s)/B(s,s), P_e^BB(G,s,m+1)(C_ξ,v,C_ξ',w) =\binom{ξ(v)+ξ(w)+1}{ξ'(v)}B(ξ'(v)+s,ξ'(w)+1+s)/B(s,s), P_e^BB(G,s,m)(ξ,ξ') =\binom{ξ(v)+ξ(w)}{ξ'(v)}B(ξ'(v)+s,ξ'(w)+s)/B(s,s). Thus (ξ'(v)+1)P_e^BB(G,s,m+1)(C_ξ,v,C_ξ',v) =(ξ(v)+ξ(w)+1)!/(ξ'(w)!ξ'(v)!)·B(ξ'(v)+1+s,ξ'(w)+s)/B(s,s), (ξ'(w)+1)P_e^BB(G,s,m+1)(C_ξ,v,C_ξ',w) =(ξ(v)+ξ(w)+1)!/(ξ'(w)!ξ'(v)!)·B(ξ'(v)+s,ξ'(w)+1+s)/B(s,s). Adding these and using that B(x,y)=B(x+1,y)+B(x,y+1), we obtain (ξ'(v)+1)P_e^BB(G,s,m+1)(C_ξ,v,C_ξ',v)+(ξ'(w)+1)P_e^BB(G,s,m+1)(C_ξ,v,C_ξ',w) =(ξ(v)+ξ(w)+1)!/(ξ'(w)!ξ'(v)!)·B(ξ'(v)+s,ξ'(w)+s)/B(s,s) =(ξ(v)+ξ(w)+1)P_e^BB(G,s,m)(ξ,ξ'). § AN AUXILIARY PROCESS: MABB §.§ Initial MaBB construction In order to use a chameleon process in our setting, we need to introduce an auxiliary process related to the BBSP but with one of the particles distinguishable from the rest. To this end, we shall define a marked beta-binomial splitting process (MaBB) to be a process similar to the BBSP except that one particle is marked. Before giving the construction of the MaBB, we first discuss some of the key properties required. Firstly, we need that the time-t total variation distance of the BBSP to its equilibrium distribution is bounded by the time-t total variation distance of the MaBB to its equilibrium distribution. We can achieve this by using the contraction property of total variation distance as long as it is indeed the case that if we forget the marking in a MaBB we obtain the BBSP, see (<ref>). Secondly, we require that, given a particular edge e rings, the law which governs the movement of the non-marked particles does not depend on the location of the marked particle (this will ensure that the uniform random variables in element 3 of the graphical construction given in Section <ref> can be taken to be independent). This is not to say that the locations of the non-marked particles are independent of the location of the marked – indeed they are not – as the trajectory of the marked particle depends on the trajectories of the non-marked particles. We write Ω_G,m' for the set of configurations of the MaBB, and members of Ω_G,m' are of the form (ξ,y) where ξ∈Ω_G,m-1 with ξ(v) denoting the number of non-marked particles at vertex v, and y∈ V denotes the location of the marked particle. Let P_e^MaBB((ξ,v),(ξ',w)) denote the probability that, given the MaBB configuration is (ξ,v) and edge e rings, the new configuration is (ξ',w).
Then in order to ensure that if we forget the marking in the MaBB we obtain the BBSP, it suffices that, for every edge e={v,w} and ξ,ξ'∈Ω_G,m-1, P_e^MaBB((ξ,v),(ξ',v))+P_e^MaBB((ξ,v),(ζ,w))=P_e^BB(G,s,m)(C_ξ,v,C_ξ',v) where ζ∈Ω_G,m-1 satisfies ζ(y)=ξ'(y)+δ_v(y)-δ_w(y) for y∈ V. The reason is that if we forget the marking in either of MaBB configurations (ξ',v) or (ζ,w), we obtain the same BBSP configuration C_ξ',v, and these are the only configurations with this property which are obtainable from (ξ,v) when e rings. We now discuss how the MaBB is constructed and why it satisfies the two desirable properties. The MaBB is coupled to the BBSP so that it updates at the same times. When an edge rings in the BBSP, if the marked particle is absent from the vertices of the ringing edge, the update of the MaBB is as in the BBSP. If instead the marked particle is on one of the vertices of the ringing edge, we first remove the marked particle, then move the remaining (i.e. non-marked) particles as in the BBSP, and then add the marked particle back to one of the two vertices on the ringing edge with a certain law. Specifically, if e={v,w} is the ringing edge and the MaBB configuration before the update is (ξ,v) and after the update the non-marked particles are in configuration ξ', we place the marked particle on v with probability P_e,ξ,ξ'(v,v):=ξ'(v)+1/ξ(v)+ξ(w)+1P_e^BB(G,s,m)(C_ξ,v,C_ξ',v)/P_e^BB(G,s,m-1)(ξ,ξ'), and place it on w with probability P_e,ξ,ξ'(v,w):=ξ'(w)+1/ξ(v)+ξ(w)+1P_e^BB(G,s,m)(C_ξ,v,C_ξ',w)/P_e^BB(G,s,m-1)(ξ,ξ'). This exhausts all possibilities (i.e. P_e,ξ,ξ'(v,v)+P_e,ξ,ξ'(v,w)=1) by Property <ref>. Further, it is immediate from this construction that the movement of non-marked particles does not depend on the location of the marked particle. So it remains to show that (<ref>) holds. We see this as follows: P_e^MaBB((ξ,v),(ξ',v))+P_e^MaBB((ξ,v),(ζ,w)) =P_e,ξ,ξ'(v,v)P_e^BB(G,s,m-1)(ξ,ξ')+P_e,ξ,ζ(v,w)P_e^BB(G,s,m-1)(ξ,ζ) =ξ'(v)+1/ξ(v)+ξ(w)+1P_e^BB(G,s,m)(C_ξ,v,C_ξ',w)+ζ(w)+1/ξ(v)+ξ(w)+1P_e^BB(G,s,m)(C_ξ,v,C_ζ,w) =ξ'(v)+1/ξ(v)+ξ(w)+1P_e^BB(G,s,m)(C_ξ,v,C_ξ',w)+ξ'(w)/ξ(v)+ξ(w)+1P_e^BB(G,s,m)(C_ξ,v,C_ζ,w) =P_e^BB(G,s,m)(C_ξ,v,C_ξ',v), where the last equality uses C_ξ',v=C_ζ,w. This description for the MaBB is useful as it clearly demonstrates that the movement of the non-marked particles does not depend on the location of the marked particle. There is an equivalent (distributionally-speaking) description of the MaBB which is useful for proving some other properties. Note that for y∈{v,w}=e, P_e^MaBB((ξ,v),(ξ',y))=ξ'(y)+1/ξ(v)+ξ(w)+1P_e^BB(G,s,m)(C_ξ,v,C_ξ',y). Thus an update of the MaBB from state (ξ,v) when edge e={v,w} rings can be obtained by first removing the marking on the marked particle (but leaving it on the vertex) to obtain the BBSP configuration C_ξ,v, then updating according to the BBSP, which gives BBSP configuration C_ξ',y with probability P_e^BB(G,s,m)(C_ξ,v,C_ξ',y), and then choosing a particle from edge e uniformly and applying a mark to it (so the marked particle will be on y with probability ξ'(y)+1/ξ(v)+ξ(w)+1). We shall use this alternative description later in the paper (see the proof of Lemma <ref>). For k∈N_0, set χ(k)=ak+b with a and b the coprime integers from Property <ref>. We call this the colour function. The importance of the colour function becomes apparent from the following result. Fix vertices v and w with e={v,w} an edge of the graph. For any ξ, ξ'∈Ω_G,m-1, χ(ξ(v))P_e,ξ,ξ'(v,v)+χ(ξ(w))P_e,ξ,ξ'(w,v)=χ(ξ'(v)). 
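Before the proof, a quick numerical sanity check of this identity may be helpful. The following Python sketch (ours, not part of the formal development; all function names are illustrative) implements the beta-binomial transition probability, the colour function χ and the reassignment probabilities P_e,ξ,ξ'(·,v) displayed above, and confirms the identity on random inputs. Note that, by the displayed formulas, P_e,ξ,ξ'(v,v)=P_e,ξ,ξ'(w,v): the BBSP update law depends only on the total number of particles on the ringing edge, and the sketch exploits this.

import math, random

def log_beta(x, y):
    return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)

def bb_pmf(n, k, s):
    # P(BetaBin(n, s, s) = k) = binom(n, k) * B(k + s, n - k + s) / B(s, s)
    return math.comb(n, k) * math.exp(log_beta(k + s, n - k + s) - log_beta(s, s))

def chi(k, a, b):
    # the colour function chi(k) = a k + b
    return a * k + b

def reassign_to_v(N, new_v, s):
    # P_{e,xi,xi'}(y, v) for y in {v, w}: probability that the marked particle is
    # placed on v when the non-marked particles on the edge go from a total of N
    # to new_v = xi'(v) on v (the same value for y = v and y = w)
    return (new_v + 1) / (N + 1) * bb_pmf(N + 1, new_v + 1, s) / bb_pmf(N, new_v, s)

random.seed(0)
a, b = 3, 2                               # s = b/a = 2/3
s = b / a
for _ in range(5):
    xv, xw = random.randint(0, 6), random.randint(0, 6)
    new_v = random.randint(0, xv + xw)    # any reachable xi'(v)
    lhs = (chi(xv, a, b) + chi(xw, a, b)) * reassign_to_v(xv + xw, new_v, s)
    print(math.isclose(lhs, chi(new_v, a, b)), lhs, chi(new_v, a, b))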
The (yet to be defined) chameleon process will allow us to track possible locations of the marked particle in the MaBB, given the location of the non-marked particles. If we run the MaBB for a long time, and then observe that the configuration of non-marked particles is ξ, the probability the marked particle is on vertex v will be close to π_ξ(v) (defined in (<ref>)). If we scale π_ξ(v) by a(m-1)+bn, we obtain χ(ξ(v)) (see (<ref>)). Together with reversibility, this is essentially the reason why Lemma <ref> is true. Our goal in the chameleon process will be to have χ(ξ(v)) red particles on vertex v, for all v, as this will signal that the marked particle is “mixed” (see Proposition <ref>). In fact, the chameleon process will always have χ(ξ(v)) non-black particles on v (they will be either red, white, or pink), when there are ξ(v) black particles on v. Reversibility of the BBSP (Property <ref>) gives that for any edge e and configurations ζ,ζ'∈Ω_G,m, π^BB(G,s,m)(ζ)P^BB(G,s,m)_e(ζ,ζ')=π^BB(G,s,m)(ζ')P^BB(G,s,m)_e(ζ',ζ). For any v,w∈ V and ξ and ξ' which satisfy ξ(v)+ξ(w)=ξ'(v)+ξ'(w), we have ∑_y∈{v,w}C_ξ,v(y)=∑_y∈{v,w}C_ξ,w(y)=∑_y∈{v,w}C_ξ',v(y)=∑_y∈{v,w}C_ξ',w(y)=ξ(v)+ξ(w)+1. Observe that P^MaBB_e((ξ,v),(ξ',w))>0 is equivalent to P^MaBB_e((ξ',w),(ξ,v))>0 and implies ξ(v)+ξ(w)=ξ'(v)+ξ'(w). Thus using (<ref>) and (<ref>), we have π^BB(G,s,m)(C_ξ,v)(ξ(v)+1)P^MaBB_e((ξ,v),(ξ',w)) =π(C_ξ',w)(ξ'(w)+1)P^MaBB_e((ξ',w),(ξ,v)). By similar arguments we also have π^BB(G,s,m)(C_ξ,w)(ξ(w)+1)P^MaBB_e((ξ,w),(ξ',v)) =π^BB(G,s,m)(C_ξ',v)(ξ'(v)+1)P^MaBB_e((ξ',v),(ξ,w)), and π^BB(G,s,m)(C_ξ,y)(ξ(y)+1)P^MaBB_e((ξ,y),(ξ',y)) =π^BB(G,s,m)(C_ξ',y)(ξ'(y)+1)P^MaBB_e((ξ',y),(ξ,y)), y∈{v,w}. Hence the MaBB process is reversible with equilibrium distribution π^MaBB((ξ,v))∝π^BB(G,s,m)(C_ξ,v)(ξ(v)+1). For each ξ∈Ω_G,m-1, we define π_ξ(v):=π^MaBB((ξ,v))/∑_yπ^MaBB((ξ,y)), so that π_ξ(v)=π^BB(G,s,m)(C_ξ,v)(ξ(v)+1)/∑_yπ^BB(G,s,m)(C_ξ,y)(ξ(y)+1). Property <ref> gives that π^BB(G,s,m)(C_ξ,v)∝aξ(v)+b/ξ(v)+1∏_w∈ V: ξ(w)>01/ξ(w)!∏_i=0^ξ(w)-1(ai+b), and hence π_ξ(v)=aξ(v)+b/a(m-1)+bn=χ(ξ(v))/a(m-1)+bn. It follows that to prove the lemma, it suffices to show that π_ξ(v)P_e,ξ,ξ'(v,v)+π_ξ(w)P_e,ξ,ξ'(w,v)=π_ξ'(v), equivalently, π^MaBB((ξ,v))/∑_yπ^MaBB((ξ,y))P_e,ξ,ξ'(v,v)+π^MaBB((ξ,w))/∑_yπ^MaBB((ξ,y))P_e,ξ,ξ'(w,v)=π^MaBB((ξ',v))/∑_yπ^MaBB((ξ',y)). Note that P_e,ξ,ξ'(v,v)=P^MaBB_e((ξ,v),(ξ',v))/∑_y∈{v,w}P^MaBB_e((ξ,v),(ξ',y)) =P^MaBB_e((ξ,v),(ξ',v))/P̂^MaBB_e(ξ,ξ'), where we define P̂^MaBB_e(ξ,ξ'):=∑_y∈{v,w}P^MaBB_e((ξ,v),(ξ',y)) and note that this does not depend on v. Thus the left-hand side of (<ref>) can be written as π^MaBB((ξ,v))P^MaBB_e((ξ,v),(ξ',v))+ π^MaBB((ξ,w))P^MaBB_e((ξ,w),(ξ',v))/P̂^MaBB_e(ξ,ξ')∑_yπ^MaBB((ξ,y)) =P̂^MaBB_e(ξ',ξ)π^MaBB((ξ',v))/P̂^MaBB_e(ξ,ξ')∑_yπ^MaBB((ξ,y)) using the reversibility of MaBB. Thus showing (<ref>) is equivalent to showing P̂^MaBB_e(ξ',ξ)∑_yπ^MaBB((ξ',y))=P̂^MaBB_e(ξ,ξ')∑_yπ^MaBB((ξ,y)). 
We use reversibility to show this identity: P̂^MaBB_e(ξ',ξ)∑_yπ^MaBB((ξ',y)) =π^MaBB((ξ',v))P̂^MaBB_e(ξ',ξ)+π^MaBB((ξ',w))P̂^MaBB_e(ξ',ξ) =+∑_y∉{v,w}π^MaBB((ξ',y))P̂^MaBB_e(ξ',ξ) =π^MaBB((ξ',v))∑_y∈{v,w}P^MaBB_e((ξ',v),(ξ,y))+π^MaBB((ξ',w))∑_y∈{v,w}P^MaBB_e((ξ',w),(ξ,y)) = +∑_y∉{v,w}π^MaBB((ξ',y))∑_z P^MaBB_e((ξ',z),(ξ,y)) =∑_y∈{v,w}π^MaBB((ξ,y))(P^MaBB_e((ξ,y),(ξ',v))+P^MaBB_e((ξ,y),(ξ',w))) =+∑_y∉{v,w}π^MaBB((ξ',y))P^MaBB_e((ξ',y),(ξ,y)) =π^MaBB((ξ,v))P̂^MaBB_e(ξ,ξ')+π^MaBB((ξ,w))P̂^MaBB_e(ξ,ξ') =+∑_y∉{v,w}π^MaBB((ξ,y))P^MaBB_e((ξ,y),(ξ',y)) =π^MaBB((ξ,v))P̂^MaBB_e(ξ,ξ')+π^MaBB((ξ,w))P̂^MaBB_e(ξ,ξ') =+∑_y∉{v,w}π^MaBB((ξ,y))∑_zP^MaBB_e((ξ,y),(ξ',z)) =π^MaBB((ξ,v))P̂^MaBB_e(ξ,ξ')+π^MaBB((ξ,w))P̂^MaBB_e(ξ,ξ') =+∑_y∉{v,w}π^MaBB((ξ,y))P̂^MaBB_e(ξ,ξ') =P̂^MaBB_e(ξ,ξ')∑_yπ^MaBB((ξ,y)). §.§ Graphical construction of the MaBB We present a `graphical construction' of the MaBB, which will also be used for the chameleon process. The motivation behind this construction is that it contains all of the random elements from which one can then deterministically construct both the MaBB and the chameleon process. In particular, it allows us to construct the MaBB and the chameleon process on the same probability space. The graphical construction is comprised of the following elements: * A Poisson process of rate ∑_e r_e which gives the times {τ_1,τ_2,…} at which edges ring (we also set τ_0=0). * A sequence of edges {e_r}_r≥ 1 so that edge e_r is the edge which rings at the rth time τ_r of the Poisson process; for each r≥1 and e∈ E, P(e_r=e)∝ r_e. * For each r≥ 1 an independent uniform random variable U_r^b on [0,1] (which will be used to determine how non-marked particles in the MaBB update at time τ_r when edge e_r rings), and an independent uniform random variable U_r^c on [0,1] (used for updating the location of the marked particle in MaBB). * A sequence of independent fair coin flips {d_ℓ}_ℓ≥1 (Bernoulli(1/2) random variables). These are only used in the chameleon process. We now demonstrate how the graphical construction is used to build the MaBB of interest, given an initial configuration. Fix u∈[0,1], e={v,w}∈ E, and ξ∈Ω_G,m-1. Without loss of generality, suppose v<w (recall V=[n]) and suppose {ξ_1,…,ξ_r} are the possible configurations of the non-marked particles that can be obtained from non-marked configuration ξ when edge e rings. Without loss of generality suppose they are ordered so that |ξ_i(v)-1/2(ξ(v)+ξ(w))|≤|ξ_j(v)-1/2(ξ(v)+ξ(w))| if and only if i≤ j, with any ties resolved by ordering earlier the configuration which places fewer particles on v. We now define two deterministic functions MaBB:[0,1]× E×Ω_G,m-1→Ω_G,m-1 and MaBB^*:[0,1]×[0,1]× E×Ω_G,m-1× V→ V. Firstly, we define MaBB(u,e,ξ) to be the configuration of non-marked which satisfies, for each 1≤ i≤ r, MaBB(u,e,ξ)=ξ_i if ∑_j<iP_e^BB(G,s,m-1)(ξ,ξ_j)<u≤∑_j≤ iP_e^BB(G,s,m-1)(ξ,ξ_j). When u is chosen according to a uniform on [0,1] this gives that MaBB(u,e,ξ) has the law of the new configuration of non-marked particles (given e rings and the old configuration is ξ), i.e. for a uniform U on [0,1], MaBB(U,e,ξ) has law P_e^BB(G,s,m-1)(ξ,·). By Property <ref>, if ξ(v)+ξ(w)≥ 2, then u≤ p^* MaBB(u,e,ξ)(v)∈[1/3(ξ(v)+ξ(w)),2/3(ξ(v)+ξ(w))] (this is the reason for choosing the ordering of the new configurations as described in (<ref>), and is used in the proof of Lemma <ref>). Secondly, for m∈ V and u,u'∈[0,1] we set MaBB^*(u,u',e,ξ,m)= m m∉ e m∈ e u'<P_e,ξ,MaBB(u,e,ξ)(m,m), e∖{m} We can now obtain a realisation of the MaBB as follows. 
Suppose we initialise at state (ξ_0,x_0). Given the state at time τ_i, the MaBB remains constant until the next update at time τ_i+1, at which time ξ_τ_i+1=MaBB(U_i+1^b,e_i+1,ξ_τ_i), m_τ_i+1=MaBB^*(U_i+1^b,U_i+1^c,e_i+1,ξ_τ_i,m_τ_i). § THE CHAMELEON PROCESS §.§ Introduction to the chameleon process As stated previously, the chameleon process will be built using the graphical construction. The chameleon process is an interacting particle system consisting of coloured particles moving on the vertices of a graph (the same graph as the MaBB). Particles can be of four colours: black, red, pink and white. Each vertex v in the chameleon process is occupied at a given time by a certain number, B(v), of black particles and χ(B(v)) non-black particles (recall χ is the colour function). Associated with each vertex is a notion of the amount of redness, called (this terminology is consistent with previous works using a chameleon process). Specifically we write _t(v) for the number of red particles plus half the number of pink particles at vertex v at time t in the chameleon process. If there are B(v) black particles at vertex v at time t then 0≤_t(v)≤χ(B(v)), with the minimum (resp. maximum) attained when all non-black particles are white (resp. red). We use the initial configuration of the MaBB to initialise the chameleon process. Each non-marked particle on a vertex in the MaBB configuration corresponds to a black particle at the same vertex in the chameleon process. The vertex with the marked particle in the MaBB is initialised in the chameleon process with all non-black particles as red. Every other vertex has all non-black particles as white. The chameleon process consists of rounds of length T (a parameter of the process), and at the end of some rounds is a depinking time. Whether we have a depinking time (at which we remove all pink particles, replacing them all with either red or white particles) will depend on the numbers of red, pink and white particles in the graph at the end of that round. If at the start of the round there are fewer red than white particles then we shall assign to each red particle a unique white particle; thus each red particle has a paired white particle. Later, our interest will be in determining how many red particles `meet' their paired white particle during a round, where two particles are said to meet if, at some moment in time, they are both on the same ringing edge (unless they start on the same vertex, they will be on different vertices when they meet). If there are fewer white than red particles at the start of the round we shall reverse roles so that each white particle gets a unique paired red particle. In the chameleon process we can only create new pink particles (by re-colouring red and white particles) at the meeting times of paired particles. It is this restriction which will lead to us taking the round length to be the maximal expected meeting time of two random walks. In previous works using other versions of the chameleon process, the idea of using paired particles is not used (it is not needed). It becomes useful here because a priori there is no constant (not depending on the number of particles or size of the graph) bound on the number of particles which may occupy a vertex. As a result, without using pairing, it turns out we will need to understand the movement of 3 coloured particles simultaneously, rather than the movement of one red and one white until their meeting time. 
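Before moving on to the single-step dynamics of the chameleon process, the following sketch (ours; the graph, rates, horizon and initial data are purely illustrative) shows how the graphical construction drives a MaBB trajectory. For brevity it uses the equivalent description of an update given earlier: unmark the marked particle, perform the BBSP update of all particles on the ringing edge, and then mark a uniformly chosen particle on that edge.

import numpy as np

def mabb_trajectory(rng, edges, rates, xi0, marked0, s, t_end):
    # Simulate a MaBB on vertices {0, ..., n-1} up to time t_end.
    # xi0: array of non-marked counts; marked0: vertex carrying the marked particle.
    xi, marked, t = np.array(xi0, dtype=int), marked0, 0.0
    total_rate = float(sum(rates))
    edge_probs = np.array(rates, dtype=float) / total_rate
    while True:
        t += rng.exponential(1.0 / total_rate)              # next ringing time tau_r
        if t > t_end:
            return xi, marked                               # state held at time t_end
        v, w = edges[rng.choice(len(edges), p=edge_probs)]  # the ringing edge e_r
        on_edge = xi[v] + xi[w] + (marked in (v, w))        # all particles on e_r
        k_v = rng.binomial(on_edge, rng.beta(s, s))         # BetaBin(on_edge, s, s) split
        if marked in (v, w):
            marked = v if rng.random() * on_edge < k_v else w   # uniform particle gets the mark
            xi[v] = k_v - (marked == v)
            xi[w] = (on_edge - k_v) - (marked == w)
        else:
            xi[v], xi[w] = k_v, on_edge - k_v

# example: a 4-cycle with unit rates, s = 2/3, three non-marked particles and the
# marked particle all started on vertex 0
rng = np.random.default_rng(7)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(mabb_trajectory(rng, edges, [1.0, 1.0, 1.0, 1.0], [3, 0, 0, 0], 2 / 3, t_end=10.0))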
§.§ A single step of the chameleon process Our construction of the chameleon process is such that when an edge rings, we first observe how the non-marked particles move in the MaBB and move the black particles in the same way. Given the new configuration of black particles, the number of non-black particles on the vertices is determined by the colour function χ. After observing the movement of the black particles, we shall then determine the movement of the red particles (and if we are to pinken any) then the pre-existing pink particles (i.e. not any just-created pink particles) and finally the white particles. To specify an update more precisely, we introduce some notation. We shall define a probability θ(v)=θ(v,e,B(v),B(w),B'(v),B'(w),R(v),R(w),P(v),P(w)) which is a function of a vertex v, an edge containing that vertex e={v,w}, and non-negative integers B(v), B(w), B'(v), B'(w), R(v),R(w), P(v), P(w) which satisfy B'(v)≤ B(v)+B(w), B(v)+B(w)=B'(v)+B'(w), R(v)+P(v)≤χ(B(v)), R(w)+P(w)≤χ(B(w)). These `non-primed' integers shall represent the numbers of black/red/pink on the vertices of the edge e just prior to it ringing, and B'(v), B'(w) the number of black particles on v, w just after e rings. For simplicity we write R_v,w for R(v)+R(w) and P_v,w for P(v)+P(w). To define θ(v) we also define integers ℓ(v)=ℓ(v,R_v,w,B'(w)) :={R_v,w-χ(B'(w))}∨0, u(v)=u(v,R_v,w,B'(v)) :=χ(B'(v))∧ R_v,w, u(w)=u(w,R_v,w,B'(w)) :=χ(B'(w))∧ R_v,w=R_v,w-ℓ(v), ℓ^P(v)=ℓ^P(v,R_v,w,P_v,w,B'(w)) :={P_v,w-χ(B'(w))+u(w)}∨ 0, u^P(v)=u^P(v,R_v,w,P_v,w,B'(w)) :={χ(B'(v))-u(v)}∧ P_v,w. We also set ℓ(w)=R_v,w-u(v). The idea behind these definitions is the following. The values of χ(B'(v)) and χ(B'(w)) impose restrictions on the number of non-black particles which can occupy vertices v and w after the update. For example, the number of red particles on v cannot exceed χ(B'(v)); this gives an upper limit of u(v) for the number of red particles that we can place onto v after the update. On the other hand, the number of red particles on w after the update cannot exceed χ(B'(w)), which in turn means that the number of red particles on v has to be at least R_v,w-χ(B'(w)), giving a lower limit of ℓ(v). The difference between these values, i.e. u(v)-ℓ(v)=R_v,w-ℓ(v)-ℓ(w) is the number of flexible reds, that is, the number of red particles which can be either on v or w after the update. It is these flexible reds that we get a chance to pinken, with pink particles representing particles which are half red and half white. Once the values of u(v) and u(w) have been determined, based on how the black particles move, we can then place the pre-existing pink particles. We again have to ensure that the number of non-black particles on v does not exceed χ(B'(v)), and now there could be at most u(v) red particles, so we restrict to placing at most χ(B'(v))-u(v) pink particles; this gives u^P(v). There is a similar restriction on vertex w and through this we obtain a lower bound ℓ^P(v) on the number of pink particles to place onto v. The role of θ(v) is to give the probability of placing the lower limits on v, with 1-θ(v) then the probability of placing the upper limits on v. We choose θ(v) to satisfy θ(v)[ℓ(v)+1/2ℓ^P(v)]+(1-θ(v))[u(v)+1/2 u^P(v)] =(R(v)+1/2 P(v))P_e,B,B'(v,v)+(R(w)+1/2 P(w))P_e,B,B'(w,v)=:m^*(v).
This particular choice of θ(v) is necessary to ensure that the expected amount of ink at a vertex (given numbers of black particles on the vertices) matches the probability that the marked particle in the MaBB process is on that vertex (given the location of non-marked particles), see Lemma <ref>. The following lemmas shows that such a θ(v) exists and give bounds on its value. For every e,v,w,B,B',R,P, ℓ(v)+1/2ℓ^P(v)≤ m^*(v)≤ u(v)+1/2 u^P(v), and so in particular there exists θ(v)∈[0,1] satisfying (<ref>). Fix e={v,w}, B and η∈(0,1/2). If B' satisfies P_e,B,B'(v,v), P_e,B,B'(w,w)∈[η,1-η], then θ(v)∈[η,1-η]. The proofs of these lemmas involve lengthy (but straightforward) case analyses and can be found in the Appendix. We now describe in full detail the dynamics of a single step of the chameleon process, including the role of θ(v). We show how this fits with the graphical construction in the next section. We assume that pairings of red and white particles have already happened (these happen at the beginning of each round, more details are provided on this in the next section on how this is achieved through “label configurations”). As a preliminary step, we remove all non-black particles from the vertices of the ringing edge and place them into a pooled pile. They will be redistributed to the vertices during the steps described below. We update the black particles from B to B' according to the law of the movement of the non-marked particles in the MaBB (recall that the movement of non-marked particles does not depend on the location of the marked). Step 1: [Place lower bounds] If there are no red particles on v or w, skip straight to Step 4. Otherwise we proceed as follows. We introduce a notion of reserving paired particles in this step and put the lower bounds ℓ(v) and ℓ(w) of red particles onto vertices v and w. In choosing red particles to use for the lower bounds, it is important to avoid as much as possible the paired red particles (i.e. those reds for which their paired white is also on the ringing edge) so that they can be reserved for the set of flexible reds, as only reds which are both flexible and paired can actually be pinkened. Thus, when choosing from the pooled pile for the ℓ(v)+ℓ(w) reds for the lower bounds, we shall first choose the non-paired reds (and the specific ones chosen – i.e. the vertex they started from at this update step and the label if they have one (see the next section for a discussion on when and how to label particles) – is made uniformly). If there are insufficient non-paired reds, then once they are placed we choose from the paired reds (again uniformly). Step 2: [A fork in the road] With probability 2[θ(v)∧ (1-θ(v))] proceed to Step 3a; otherwise skip Step 3a and proceed to Step 3b. Step 3a: [Create new pink particles] Let k denote the number of paired red particles remaining in the pile after Step 1. Select (uniformly) k∧{⌈ [(|R|∧|W|)+|P|/2]/3⌉-|P|/2} paired red particles from the pile[By taking this minimum we ensure that the number of pink particles created won't exceed a certain threshold.], where |R|:=∑_v∈ VR(v), and similarly for |W| and |P| (where W(v) denotes the number of white particles on v). These are coloured pink and placed onto v. The paired white particles of these selected red particles are also coloured pink and placed onto w. Any paired red and any non-paired red left in the pile are then each independently placed onto v or w equally likely. Now proceed to Step 4. 
Step 3b: [Place remaining red particles] If θ(v)<1/2 put any remaining red particles from the pile onto v. As a result there will now be u(v) red particles on v. If instead θ(v)≥ 1/2, put any remaining red particles from the pile onto w (and so there are u(w) red particles on w.) Now proceed to Step 4. Step 4: [Place old pink particles] There may be some pink particles remaining in the pool (which were already pink at the start of the update). If not, skip to Step 5; otherwise with probability θ(v), put ℓ^P(v) of these pink particles on v, and the rest (i.e. u^P(w) of them) on w. With the remaining probability, instead put u^P(v) of them on v and the rest on w. Step 5: [Place white particles] The only possible particles left in the pile are white particles. These are placed onto v and w to ensure that the total number of non-black particles now on v is χ(B'(v)) (which also ensures there are χ(B'(w)) non-black particles on w since χ(B(v))+χ(B(w))=χ(B'(v))+χ(B'(w)) and no particles are created or destroyed). The choice of which white particles are put onto v is done uniformly. The next result shows the usefulness of reserving in guaranteeing a certain number of reserved pairs remain in the pool after Step 1. We again defer the proof to the Appendix. Write R^p_v,w for the number of paired red particles on e={v,w}, and set R^q_v,w=R(v)+R(w)- R^p_v,w. If there are k paired red particles on ringing edge e={v,w} then the number that are left remaining in the pooled pile after Step 1 above is at least k∧χ(B'(v))∧χ(B'(w)). Further, on the event that χ(B'(v))/χ(B'(w))∈[γ,1/γ] for some γ∈(0,1), the probability any particular paired red particle remains in the pool after Step 1 is at least γ uniformly over B, R^q_v,w, R^p_v,w and P. The next result gives the expected amount of ink after one step of using this algorithm. We state the result in terms of the first update, given any initial conditions. Recall m^*(v) is defined in (<ref>). For any v,w∈ V, B,R,P initial configurations of black, red and pink particles, and B' the configuration of black particles just after the first update (at time τ_1), E[_τ_1(v)| B, B', R, P, {e_1={v,w}}]=m^*(v). Recall that each red particle contributes 1 to the ink value of the vertex it occupies, and each pink particle contributes 1/2. We first consider the contribution to _τ_1(v) which comes from the particles placed onto v in Step 1. This is straightforward: we place ℓ(v) particles onto v from the pile and these are all red, thus the contribution to _τ_1(v) from Step 1 is simply ℓ(v). At Step 2 we do not place any new particles onto the vertices, but we do decide whether to proceed with Step 3a or Step 3b. If we do Step 3a then each red particle (paired or otherwise) in the pool will in expectation contribute a value of 1/2 to _τ_1(v): either it gets coloured pink as does its paired white and one of them is placed onto v, or it stays red and is placed onto v with probability 1/2. If we do Step 3b and θ(v)<1/2 then we place the remaining red particles on v which gives a total of u(v) red on v. If instead θ(v)≥ 1/2, we do not place any more red particles on v. Finally at Step 4 we place the pre-existing pink particles, each contributing 1/2 to the ink of the vertex they are placed on. 
Putting these observations together we obtain E[_τ_1(v)| B,B',R,P,{e_1={v,w}}] =ℓ(v)+2[θ(v)∧(1-θ(v))]u(v)-ℓ(v)/2+(1-2[θ(v)∧(1-θ(v))])θ(v)<1/2(u(v)-ℓ(v)) =+θ(v)ℓ^P(v)/2+(1-θ(v))u^P(v)/2 =ℓ(v)+θ(v)<1/2{θ(v)(u(v)-ℓ(v))+(1-2θ(v))(u(v)-ℓ(v))} =+θ(v)≥1/2{(1-θ(v))(u(v)-ℓ(v))}+θ(v)ℓ^P(v)/2+(1-θ(v))u^P(v)/2 =ℓ(v)+(1-θ(v))(u(v)-ℓ(v))+θ(v)ℓ^P(v)/2+(1-θ(v))u^P(v)/2 =θ(v)(ℓ(v)+ℓ^P(v)/2)+(1-θ(v))(u(v)+u^P(v)/2) =m^*(v). §.§ The evolution of the chameleon process We define a “particle configuration” to be a function V→N_0, which, in practice, will be the configuration of red, black, pink or white particles. For S a particle configuration we define |S|:=∑_v∈ VS(v). We also define a “label configuration” to be a function [a(m-1)+bn]→ V∪{0}, which will give the vertex occupied by the labelled particle of a certain colour (and which has value 0 if there is no particle of a given label). We discuss further this labelling now. At the start of every round we shall pair some red particles with an equal number of white particles. The way we do this, and how we track the movement of the paired particles, is by labelling paired red and white particles with a unique number. Suppose there are r red particles at the start of the ℓth round, and this is less than the number of white particles (otherwise, reverse roles of red and white in the following). We label the red particles with labels 1,…, r such that for any pair of vertices v and w, the label of any red particle on vertex v is less than the label of any red particle on vertex w if and only if v<w. In other words, we label red particles on vertex 1 first, then label red particles on 2, and continue until we have labelled all r red particles. We similarly label r white particles with the (same) rule that for any pair of vertices v and w, the label of any white particle on vertex v is less than the label of any white particle on vertex w if and only if v<w. A labelled red particle and a labelled white particle are pairs if they have the same label. For every time, we will have two label configurations: one for the red particles and one for the white. Suppose L is such a label configuration for the red particles at a certain time. Then the number of labelled red particles at this time is equal to max{ i: 1≤ i≤ a(m-1)+bn, L(i)≠0}, so in particular, L(i)=0 for any i larger than the number of labelled red particles. There are several aspects of the update rule which require external randomness: in Step 1, to choose which particles make up the lower bounds, in Step 2 to determine whether we proceed with Step 3a or Step 3b, in Step 3a choosing which paired red particles to pinken and how to place the remaining red particles in the pile, in Step 4 how to place the old pink particles, and in Step 5 to place the white particles. To fit the chameleon process into the framework of the graphical construction, we shall use random variables {U^c_i}_i≥1 as the source of the needed randomness with U^c_i used at time τ_i (and we shall not make it explicit how this is done). Further, and importantly, we shall do this in a way such that the randomness used at Step 1 is independent of the randomness used at Step 2 (it is standard that this is possible, see for example <cit.>). The random variables {U^b_i}_i≥1 are used to determine how the black particles move so that they move in the same way as the non-marked particles in the MaBB. 
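Before the formal definition of the process, it may help to gather the single-step bookkeeping of Steps 1–5 in one place. The sketch below (ours; the values of a, b and the ranges of the test configurations are arbitrary) computes the limits ℓ(v), u(v), ℓ^P(v), u^P(v) and the probability θ(v) defined by (<ref>), and checks on random admissible inputs that θ(v)∈[0,1], which is the content of Lemma <ref>; it uses the fact, noted earlier, that P_e,B,B'(v,v)=P_e,B,B'(w,v).

import math, random

def log_beta(x, y):
    return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)

def bb_pmf(n, k, s):
    return math.comb(n, k) * math.exp(log_beta(k + s, n - k + s) - log_beta(s, s))

def reassign_to_v(N, new_v, s):
    # P_{e,B,B'}(y, v), y in {v, w}: as for the MaBB, both values coincide
    return (new_v + 1) / (N + 1) * bb_pmf(N + 1, new_v + 1, s) / bb_pmf(N, new_v, s)

def step_quantities(Bv, Bw, Bpv, Rv, Rw, Pv, Pw, a, b):
    # placement limits and theta(v) for a ringing edge {v, w};
    # Bpv denotes B'(v), the number of black particles on v after the black update
    chi = lambda k: a * k + b
    s = b / a
    N, Bpw = Bv + Bw, Bv + Bw - Bpv
    R, P = Rv + Rw, Pv + Pw
    l_v = max(R - chi(Bpw), 0)
    u_v = min(chi(Bpv), R)
    u_w = R - l_v
    lP_v = max(P - chi(Bpw) + u_w, 0)
    uP_v = min(chi(Bpv) - u_v, P)
    m_star = (Rv + Pv / 2 + Rw + Pw / 2) * reassign_to_v(N, Bpv, s)
    lo, hi = l_v + lP_v / 2, u_v + uP_v / 2
    theta = 0.5 if hi == lo else (hi - m_star) / (hi - lo)
    return l_v, u_v, lP_v, uP_v, m_star, theta

random.seed(2)
a, b = 2, 1                                   # s = 1/2, chi(k) = 2k + 1
for _ in range(10_000):
    Bv, Bw = random.randint(0, 4), random.randint(0, 4)
    Bpv = random.randint(0, Bv + Bw)
    Rv = random.randint(0, a * Bv + b); Pv = random.randint(0, a * Bv + b - Rv)
    Rw = random.randint(0, a * Bw + b); Pw = random.randint(0, a * Bw + b - Rw)
    *_, theta = step_quantities(Bv, Bw, Bpv, Rv, Rw, Pv, Pw, a, b)
    assert -1e-9 <= theta <= 1 + 1e-9         # the limits bracket m*(v), as in the lemma
print("theta(v) lay in [0,1] in every test")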
For independent uniforms U, U' on [0,1], an edge e, particle configurations B of black particles, P of pink particles, and R of red particles, and label configurations L^R for red particles, L^W for white particles, we define C(U,U',e,B,R,P,L^R,L^W) to be a quintuple with the first component equal to MaBB(U,e,B), the second (resp. third) component denoting the configuration of red (resp. pink) particles, and the fourth (resp. fifth) component denoting the label configuration of red (resp. white) particle just after edge e rings if before this edge rang the configuration of black, red and pink was given by B, R and P, the label configuration of red particles was L^R, and of white was L^W, and we use U' as the source of randomness for Steps 1–5 as described above (in practice we shall take U' to be U^c_i for some i≥1). The chameleon process with round length T>0 and associated with a MaBB initialised at (ξ,x) is the quintuple (B_t^C,R_t^C,P_t^C,L^R_t,L^W_t)_t≥0 where B_t^C,R_t^C and P_t^C are particle configurations and L^R_t, L^W_t are label configurations for each t≥0, with the following properties: * (Initial values) B_0^C(v)=ξ(v), R_0^C(v)=χ(ξ(x))δ_x(v), and P_0^C(v)=0, for all v∈ V, L_0^R(i)=x 1≤ i≤ N_0:=χ(ξ(x))∧[a(m-1)+bn-χ(ξ(x))], 0 and L_0^W(i)=min{ℓ∈{1,…,n}∖{x}: ∑_k∈[ℓ]: k≠ xχ(ξ(k))≥ i} 1≤ i≤ N_0, 0 * (Updates during rounds) For each i≥1, (B_τ_i^C,R_τ_i^C,P_τ_i^C,L_τ_i^R,L_τ_i^W) =C(U_i^b,U_i^c,e_i,B_τ_i-^C,R_τ_i-^C,P_τ_i-^C). * (Particle configuration updates at end of rounds) For each i≥1 such that ∑_v∈ VP_iT-^C(v)≥min{∑_v∈ VR_iT-^C(v),∑_v∈ V(χ(B_iT-^C(v))-R_iT-^C(v)-P_iT-^C(v))}, we set B_iT^C(v)=B_iT-^C(v), R_iT^C(v)=R_iT-^C(v)+d_iP_iT-^C(v), P_iT^C(v)=0 for all v∈ V; and if i does not satisfy (<ref>) then we set B_iT^C(v)=B_iT-^C(v), R_iT^C(v)=R_iT-^C(v), P_iT^C(v)=P_iT-^C(v) for all v∈ V. * (Label configuration updates at end of rounds) For each i≥1 we define N_i:=∑_v R_iT^C(v)∧[a(m-1)+bn-∑_v (R_iT^C(v)+P_iT^C(v))] and set L_iT^R(j) =min{ℓ∈[n]: ∑_k=1^ℓ R^C_iT(k)≥ j} 1≤ j≤ N_i, 0 L_iT^W(j) =min{ℓ∈[n]: ∑_k=1^ℓ(χ(B^C_iT(k))-R^C_iT(k)-P^C_iT(k))≥ j} 1≤ j≤ N_i, 0 We can obtain the number of white particles W_t^C(v) at time t on a vertex v using W_t^C(v)+R_t^C(v)+P_t^C(v)=χ(B_t^C(v)). We write 𝒞(m) for the space of possible configurations of the chameleon process in which the underlying MaBB has m-1 non-marked particles. We note from this definition that the process also updates at the ends of rounds, i.e. at times of the form iT for i≥1. At these times if the number of pink particles is at least the number of red or white particles (i.e. if (<ref>) holds), then we have a depinking (and call this time a depinking time) in which all pink particles are removed from the system. To do this, we use the coin flips d_i given in the graphical construction. If time iT is a depinking time then we re-colour all pink particles red simultaneously if d_i=1, otherwise if d_i=0 we re-colour them all white. A simulation of the chameleon process for the first few update times appears after the Appendix. § PROPERTIES OF THE CHAMELEON PROCESS §.§ Evolution of ink In this section we suppose that the chameleon process considered is associated with a MaBB initialised at (ξ,x). The total ink in the system only changes at depinking times. This is a straightforward observation as the only particles that change colour at an update time that is not a depinking are paired red and white particles. But since we colour each in the pair pink, the total ink does not change. 
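To illustrate the label configurations and the end-of-round rule of this definition, here is a brief sketch (ours; the example configurations are arbitrary). The red particle labelled j and the white particle labelled j form the j-th pair, labels being assigned vertex by vertex in increasing vertex order, and a depinking occurs exactly when the number of pink particles is at least the number of red or of white particles.

import numpy as np

def pair_labels(R, W):
    # label configurations at the start of a round: the j-th red and the j-th white
    # (counting vertex by vertex in increasing vertex order) share label j
    n_pairs = min(R.sum(), W.sum())
    LR = np.repeat(np.arange(len(R)), R)[:n_pairs]   # LR[j-1] = vertex of the red with label j
    LW = np.repeat(np.arange(len(W)), W)[:n_pairs]
    return LR, LW

def end_of_round(R, P, W, coin):
    # depink when #pink >= min(#red, #white); the fair coin decides whether all
    # pink particles become red (coin = 1) or white (coin = 0)
    if P.sum() >= min(R.sum(), W.sum()):
        if coin:
            return R + P, np.zeros_like(P), W
        return R, np.zeros_like(P), W + P
    return R, P, W

R = np.array([2, 0, 1, 0]); P = np.array([0, 3, 0, 0]); W = np.array([1, 0, 2, 3])
print(pair_labels(R, W))       # reds on vertices 0, 0, 2 are paired with whites on 0, 2, 2
print(end_of_round(R, P, W, coin=1))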
Let _j denote the ink in the system just after the jth depinking time and D_j the time of the jth depinking. The process {_j}_j≥1 evolves as a Markov chain; the following result gives its transition probabilities. This result is similar to <cit.> for the chameleon process used there. For j∈N, _j+1∈{_j-Δ(_j),_j+Δ(_j)} a.s., where for each r∈N, Δ(r):=⌈min{r,a(m-1)+bn-r}/3⌉. Moreover, conditionally on {_ℓ}_ℓ=0^j, each possibility has probability 1/2. Fix j∈N. After each depinking is performed there are no pink left in the system, and so _j is equal to the number of red particles at time D_j, |R_D_j^C|=∑_vR_D_j^C(v). As the number of non-black particles is fixed at a(m-1)+bn, it follows that the number of white particles at time D_j is |W_D_j^C|=∑_vW_D_j^C(v)=a(m-1)+bn-_j. Observe that every time a red and white particle pair are pinkened, we lose one red and one white, and gain two pink particles. It can be easily checked that for p and s positive integers with p even, p<s⇔⌈ (s+p/2)/3⌉-p/2>0. In other words, while the number of pink particles remains less than the minimum of the number of red and white, the chameleon process will still create new pink particles (recall the number of pink particles created in Step 3a of the chameleon process); conversely, the chameleon process will stop producing new pink particles as soon as the number of pink particles is at least the minimum of the number of red and white particles. Moreover, once it stops producing new pink particles, the number of pink created is the smallest number which ensures that the number of pink is at least the number of red or white; we can see this by observing that p+2(⌈ (s+p/2)/3⌉-p/2)=2⌈ (s+p/2)/3⌉ is the smallest even integer which is at least s-(⌈ (s+p/2)/3⌉-p/2)=(s+p/2)-⌈ (s+p/2)/3⌉. Thus the number of pink particles created just before the next depinking time (at time D_j+1) is the smallest p even satisfying p≥ |W_D_j^C|-p/2 or p≥ |R_D_j^C|-p/2, which is p=2Δ(_j). At the depinking time D_j+1, the pink particles either all become white (and _j+1=_j-Δ(_j)) or they all become red (and _j+1=_j+Δ(_j)). Which event happens depends just on the outcome of the independent fair coin flip d_j+1. The total ink in the system is a martingale and is absorbed in finite time in either 0 or a(m-1)+bn. Further, the event Fill:={lim_t→∞_t=a(m-1)+bn} has probability χ(ξ(x))/(a(m-1)+bn). The fact that total ink is a martingale follows from Lemma <ref> and the behaviour of the chameleon process at depinking times. The probability of event Fill then follows by the martingale property and the dominated convergence theorem (total ink is bounded by a(m-1)+bn), as in the proof of Lemma 7.1 of <cit.>. For ζ∈Ω_G,m-1 and t≥0, P({B_t^C=ζ}∩Fill)=P(B_t^C=ζ)P(Fill)=P(B_t^C=ζ)χ(ξ(x))/a(m-1)+bn. This follows from Lemma <ref> and the fact that event Fill only depends on the outcomes of the coin flips {d_i}_i whereas the movement of the black particles is independent of these coin flips. For all t≥0 and v∈ V, _t(v)≤χ(B^C_t(v)). This follows simply from the fact that the number of non-black particles on a vertex with B black particles is always χ(B). This is true at time 0, and Steps 1 to 5 guarantee this at update times which are not depinkings. Finally, at depinking times we do not change the number of particles on vertices, only their colour. Observe also that _t(v)=χ(B^C_t(v)) if at time t all non-black particles on v are red. 
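The two-point dynamics of this lemma, together with the martingale property noted above, already determine the probability of Fill. The short simulation below (ours; the parameter values are arbitrary) runs the ink chain between depinkings to absorption and compares the empirical absorption probability at a(m-1)+bn with χ(ξ(x))/(a(m-1)+bn).

import math, random

def run_ink_chain(r0, total, rng):
    # simulate the ink values between depinkings: I -> I +/- Delta(I), each with probability 1/2,
    # where Delta(r) = ceil(min(r, total - r) / 3), until absorption in {0, total}
    r = r0
    while 0 < r < total:
        delta = math.ceil(min(r, total - r) / 3)
        r += delta if rng.random() < 0.5 else -delta
    return r

rng = random.Random(3)
a, b, m, n = 2, 1, 10, 6
total = a * (m - 1) + b * n            # total number of non-black particles
r0 = a * 3 + b                         # chi(xi(x)) if the marked vertex holds 3 non-marked particles
runs = 100_000
hits = sum(run_ink_chain(r0, total, rng) == total for _ in range(runs))
print(hits / runs, "vs", r0 / total)   # empirical P(Fill) against chi(xi(x)) / (a(m-1)+bn)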
The next result shows that, during a single round and until they meet, a pair of paired red-white particles move (marginally) as independent random walks on the graph, which stay in place with probability 1/2 when an incident edge rings. For two independent random walks X,Y on a graph G (each of which move by jumping from their current vertex v to a neighbour w when edge {v,w} rings), we write M^X,Y for their meeting time – the first time they are on neighbouring vertices, and the edge between them rings for one of the walks (they each have their own independent sequence of edge-rings). If the walks start on the same vertex, we say their meeting time is 0. We let Ĝ denote the graph (V,E,{r_e/2}_e∈ E), that is, we halve the rates on the edges of graph G. Fix u,v∈ V, u≠ v, and i∈N_0. Let X and Y be independent random walks on Ĝ with X_0=u, Y_0=v. For any 1≤ j≤∑_vR_iT^C(v)∧∑_vW_iT^C(v), conditionally on L_iT^R(j)=u and L_iT^W(j)=v, for all t∈[iT,iT+{T∧ M^X,Y}), we have (L_t^R(j),L_t^W(j))d=(X_t-iT,Y_t-iT). We make use of Property <ref>. Suppose edge e={v,w} rings during time interval [iT,(i+1)T) and the black particles update from configuration B. Suppose B' is a possible configuration of the black particles as a result of the update. Let B̃ be the configuration of black particles with B̃(v)=B'(w), B̃(w)=B'(v) and for z∉ e, B̃(z)=B'(z)=B(z). As black particles update as non-marked particles in MaBB, B' and B̃ are equally likely to be the configuration of black particles after the update, by Property <ref>. We claim that the probability that a labelled red particle (similarly labelled white particle) will be on v after the update if configuration B' is chosen as the new black configuration is the same as the probability the same labelled red particle (respectively, labelled white particle) will be on w if configuration B̃ is chosen. This will suffice since prior to meeting, a paired red and white particle will never be on the same ringing edge. This claim will follow from showing that ℓ(v)=ℓ̃(w), ℓ^P(v)=ℓ̃^P(w), u(v)=ũ(w), u^P(v)=ũ^P(w) and θ(v)=θ̃(w), where the notation with tilde refers to the update in which B̃ is chosen, and notation without the tilde to the update in which B' is chosen. The identities regarding the lower and upper values are immediate from their definitions. To show θ(v)=θ̃(w), observe that θ̃(v)[ℓ̃(v)+1/2ℓ̃^P(v)]+(1-θ̃(v))[ũ(v)+1/2ũ^P(v)] =(R(v)+1/2 P(v))P_e,B,B̃(v,v)+(R(w)+1/2 P(w))P_e,B,B̃(w,v). But by Property <ref>, we have P_e,B,B̃(v,v) =B̃(v)+1/B(v)+B(w)+1P_e^BB(G,s,m)(C_B,v,C_B̃,v)/P_e^BB(G,s,m)(B,B̃) = B'(w)+1/B(v)+B(w)+1P_e^BB(G,s,m)(C_B,v,C_B',w)/P_e^BB(G,s,m)(B,B') =P_e,B, B'(v,w), and similarly P_e,B,B̃(w,v)=P_e,B,B'(w,w). Plugging these into (<ref>) shows that θ̃(v) solves the same equation as θ(w), hence they are equal; similarly θ(v)=θ̃(w). §.§ From ink to total variation In this section we show a crucial connection between the MaBB initialised at (ξ,x) and its associated chameleon process. To emphasise the dependence of _t on the initial configuration of the MaBB, we shall sometimes write it as _t^(ξ,x). Let (ξ_t,m_t) denote the time-t configuration of a MaBB initialised at (ξ,x)∈Ω'_G,m. For every t≥0 and (ζ,y)∈Ω'_G,m, P((ξ_t,m_t)=(ζ,y))=E[_t^(ξ,x)(y)/χ(ξ(x))B_t^C=ζ]. The proof of Proposition <ref> is similar in spirit to the proof of Lemma 1 of <cit.>. We introduce a new process M^* which will also be constructed using the graphical construction. 
This process is similar to the chameleon process in that vertices are occupied by particles of various colours (black, red, pink and white). Like in the chameleon process, if there are B black particles on a vertex, then there are χ(B) non-black particles. The process M^* evolves exactly as the chameleon process except we replace Step 3a with Step 3a^', described below. Further, M^* does not have any updates at the ends of rounds (so in particular no depinking times). As a result the number of red, white and pink particles remain constant over time. We use the same terminology (e.g. ink) for process M^*. Step 3a^': Any red particles left in the pile are each independently placed onto v or w equally likely. It can be shown (following the same proof) that Lemma <ref> holds also for M^*: For any v,w∈ V, B,R,P initial configurations of black, red and pink particles, and B' the configuration of black particles just after the first update (at time τ_1), E^M^*[_τ_1(v)| B, B', R, P, {e_1={v,w}}]=m^*(v). Fix (ξ,x)∈Ω'_G,m, random variable _0(y) taking values in [0,χ(ξ(y))]∩(N_0/2) for each y∈ V, and denote by (ξ_t,m_t) the time-t configuration of a MaBB which starts from a random configuration (ξ_0,m_0) satisfying almost surely ∀ y∈ V P(m_0=y|ξ_0)=E^M^*[_0(y)/χ(ξ(x))| B_0^C], where M^* starts with configuration of black particles B_0^C=ξ_0 and with initial ink value of _0(y) at each y∈ V. Then for all t≥ 0, almost surely ∀ y∈ V P(m_t=y| (ξ_s)_0≤ s≤ t)=E^M^*[_t(y)/χ(ξ(x))| (B_s^C)_0≤ s≤ t]. As (ξ_s)_s≥ 0 and (B^C_s)_s≥0 are constructed using the same (U_r^b)_r=1^∞, they are equal almost surely. It suffices to show the statement at the update times. We shall use induction. The base case (time τ_0=0) follows from the assumption. Fix r∈N and suppose the result holds up to (and including) time τ_r-1. Observe that by the strong Markov property and Lemma <ref> (and recall the choice of θ from (<ref>) and also that B^C_τ_r=MaBB(U_r^b,e_r,B^C_τ_r-1)), for any y∈ V, almost surely E^M^*[_τ_r(y)| U_r^b,e_r,B^C_τ_r-1,R_τ_r-1^C,P_τ_r-1^C] =y∈ e_r{[R_τ_r-1^C(y)+1/2 P_τ_r-1^C(y)] P_e_r,B^C_τ_r-1,B^C_τ_r(y,y) =+[R_τ_r-1^C(e_r∖{y})+1/2 P_τ_r-1^C(e_r∖{y})] P_e_r,B^C_τ_r-1,B^C_τ_r(e_r∖{y},y)} +y∉ e_r_τ_r-1(y) =y∈ e_r{_τ_r-1(y)P_e_r,B^C_τ_r-1,B^C_τ_r(y,y)+_τ_r-1(e_r∖{y})P_e_r,B^C_τ_r-1,B^C_τ_r(e_r∖{y},y)} =+y∉ e_r_τ_r-1(y). Taking an expectation, the first term above becomes E^M^*[y∈ e_r_τ_r-1(y)P_e_r,B^C_τ_r-1,B^C_τ_r(y,y)|(B^C_s)_s≤τ_r] =E^M^*[E^M^*[y∈ e_r_τ_r-1(y)P_e_r,B^C_τ_r-1,B^C_τ_r(y,y)|e_r, (B^C_s)_s≤τ_r]|(B^C_s)_s≤τ_r] =E^M^*[y∈ e_rP_e_r,B^C_τ_r-1,B^C_τ_r(y,y)E^M^*[_τ_r-1(y)|(B^C_s)_s≤τ_r-1]|(B^C_s)_s≤τ_r] =E^M^*[χ(ξ(x))P(m_τ_r-1=y| (ξ_s)_s≤τ_r-1) y∈ e_rP_e_r,B^C_τ_r-1,B^C_τ_r(y,y)|(B^C_s)_s≤τ_r], using in the penultimate step that almost surely E^M^*[_τ_r-1(y)|e_r, (B^C_s)_s≤τ_r]=E^M^*[_τ_r-1(y)|(B^C_s)_s≤τ_r-1], since B^C_τ_r=MaBB(U_r^b,e_r,B^C_τ_r-1) and _τ_r-1(y) is independent of e_r and U_r^b; and using the induction hypothesis in the last step. Similarly, E^M^*[y∈ e_r_τ_r-1(e_r∖{y})P_e_r,B^C_τ_r-1,B^C_τ_r(e_r∖{y},y)|(B^C_s)_s≤τ_r] =E^M^*[χ(ξ(x))P(m_τ_r-1=e_r∖{y}| (ξ_s)_s≤τ_r-1, e_r) y∈ e_rP_e_r,B^C_τ_r-1,B^C_τ_r(e_r∖{y},y)|(B^C_s)_s≤τ_r], and thus E^M^*[_τ_r(y)/χ(ξ(x))| (B^C_s)_s≤τ_r] =E^M^*[P(m_τ_r-1=y| (ξ_s)_s≤τ_r-1)[y∈ e_rP_e_r,ξ_τ_r-1,ξ_τ_r(y,y)+y∉ e_r] =E[+y∈ e_rP(m_τ_r-1=e_r∖{y}| (ξ_s)_s≤τ_r-1, e_r) P_e_r,ξ_τ_r-1,ξ_τ_r(e_r∖{y},y)|(ξ_s)_s≤τ_r]. 
On the other hand, using the definition of MaBB^* from (<ref>), P(m_τ_r=y|(ξ_s)_s≤τ_r) =P(MaBB^*(U_r^b,U^c_r,e_r,ξ_τ_r-1,m_τ_r-1)=y|(ξ_s)_s≤τ_r) =P(m_τ_r-1[m_τ_r-1∉ e_r+m_τ_r-1∈ e_rU_r^c<P_e_r,ξ_τ_r-1,ξ_τ_r(m_τ_r-1,m_τ_r-1)] =P(+(e_r∖{m_τ_r-1})m_τ_r-1∈ e_rU_r^c≥P_e_r,ξ_τ_r-1,ξ_τ_r(m_τ_r-1,m_τ_r-1)=y|(ξ_s)_s≤τ_r) =P({m_τ_r-1=y∈ e_r}∩{U_r^c<P_e_r,ξ_τ_r-1,ξ_τ_r(y,y)}|(ξ_s)_s≤τ_r) =+P({m_τ_r-1=e_r∖{y}, y∈ e_r}∩{U_r^c<P_e_r,ξ_τ_r-1,ξ_τ_r(e_r∖{y},y)}|(ξ_s)_s≤τ_r) =+P({m_τ_r-1=y∉ e_r}|(ξ_s)_s≤τ_r). Using the tower property of conditional expectation we condition further on e_r, and then use that given e_r and (ξ_s)_s≤τ_r, event {U_r^c<P_e_r,ξ_τ_r-1,ξ_τ_r(y,y)} is independent of event {m_τ_r-1=y}∩{y∈ e_r}, to obtain P(m_τ_r=y|(ξ_s)_s≤τ_r) =E[E[m_τ_r-1=y(y∈ e_rU_r^c<P_e_r,ξ_τ_r-1,ξ_τ_r(y,y)+y∉ e_r)|(ξ_s)_s≤τ_r, e_r]| (ξ_s)_s≤τ_r] =+E[E[y∈ e_rm_τ_r-1=e_r∖{y}U_r^c<P_e_r,ξ_τ_r-1,ξ_τ_r(e_r∖{y},y)|(ξ_s)_s≤τ_r, e_r]| (ξ_s)_s≤τ_r] =E[E[m_τ_r-1=y[y∈ e_rP_e_r,ξ_τ_r-1,ξ_τ_r(y,y)+y∉ e_r]| (ξ_s)_s≤τ_r,e_r]| (ξ_s)_s≤τ_r] =+E[E[y∈ e_rm_τ_r-1=e_r∖{y}P_e_r,ξ_τ_r-1,ξ_τ_r(e_r∖{y},y)| (ξ_s)_s≤τ_r,e_r]| (ξ_s)_s≤τ_r] =E[P^MaBB(m_τ_r-1=y| (ξ_s)_s≤τ_r-1) [y∈ e_rP_e_r,ξ_τ_r-1,ξ_τ_r(y,y)+y∉ e_r] =E[+y∈ e_rP^MaBB(m_τ_r-1=e_r∖{y}| (ξ_s)_s≤τ_r-1, e_r) P_e_r,ξ_τ_r-1,ξ_τ_r(e_r∖{y},y)|(ξ_s)_s≤τ_r]. which agrees with (<ref>) and so completes the inductive step. We now turn to the proof of Proposition <ref>. We shall need a list of times at which updates occur for the chameleon process; recall that the chameleon process updates at times {τ_r}_r≥1 but also at depinking times. To this end, we set τ̂_0=0 and for each r≥1, we set τ̂_r=(min{τ_m: τ_m>τ̂_r-1})∧(min{D_i: D_i> τ̂_r-1, i∈N}). Similarly a hat placed on notation (e.g. ê_r) refers to the (in this example) edge chosen at time τ̂_r. If this is a depinking time then we set ê_r=V. Next, for each r≥1 we introduce process (M_t^r)_t≥0 which is constructed using the graphical construction. Each of these processes is a process in which vertices are occupied by particles of various colours, and we initialise them all with the initial configuration of the chameleon process. Prior to time τ̂_r, process M^r evolves exactly as the chameleon process; at and after time τ̂_r it evolves as M^* (so in particular there are no more changes to the colours of particles). Note that in all these processes the black particles have the same trajectory and this matches the trajectory of the non-marked particles in the MaBB. Note also that M^1 is identical to M^*. We shall prove by induction on r that for all r≥1, ∀ t>0, y∈ V P(m_t=y|ξ_t)=E^M^r[_t^(ξ,x)(y)/χ(ξ(x))| B^C_t] . This will prove the proposition since the chameleon process is the almost sure limit of M^r as r→∞. The case r=1 follows from Lemma <ref> since _0^(ξ,x)(y)=χ(ξ(x)) if y=x and otherwise _0^(ξ,x)(y)=0 (thus the assumption of the lemma holds). We fix r'∈N_0, assume (<ref>) holds for r=r' and show it holds for r=r'+1. Observe that before time τ̂_r', M^r'+1=M^r' so for t<τ̂_r', for all y, almost surely P(m_t=y|ξ_t) =E^M^r'[_t^(ξ,x)(y)/χ(ξ(x))| B^C_t]=E^M^r'+1[_t^(ξ,x)(y)/χ(ξ(x))| B^C_t]. After time τ̂_r', M^r'+1 evolves as M^*; so assuming that for all y∈ V, P(m_τ̂_r'=y|ξ_τ̂_r')=E^M^r'+1[^(ξ,x)_τ̂_r'(y)/χ(ξ(x))| B^C_τ̂_r'], then by Lemma <ref> we have that for all t>τ̂_r', for all y∈ V, P(m_t=y|(ξ_s)_τ̂_r'≤ s≤ t)=E^M^r'+1[^(ξ,x)_t(y)/χ(ξ(x))| (B^C_s)_τ̂_r'≤ s≤ t]. The inductive step is then complete by taking an expectation and using that black particles have the same trajectory as the non-marked, almost surely. 
Thus it remains to prove (<ref>). We fix y∈ V and decompose according to three events, which partition the probability space: * E_1:=⋃_i≥1{y∉ê_r'}∩{τ̂_r'=τ_i} (the update is not a depinking time and y is not on the ringing edge) * E_2:=⋃_i≥1{y∈ê_r'}∩{τ̂_r'=τ_i} (the update is not a depinking time but y is on the ringing edge) * E_3:=⋃_i≥1{τ̂_r=D_i} (the update is a depinking time) On event E_1, as y is not on a ringing edge at time τ̂_r', the value of _t^(ξ,x)(y) does not change at time τ̂_r' in either of the processes M^r' or M^r'+1; since they agree prior to this time, we deduce that almost surely E^M^r'+1[^(ξ,x)_τ̂_r'(y)/χ(ξ(x))E_1| B^C_τ̂_r']=E^M^r'[^(ξ,x)_τ̂_r'(y)/χ(ξ(x))E_1| B^C_τ̂_r']. On event E_2, we may pinken some particles at time τ̂_r' in process M_r'+1. Nevertheless, by Lemmas <ref> and <ref> (and again since the processes agree prior to this time), we see that their expected ink values agree, i.e. E^M^r'+1[_τ̂_r'(y)/χ(ξ(x))E_2| B^C_τ̂_r']=E^M^r'[_τ̂_r'(y)/χ(ξ(x))E_2| B^C_τ̂_r']. Finally, on event E_3, M^r' does not update. On the other hand, almost surely E^M^r'+1[_τ̂_r'(y)/χ(ξ(x))E_3| B^C_τ̂_r'] =∑_i=1^∞E^M^r'+1[_τ̂_r'(y)/χ(ξ(x))τ̂_r'=D_i| B^C_τ̂_r'] =∑_i=1^∞E^M^r'+1[{d_i=1(R^C_τ̂_r'-1(y)+P^C_τ̂_r'-1(y)/χ(ξ(x)))+d_i=0R^C_τ̂_r'-1(y)/χ(ξ(x))}τ̂_r'=D_i| B^C_τ̂_r'] =∑_i=1^∞E^M^r'+1[R^C_τ̂_r'-1(y)+1/2P^C_τ̂_r'-1(y)/χ(ξ(x))τ̂_r'=D_i| B^C_τ̂_r'] =E^M^r'+1[R^C_τ̂_r'-1(y)+1/2P^C_τ̂_r'-1(y)/χ(ξ(x))E_3| B^C_τ̂_r'] =E^M^r'+1[_τ̂_r'-1(y)/χ(ξ(x))E_3| B^C_τ̂_r'] =E^M^r'[_τ̂_r'(y)/χ(ξ(x))E_3| B^C_τ̂_r']. Putting together equations (<ref>)–(<ref>) and using that E_1,E_2,E_3 form a partition, we obtain that for each y∈ V, E^M^r'+1[_τ̂_r'(y)/χ(ξ(x))| B^C_τ̂_r']=E^M^r'[_τ̂_r'(y)/χ(ξ(x))| B^C_τ̂_r'], and thus by the inductive hypothesis, we have shown (<ref>). Next, we show how Proposition <ref> can be used to bound the total variation distance between two MaBB configurations in terms of the total amount of ink in the chameleon process. Recall from (<ref>) the law π_ζ for ζ∈Ω_G,m-1 and denote by m̃_t a random variable which, conditionally on ξ_t=ζ, has law π_ζ. Recall also the definition of event Fill from Lemma <ref>. Let (ξ_t,m_t) denote the time-t configuration of a MaBB initialised at (ξ,x)∈Ω'_G,m. For any t>0, ℒ((ξ_t,m_t))-ℒ((ξ_t,m̃_t))_TV≤ 1- E[_t^(ξ,x)/a(m-1)+bn|Fill]. This is similar to the proof of Lemma 8.1 of <cit.>. By Proposition <ref>, for any (ζ,y)∈Ω'_G,m, P(ξ_t=ζ,m_t=y)=E[_t^(ξ,x)(y)/χ(ξ(x))B^C_t=ζ]≥E[_t^(ξ,x)(y)/χ(ξ(x)){B^C_t=ζ}∩Fill]. On the other hand, using that B^C_t and ξ_t have the same distribution and Corollary <ref>, P(ξ_t=ζ, m̃_t=y)=π_ζ(y)P(ξ_t=ζ)=π_ζ(y)/P(Fill)P({B^C_t=ζ}∩Fill). We deduce that (P(ξ_t=ζ, m̃_t=y)-P(ξ_t=ζ, m_t=y))_+ ≤(E[{B^C_t=ζ}∩Fill(π_ζ(y)/P(Fill)-_t^(ξ,x)(y)/χ(ξ(x)))])_+. Observe that on event {B^C_t=ζ}, we have ^(ξ,x)_t(y)≤χ(ζ(y)) by Lemma <ref>, and so (on this event), ^(ξ,x)_t(y)/χ(ξ(x))≤χ(ζ(y))/χ(ξ(x))=χ(ζ(y))/(a(m-1)+bn)P(Fill)=π_ζ(y)/P(Fill), where the first equality is due to Corollary <ref> and the second from the definition of the colour function χ. As a result we deduce from (<ref>) that (P(ξ_t=ζ, m̃_t=y)-P(ξ_t=ζ, m_t=y))_+≤E[{B^C_t=ζ}∩Fill(π_ζ(y)/P(Fill)-^(ξ,x)_t(y)/χ(ξ(x)))]. We take a sum over y followed by ζ to obtain ℒ((ξ_t,m_t))-ℒ((ξ_t,m̃_t))_TV ≤E[Fill(1/P(Fill)-^(ξ,x)_t/χ(ξ(x)))] =1-P(Fill) E[_t^(ξ,x)/χ(ξ(x))|Fill] =1- E[_t^(ξ,x)/a(m-1)+bn|Fill], using Lemma <ref> in the last step. Recall from Section <ref> that for each ℓ∈N, _ℓ denotes the value of ink just after the ℓth depinking time. 
We write _ℓ^(ξ,x) to emphasise the dependence on the initial configuration of the corresponding MaBB. Fix (ξ,x)∈Ω'_G,m. For each ℓ≥1, 1-E[_ℓ^(ξ,x)/a(m-1)+bn|Fill]≤ (71/72)^ℓ√(a(m-1)+bn). We omit the proof (which uses Lemma <ref>) of this result since it is identical to the proof of Proposition 6.1 in <cit.>, except that here ink can take values in {0,…,a(m-1)+bn} (in contrast with <cit.> in which ∈{0,…,n}). § EXPECTED LOSS OF RED IN A ROUND In this section we show that during a single round (which starts with fewer red particles than white) the number of red particles decreases in expectation by a constant factor. Let M_i,j(G) denote the meeting time of two independent random walks started from vertices i and j on G and recall that M̂_i,j(G) denotes the meeting time of two independent random walks started from vertices i and j on the graph obtained from G by halving the edge-weights, that is, M̂_i,j(G)=M_i,j(Ĝ). Consider a slight modification to the chameleon process in which we replace the number of selected particles (<ref>) in Step 2 with k, that is, we allow all paired reds particles to be pinkened. We call this the modified chameleon process. Suppose the modified chameleon process starts a round with red configuration R, white configuration W and black configuration B such that |R|≤ |W|. If the round length T satisfies T≥ 2max_i,jEM̂_i,j(G) then E[|R_T-^C|]≤ (1-c)|R|, with c=(p^*)^2/4a. If instead |W|≤ |R| then we have an equivalent result: E[|W_T-^C|]≤ (1-c)|W|. We shall only count pinkenings between paired red and white particles which get coloured pink the first time they meet (if they do) during the round. (This means that we do not have to worry about how the particles move after their first meeting time – they no longer move independently once they meet.) Since we assume |R|≤ |W|, all red particles will have a label in {1,…,|R|}. Let M^r denote the meeting time of red particle with label r with its paired white particle; this is the first time the two particles are on the same ringing edge. If two paired particles start the round on the same vertex we set their meeting time to be the first time this vertex is on a ringing edge. For each s∈N write F_s(r) for the event that a red particle with label r remains in the pooled pile after Step 1 of the update at time τ_s (if red particle with label r is not on edge e_r at time τ_r-, we set F_r(s)=∅), and write G_s for the event that we do Step 3a (rather than Step 3b) at the update at time τ_s. We also write e_s^1, e_s^2 for the two vertices on edge e_s (in an arbitrary order), u_s(e_s^1) and ℓ_s(e_s^1) for the values of u(e_s^1) and ℓ(e_s^2) at the update at time τ_s, and θ_s(e_s^1) for the probability θ(e_s^1) at the update time τ_s. We lower-bound the expected number of pink particles created during a single round (which has length T) of the modified chameleon process in which at the start of the round the configuration of red particles is R by 2E[∑_r=1^|R|∑_s=1^∞M^r=τ_s<TF_s(r)G_s] =2∑_r=1^|R|∑_s=1^∞E[M^r=τ_s<TP(F_s(r)∩ G_s|τ_s, M^r)]. Observe that conditionally on the configuration of the chameleon process at time τ_s- and the configuration of black particles at time τ_r, F_s(r) and G_s are independent since F_s(r) depends further only on the randomness at Step 1, and G_s the randomness at Step 2 (and we have constructed the chameleon process so that these are independent). Therefore we have almost surely P(F_s(r)∩ G_s|τ_s, M^r)=E[2 (θ_s(e_s^1)∧ (1-θ_s(e_s^1))F_s(r)|τ_s, M^r). 
Next, for each s∈N, we introduce an event A_s which: * has probability p^* (recall this constant comes from Property <ref>), * prescribes only the value that U_s^b takes, * on event A_s, for each i∈{1,2}, given e_s and B^C_τ_s-, configuration B^C_τ_s satisfies almost surely * P_e_s,B^C_τ_s-,B^C_τ_s(e_s^i,e_s^i)∈[p^*∧2/9,(1-p^*)∨7/9], * χ(B^C_τ_s(e_s^1))/χ(B^C_τ_s(e_s^2))∈ [1/(2a),2a]. The fact that such an event exists is shown in Lemma <ref>. As A_s only prescribes U_s^b, it is independent of events {M^r=τ_s} and {τ_s<T} (which do not depend on U_s^b). Thus from (<ref>) and by Lemma <ref> we have P(F_s(r)∩ G_s|τ_s, M^r)≥ 2(2/9∧ p^*) p^* P(F_s(r)|τ_s, M^r, A_s)≥4/3(p^*)^2 P(F_s(r)|τ_s, M^r, A_s), using p^*<1/3. We also have that P(F_s(r)|τ_s, M^r, A_s)≥ 1/(2a) almost surely. This follows from Lemma <ref> by first conditioning on the configuration of the chameleon process at time τ_s-, since given this, F_s(r) is independent of M^r and τ_s. Thus our lower-bound on the expected number of pink particles created becomes 4/3a(p^*)^2 ∑_r=1^|R|∑_s=1^∞P(M^r=τ_s<T)≥(p^*)^2/a|R|min_rP(M^r<T). For a red and white pair (with label r) on the same vertex v, say, at the start of the round, M^r is the first time v is on a ringing edge. Suppose w is a neighbour of v (chosen arbitrarily) and recall M_v,w(Ĝ) is the meeting time of two random walks on Ĝ started from vertices v and w respectively. Then P(M^r< T)≥1/2P(M_v,w< T), since for the random walks to meet, vertex v must be on a ringing edge for at least one of the two random walk processes. Then by Markov's inequality, we have in this case that P(M^r< T)≥1/2(1-max_i,jEM_i,j(Ĝ)/T). On the other hand, for a red and white pair which start the round on different vertices, we can directly apply Markov's inequality to obtain P(M^r< T)≥ 1-max_i,jEM_i,j(Ĝ)/T. Thus if T≥2 max_i,jEM̂_i,j(G) then for any r, P(M^r< T)≥ 1/4. Hence this completes the proof with c=(p^*)^2/4a. For each s∈N, there exists an event A_s satisfying properties 1–3 above. We define event A_s={U_s^b≤ p^*}, which clearly has probability p^* and only prescribes the value that U_s^b takes. Recall from the discussion in Section <ref> (in particular (<ref>)) that U_s^b≤ p^* implies that if there are at least two non-marked particles on e_s then a proportion in [1/3,2/3] of the non-marked particles on the edge end up on each vertex on e_s (at time τ_s). Thus on event A_s (and as black particles in the chameleon process move as non-marked particles in MaBB), almost surely, B_τ_s^C(e_s^1) ≥1/3∑_i=1^2B_τ_s-^C(e_s^i)≥ 2∑_i=1^2B_τ_s-^C(e_s^i), B_τ_s^C(e_s^2) ≤2/3∑_i=1^2B^C_τ_s-(e_s^i)≥ 2∑_i=1^2B_τ_s-^C(e_s^i)+∑_i=1^2B_τ_s-^C(e_s^i)=1 ≤2/3∑_i=1^2B^C_τ_s-(e_s^i)≥ 2∑_i=1^2B^C_τ_s-(e_s^i)+1. Thus on event A_s, almost surely, χ(B^C_τ_s(e_s^1))/χ(B^C_τ_s(e_s^2)) ≥γ+b(1-γ)-aγ/2a/3∑_i=1^2B_τ_s-^C(e_s^i)≥ 2∑_i=1^2B_τ_s-^C(e_s^i)+a+b ≥γ, for any γ≤ 1/2 provided b(1-γ)≥ aγ. We similarly have χ(B^C_τ_s(e_s^2))/χ(B^C_τ_s(e_s^1))≥γ under the same condition. This condition is satisfied taking γ=1/(2a) (and this is indeed ≤ 1/2 as a≥ 1). Finally, it remains to show that for each i∈{1,2} we have P_e_s,B^C_τ_s-,B^C_τ_s(e_s^i,e_s^i)∈[p^*∧2/9,(1-p^*)∨7/9] on event A_s, almost surely. This is the probability that in the MaBB process, if the marked particle is on vertex e_s^i, it remains on vertex e_s^i given the non-marked particles update from configuration B^C_τ_s- to B^C_τ_s when edge e_s rings. Suppose ∑_j=1^2 B^C_τ_s-(e_s^j)≥ 2, i.e. before the update there are at least 2 black particles on e_s. 
For y∈ e_s, write m_s(y)∈{0,1} for the number of marked particles on y after the update at time τ_s. On event A_s, for each i∈{1,2} we have B^C_τ_s(e_s^i)∈[1/3∑_j=1^2 B^C_τ_s-(e_s^j),2/3∑_j=1^2 B^C_τ_s-(e_s^j)] and thus B^C_τ_s(e_s^i)+m_s(e_s^i) ∈[1/3∑_j=1^2 B^C_τ_s-(e_s^j),2/3∑_j=1^2 B^C_τ_s-(e_s^j)+1] =[1/3(∑_j=1^2 B^C_τ_s-(e_s^j)+1)-1/3,2/3(∑_j=1^2 B^C_τ_s-(e_s^j)+1)+1/3] ⊆[2/9(∑_j=1^2 B^C_τ_s-(e_s^j)+1),7/9(∑_j=1^2 B^C_τ_s-(e_s^j)+1)]. Now recall (from the discussion after (<ref>)) the description of the MaBB process in which we remove the mark on the marked particle, then update as the BBSP, and then choose a uniform particle on the edge on which to apply the mark. Together with the just-determined bound on the number of particles on e_s^i, this tells us that the probability the marked particle is on e_s^i after the update is in [2/9,7/9]. Suppose now that ∑_j=1^2 B^C_τ_s-(e_s^j)=1. Recall the definition of P_e,B^C_τ_s-,B^C_τ_s(e_s^i,e_s^i) as P_e,B^C_τ_s-,B^C_τ_s(e_s^i,e_s^i) :=B^C_τ_s(e_s^i)+1/B^C_τ_s(e_s^1)+B^C_τ_s(e_s^2)+1P_e^BB(G,s,m)(C_B^C_τ_s-,e_s^i,C_B^C_τ_s,e_s^i)/P_e^BB(G,s,m-1)(B^C_τ_s-,B^C_τ_s) =B^C_τ_s(e_s^i)+1/2P_e^BB(G,s,m)(C_B^C_τ_s-,e_s^i,C_B^C_τ_s,e_s^i)/1/2 =(B^C_τ_s(e_s^i)+1)P_e^BB(G,s,m)(C_B^C_τ_s-,e_s^i,C_B^C_τ_s,e_s^i), where we have used Property <ref> to obtain P_e^BB(G,s,m-1)(B^C_τ_s-,B^C_τ_s)=1/2. BBSP configuration C_B^C_τ_s-,e_s^i has two particles, thus by Property <ref> and the second part of Property <ref>, P_e^BB(G,s,m)(C_B^C_τ_s-,e_s^i,C_B^C_τ_s,e_s^i)≥ p^*, and so P_e,B^C_τ_s-,B^C_τ_s(e_s^i,e_s^i)≥ p^*. If B^C_τ_s(e_s^i)=0 (so that the non-marked particle and the marked particle end up on different vertices) we have (again by Property <ref>) P_e,B^C_τ_s-,B^C_τ_s(e_s^i,e_s^i)≤ 1-2p^*, whereas if B^C_τ_s(e_s^i)=1, then by Properties <ref> and <ref>, P_e,B^C_τ_s-,B^C_τ_s(e_s^i,e_s^i)≤ (B^C_τ_s(e_s^i)+1)1-p^*/2=1-p^*. Finally, if B^C_τ_s-(e_s^i)=0, then a marked particle on e_s^i stays on e_s^i at the update time τ_s with probability 1/2 by Property <ref>. Thus in all cases we have that on event A_s, almost surely P_e_s,B^C_τ_s-,B^C_τ_s(e_s^i,e_s^i)∈[p^*∧2/9, (1-p^*)∨7/9]. § PROOF OF THEOREM <REF> We can now put together the results obtained so far and complete the proof of Theorem <ref>. These arguments are similar to those in previous works using a chameleon process. The next result bounds the first depinking time D_1. We wish to apply this result for any of the depinking times, and so we present the result in terms of a chameleon process started from any configuration in 𝒞(m). In reality, a chameleon process at time 0 will always have all red particles on a single vertex, as is apparent from Definition <ref>. If the round length T satisfies T≥ 2max_i,jEM̂_i,j(G), then from any initial configuration in 𝒞(m) of the (non-modified) chameleon process, the first depinking time has an exponential moment: E[e^D_1/(KT)]≤12a/(p^*)^2, where K=8a/(p^*)^2. This proof follows closely the proof of Lemma 9.2 from Oliveira. By the same arguments as there, we obtain P(D_1>iT)≤3/2 (1-c)^i, for any integer i≥ 1, with c=(p^*)^2/(4a) the constant from Proposition <ref>. To obtain the bound on the exponential moment, observe that for any K>0, E[e^D_1/(KT)] =∑_i=1^∞E[e^D_1/(KT)iT<D_1<(i+1)T] ≤∑_i=1^∞E[e^(i+1)/KD_1>iT]=∑_i=1^∞ e^(i+1)/KP(D_1>iT) ≤∑_i=1^∞3/2e^1/Kexp(i/K+ilog(1-c)). Set K=2/c≥ -2/log(1-c); then i/K+ilog(1-c)≤i/2log(1-c)<0, and E[e^D_1/(KT)]≤3/2(1-√(1-c))≤3/c. We now show a result which bounds the exponential moment of the jth depinking time. 
In order to emphasise the initial configuration on the underlying MaBB we shall write D_j((ξ,x)) for the jth depinking time of a chameleon process corresponding to a MaBB which at time 0 is in configuration (ξ,x)∈Ω'_G,m. If the round length T satisfies T≥ 2max_i,jEM̂_i,j(G), then for any (ξ,x)∈Ω'_G,m, for all j∈N, E[e^D_j((ξ,x))/(K T)|Fill]≤(12a/(p^*)^2)^j, where K=8a/(p^*)^2. This proof follows identically to the proof of Lemma 6.2 from <cit.> and uses Lemma <ref>. Let ζ_t and ζ'_t be two realisations of BB(G,s,m) initialised at ζ and ζ' respectively. We shall bound ℒ(ζ_t)-ℒ(ζ'_t)_TV. We say that two BB(G,s,m) configurations ζ^1 and ζ^2 are adjacent and write ζ^1∼ζ^2 if there exist vertices v and w such that for all y∉{v,w}, ζ^1(y)=ζ^2(y) and |ζ^1(v)-ζ^2(v)|=|ζ^1(w)-ζ^2(w)|=1, i.e. by moving just a single particle we can obtain ζ^2 from ζ^1. We can now find a sequence of BB(G,s,m) configurations {ζ^i}_i=0^r with r≤ m such that ζ=ζ^0∼ζ^1∼⋯∼ζ^r=ζ'. By the triangle inequality (for total variation), we have ℒ(ζ_t)-ℒ(ζ'_t)_TV≤∑_i=1^rℒ(ζ^i-1_t)-ℒ(ζ^i_t)_TV, where for each 0≤ i≤ r, ζ^i_t is a realisation of BB(G,s,m) started from configuration ζ^i. We now show how to bound ℒ(ζ^i-1_t)-ℒ(ζ^i_t)_TV. Suppose that ζ^i-1 and ζ^i differ on vertices v and w with ζ^i-1(v)-ζ^i(v)=1. Define a BB(G,s,m-1) configuration ξ to be ξ(y):=ζ^i-1(y)-δ_v(y)=ζ^i(y)-δ_w(y) for all y∈ V. As BBSP is a projection of MaBB (i.e. if we ignore the marking of the marked particle in a MaBB we obtain a BBSP, which follows from (<ref>)), we have by the triangle inequality ℒ(ζ^i-1_t)-ℒ(ζ^i_t)_TV≤ℒ((ξ_t,m_t))-ℒ((ξ'_t,m'_t))_TV, where (ξ_t,m_t)_t≥0 is a realisation of MaBB initialised at (ξ,v) and (ξ'_t,m'_t)_t≥0 is a realisation of MaBB initialised at (ξ,w). In order to apply Proposition <ref>, we define m̃_t to be a random variable which, given ξ_t has law π_ξ_t, and similarly m̃'_t to have law π_ξ'_t given ξ'_t. Since ℒ((ξ_t,m̃_t))=ℒ((ξ'_t,m̃'_t)), we use the triangle inequality again to deduce ℒ((ξ_t,m_t))-ℒ((ξ'_t,m'_t))_TV≤ℒ((ξ_t,m_t))-ℒ((ξ_t,m̃_t))_TV+ℒ((ξ'_t,m'_t))-ℒ((ξ'_t,m̃'_t))_TV. We can now apply Proposition <ref>, and by combining (<ref>)–(<ref>), we obtain max_ζ,ζ'ℒ(ζ_t)-ℒ(ζ'_t)_TV≤ 2mmax_(ξ,x)∈Ω_G,m'E[1-_t^(ξ,x)/a(m-1)+bn|Fill]. Lemma <ref> says that the total ink can only change at depinking times, thus (recalling the definition of _j), _t^(ξ,x)=χ(ξ(x)) if t<D_1((ξ,x)) and _t^(ξ,x)=_j^(ξ,x) if D_j((ξ,x))≤ t<D_j+1((ξ,x)) for some j. Hence we have that for any j≥ 1, 1-_t^(ξ,x)/a(m-1)+bn ≤max_ℓ≥ j(1-_ℓ^(ξ,x)/a(m-1)+bn)+t<D_j((ξ,x)) ≤∑_ℓ≥ j(1-_ℓ^(ξ,x)/a(m-1)+bn)+t<D_j((ξ,x)). Taking expectations (given Fill) on both sides and using (<ref>) we obtain for any j≥1, max_ζ,ζ'ℒ(ζ_t)-ℒ(ζ'_t)_TV ≤ 2m max_(ξ,x)∈Ω_G,m'{∑_ℓ≥ jE[1-_ℓ^(ξ,x)/a(m-1)+bn|Fill]+P(D_j((ξ,x))>t|Fill)}. We bound the first term using Lemma <ref> to obtain max_ζ,ζ'ℒ(ζ_t)-ℒ(ζ'_t)_TV ≤ 2m max_(ξ,x)∈Ω_G,m'{∑_ℓ≥ j(71/72)^ℓ√(a(m-1)+bn)+P(D_j((ξ,x))>t|Fill)} ≤ 200me^-jlog(72/71)√(a(m-1)+bn)+2mmax_(ξ,x)∈Ω_G,m'P(D_j((ξ,x))>t|Fill), and then by Markov's inequality and Lemma <ref> (recall constant K=8a/(p^*)^2) we have that max_ζ,ζ'ℒ(ζ_t)-ℒ(ζ'_t)_TV ≤200me^-jlog(72/71)√(a(m-1)+bn)+2me^jlog(12a(p^*)^-2)-t/(2K max_i,jEM̂_i,j(G)). This holds for all j≥1 so if we apply it with j=⌊t/4Kmax_i,jEM̂_i,j(G)log(12a(p^*)^-2)⌋ we obtain max_ζ,ζ'ℒ(ζ_t)-ℒ(ζ'_t)_TV ≤ K_0 m e^-t/(4Kmax_i,jEM̂_i,j(G)log(12a(p^*)^-2))√(a(m-1)+bn) for some universal constant K_0>0. 
Thus we deduce that there exists a universal C>0 such that if t≥ Ca(p^*)^-2log(12a(p^*)^-2)log((am+bn)/ε)max_i,jEM̂_i,j(G), then the total variation distance between ℒ(ζ_t) and ℒ(ζ_t') is at most ε for any initial configurations ζ and ζ', so the statement of Theorem <ref> holds. § APPENDIX For ease of notation in this section we write P(v,v) for P_e,B,B'(v,v) and similarly for other probabilities. We shall use throughout (sometimes without reference) that, by Lemma <ref>, χ(B(v))P_(v,v)+χ(B(w))P_(w,v) =χ(B'(v)), χ(B(v))P_(v,w)+χ(B(w))P_(w,w) =χ(B'(w)). We write R_v,w for R(v)+R(w), P_v,w for P(v)+P(w) and B_v,w for B(v)+B(w). We first show m^*(v)≤ u(v)+1/2 u^P(v). Recall u(v)+1/2 u^P(v)=χ(B'(v))∧ R_v,w+1/2({χ(B'(v))-χ(B'(v))∧ R_v,w}∧ P_v,w). Hence if R_v,w>χ(B'(v)) then u(v)+1/2 u^P(v)=χ(B'(v)). On the other hand in this case (using (<ref>)), m^*(v)=χ(B'(v))-[χ(B(v)-R(v)-1/2 P(v)]P_(v,v)-[χ(B(w)-R(w)-1/2 P(w)]P_(w,v) and thus as R(v)+P(v)≤χ(B(v)) and R(w)+P(w)≤χ(B(w)), we have m^*(v)≤χ(B'(v)), i.e. in this case m^*(v)≤ u(v)+1/2 u^P(v). If instead R_v,w≤χ(B'(v)), then u(v)+1/2 u^P(v)=R_v,w+1/2({χ(B'(v))-R_v,w}∧ P_v,w). If P_v,w>χ(B'(v))-R_v,w then u(v)+1/2 u^P(v)=R_v,w+1/2(χ(B'(v))-R_v,w)=1/2(χ(B'(v))+R_v,w). But m^*(v) =1/2 R(v)P_(v,v)+1/2 R(w)P_(w,v) =+1/2{(R(v)+P(v))P_(v,v)+(R(w)+P(w))P_(w,v)} ≤1/2 R_v,w+1/2χ(B(v))P_(v,v)+1/2χ(B(w))P_(w,v) =1/2(χ(B'(v))+R_v,w), using (<ref>) in the last step. If instead P_v,w≤χ(B'(v))-R_v,w then u(v)+1/2 u^P(v)=R_v,w+1/2P_v,w and it is clear that this is an upper bound on m^*(v) by bounding P_(v,v) and P_(w,v) by 1. Now we turn to the lower bound, i.e. we want m^*(v)≥ℓ(v)+1/2ℓ^P(v). Recall ℓ(v)+1/2ℓ^P(v)={R_v,w-χ(B'(w))}∨0+1/2[{P_v,w-χ(B'(w))+R∧χ(B'(w))}∨ 0]. If R_v,w>χ(B'(w)), then ℓ(v)+1/2ℓ^P(v)=R_v,w-χ(B'(w))+1/2P_v,w. But in this case m^*(v) =(R(v)+1/2 P(v))P_(v,v)+(R(w)+1/2P(w))P_(w,v) =R_v,w+1/2P_v,w-(R(v)+1/2P(v))P_(v,w)-(R(w)+1/2P(w))P_(w,w) =R_v,w+1/2P_v,w-χ(B'(w))+(χ(B(v))-R(v)-1/2P(v))P_(v,w) =+(χ(B(w))-R(w)-1/2P(w))P_(w,w) ≥ R_v,w+1/2P_v,w-χ(B'(w)). Finally suppose R≤χ(B'(v)), then ℓ(v)+ℓ^P(v)=1/2[{P_v,w-χ(B'(w))+R_v,w}∨0]. If further P_v,w>χ(B'(w))-R_v,w then ℓ(v)+ℓ^P(v)=1/2(P_v,w+R_v,w-χ(B'(w))). But in this case m^*(v) ≥1/2[(R(v)+P(v))P(v,v)+(R(w)+P(w))P(w,v)] =1/2[R_v,w+P_v,w-χ(B'(w))+(χ(B(v))-R(v)-P(v))P(v,w) =1/2[+(χ(B(w))-R(w)-P(w))P(w,w)] ≥1/2[R_v,w+P_v,w-χ(B'(w))]. If instead P_v,w≤χ(B'(w))-R_v,w, then ℓ(v)+ℓ^P(v)=0≤ m^*(v). Recall that we suppose P(v,v), P(w,v)∈[η,1-η] and that θ(v) is defined in (<ref>) which gives θ(v)=u(v)+1/2 u^P(v)-m^*(v)/u(v)+1/2 u^P(v)-ℓ(v)-1/2ℓ^P(v). There are numerous cases to consider which we detail below. Our goal is to show that in each case θ(v)∈[η,1-η]. Recall that χ(B(v))+χ(B(w))=χ(B'(v))+χ(B'(w))=aB_v,w+2b. Case 1: R_v,w>χ(B'(v))∨χ(B'(w)) In this case u(v)+1/2u^P(v)=χ(B'(v)) and u(v)+1/2 u^P(v)-ℓ(v)-1/2ℓ^P(v)=χ(B'(v))-(R_v,w-χ(B'(w)))-1/2P_v,w. But m^*(v) =χ(B'(v))-(χ(B(v))-R(v)-1/2P(v))P(v,v)-(χ(B(w))-R(w)-1/2P(w))P(w,v) ≤χ(B'(v))-η(aB_v,w+2b-R_v,w-1/2P_v,w) =u+1/2u^P(v)-η(u(v)+1/2 u^P(v)-ℓ(v)-1/2ℓ^P(v)), thus θ(v)≥η. On the other hand m^*(v) =R_v,w+1/2P_v,w-(R(v)+1/2P(v))P(v,w)-(R(w)+1/2P(w))P(w,w) =R_v,w+1/2P_v,w-χ(B'(w))+(χ(B(v))-R(v)-1/2P(v))P(v,w) =+(χ(B(w))-R(w)-1/2P(w))P(w,w) ≥ R_v,w+1/2P_v,w-χ(B'(w))+η(aB+2b-R_v,w-1/2P_v,w) =ℓ(v)+ℓ^P(v)+η(u(v)+1/2 u^P(v)-ℓ(v)-1/2ℓ^P(v)), thus 1-θ(v)≥η. Case 2: R_v,w≤χ(B'(v))∧χ(B'(w)) We consider sub-cases. Case 2a: P_v,w≤ (χ(B'(v))-R_v,w)∧(χ(B'(w))-R_v,w) In this case u(v)+1/2u^P(v)=R_v,w+1/2 P_v,w and ℓ(v)+1/2ℓ^P(v)=0. 
But m^*(v)≥η(R_v,w+1/2 P_v,w) and so θ(v)≤ 1-η. We also have m^*(v)≤ (1-η)(R_v,w+1/2 P_v,w) and so θ≥η. Case 2b: P_v,w≥ (χ(B'(v))-R_v,w)∨(χ(B'(w))-R_v,w) In this case u(v)+1/2u^P(v)=1/2(χ(B'(v))+R_v,w) and ℓ(v)+1/2ℓ^P(v)=1/2(P_v,w+R_v,w-χ(B'(w))). Hence u(v)+1/2u^P(v)-ℓ(v)-1/2ℓ^P(v)=1/2(aB_v,w+2b-P_v,w). On the one hand m^*(v) =1/2 R(v)P(v,v)+1/2 R(w)P(w,v)+1/2{(R(v)+P(v))P(v,v)+(R(w)+P(w))P(w,v)} ≥1/2η R_v,w+1/2{R_v,w+P_v,w-(R(v)+P(v))P(v,w)-(R(w)+P(w))P(w,w)} =1/2η R_v,w+1/2{R_v,w+P_v,w-χ(B'(w))+(χ(B(v))-R(v)-P(v))P(v,w) =1/2η R_v,w+1/2{+(χ(B(w))-R(w)-P(w))P(w,w)} ≥1/2η R_v,w+1/2(R_v,w+P_v,w-χ(B'(w)))+1/2η(aB_v,w+2b-R_v,w-P_v,w). Thus m^*(v)-ℓ(v)-1/2ℓ^P(v)≥1/2η(aB_v,w+2b-P_v,w), and so 1-θ(v)≥η. On the other hand, m^*(v) =1/2 R(v)P(v,v)+1/2 R(w)P(w,v)+1/2χ(B'(v)) =-1/2{(χ(B(v))-R(v)-P(v))P(v,v)+(χ(B(w))-R(w)-P(w))P(w,v)} ≤1/2η R_v,w+1/2χ(B'(v))-1/2η(aB_v,w+2b-R_v,w-P_v,w) Thus u(v)+1/2 u^P(v)-m^*(v) ≥1/2(1-η)R_v,w+1/2η(aB_v,w+2b-R_v,w-P_v,w) ≥1/2η(aB_v,w+2b-P_v,w) where we use η<1/2 in the last inequality. This gives θ(v)≥η. Case 2c: χ(B'(v))-R_v,w≤ P_v,w≤χ(B'(w))-R_v,w We have u(v)+1/2 u^P(v)=1/2(χ(B'(v))+R_v,w) and ℓ(v)+1/2ℓ^P(v)=0. On the one hand m^*(v)≥η(R_v,w+1/2 P_v,w)=1/2η R_v,w+1/2η(R_v,w+P_v,w)≥1/2η R_v,w+1/2ηχ(B'(v))=1/2η(R_v,w+χ(B'(v))). Hence θ(v)≤ 1-η. On the other hand, as in (<ref>) we have u(v)+1/2 u^P(v)-m^*(v)≥1/2η(aB_v,w+2b-P_v,w) which in this case becomes u(v)+1/2 u^P(v)-m^*(v)≥1/2η(χ(B'(v))+R_v,w). This gives θ(v)≥η. Case 2d: χ(B'(w))-R_v,w≤ P_v,w≤χ(B'(v))-R_v,w We have u(v)+1/2 u^P(v)=R_v,w+1/2 P_v,w and ℓ(v)+1/2ℓ^P(v)=1/2(R_v,w+P_v,w-χ(B'(w))). As in (<ref>), u(v)+1/2 u^P(v)-m^*(v)≥1/2η(aB_v,w+2b-P_v,w) which gives u(v)+1/2 u^P(v)-m^*(v)≥1/2η(R_v,w+χ(B'(w))), so 1-θ(v)≥η. On the other hand, m^*(v)≤ (1-η)(R_v,w+1/2 P_v,w), but R_v,w+1/2 P_v,w≥ R_v,w+1/2(χ(B'(w))-1/2 R_v,w)=1/2 (R_v,w+χ(B'(w))) and so m^*(v)≤ R_v,w+1/2 P_v,w-η/2(R_v,w+χ(B'(w))) which gives θ(v)≥η. Case 3: χ(B'(v))≤ R_v,w≤χ(B'(w)) In this case u(v)+1/2 u^P(v)=χ(B'(v)). We again consider sub-cases depending on the value of P_v,w. Case 3a: P_v,w≥χ(B'(w))-R_v,w Then ℓ(v)+1/2ℓ^P(v)=1/2(R_v,w+P_v,w-χ(B'(w))) and so u(v)+1/2 u^P(v)-ℓ(v)-1/2ℓ^P(v)=χ(B'(v))-1/2(R_v,w+P_v,w-χ(B'(w))). To show 1-θ(v)≥η, we wish to show that m^*(v)≥ℓ(v)+1/2ℓ^P(v)+η(u(v)+1/2 u^P(v)-ℓ(v)-1/2ℓ^P(v)), i.e. that m^*(v)≥ηχ(B'(v))+1/2(1-η)(R_v,w+P_v,w-χ(B'(w))). As in (<ref>) we have m^*(v) ≥1/2η R_v,w+1/2{R_v,w+P_v,w-χ(B'(w))+η(aB_v,w+2b-R_v,w-P_v,w)} =1/2(1-η)(R_v,w+P_v,w-χ(B'(w)))+1/2η{R_v,w+P_v,w-χ(B'(w))+aB_v,w+2b-P_v,w}. But R_v,w-χ(B'(w))+aB_v,w+2b=R_v,w+χ(B'(v))≥ 2χ(B'(v)), so m^*(v)≥ηχ(B'(v))+1/2(1-η)(R_v,w+P_v,w-χ(B'(w))) as needed. On the other hand, to show θ(v)≥η, we need to show that m^*(v)≤ (1-η)χ(B'(v))+η/2(P_v,w+R_v,w-χ(B'(w))). We have m^*(v) =(1-η){(R(v)+1/2P(v))P(v,v)+(R(w)+1/2P(w))P(w,v)} =+η{(R(v)+1/2 P(v))P(v,v)+(R(w)+1/2P(w))P(w,v)} =(1-η){χ(B'(v))-(χ(B(v))-R(v)-1/2 P(v))P(v,v)-(χ(B(w))-R(w)-1/2P(w))P(w,v))} =+η{R_v,w+1/2P_v,w-(R(v)+1/2P(v))P(w,v)-(R(w)+1/2 P(w))P(w,w)} ≤ (1-η)χ(B'(v)) =+η{R_v,w+1/2P_v,w-χ(B'(w))-(χ(B(v))-R(v)-1/2P(v))P(w,v) =+η{R_v,w+1/2P_v,w-χ(B'(w))-(χ(B(w))-R(w)-1/2P(w))P(w,w)} ≤(1-η)χ(B'(v))+η(R_v,w+1/2P_v,w-χ(B'(w))) ≤ (1-η)χ(B'(v))+η/2(R_v,w+P_v,w-χ(B'(w))), as needed. Case 3b: P_v,w≤χ(B'(w))-R_v,w Here u(v)+1/2u^P(v)=χ(B'(v)) and ℓ(v)+1/2ℓ^P(v)=0. On the one hand we have m^*(v)≥η(R_v,w+1/2P_v,w)≥η R_v,w≥ηχ(B'(v)). This gives θ(v)≤ 1-η. 
On the other hand, we have m^*(v) =χ(B'(v))-(χ(B(v))-R(v)-P(v))P(v,v)-1/2 P(v)P(v,v) ≤χ(B'(v)) -(χ(B(w))-R(w)-P(w))P(w,v)-1/2P(w)P(w,v) ≤χ(B'(v))-η(aB_v,w+2b-R_v,w-1/2 P_v,w)-1/2η P_v,w ≤χ(B'(v))-ηχ(B'(v)) =(1-η)χ(B'(v)). Thus it follows that θ(v)≥η. Case 4: χ(B'(w))≤ R_v,w≤χ(B'(v)) This is the final main case, and it has two sub-cases. Case 4a: P_v,w≥χ(B'(v))-R_v,w Here u(v)+1/2u^P(v)=1/2(χ(B'(v))+R_v,w) and ℓ(v)+1/2ℓ^P(v)=R_v,w-χ(B'(w))+1/2P_v,w. Thus u(v)+1/2u^P(v)-ℓ(v)-1/2ℓ^P(v)=χ(B'(w))+1/2(χ(B'(v))-R_v,w-P_v,w). On the one hand we want θ(v)≤ 1-η, which in this case is equivalent to m^*(v)≥ (1-η/2)R_v,w+η/2χ(B'(v))+1-η/2P_v,w-(1-η)χ(B'(w)). We can obtain this bound since m^*(v) =R_v,w+1/2P_v,w-χ(B'(w))+(χ(B(v))-R(v)-1/2P(v))P(v,w) =+(χ(B(w))-R(w)-1/2P(w))P(w,w) ≥ R_v,w+1/2P_v,w-χ(B'(w))+η(aB_v,w+2b-R_v,w-1/2P_v,w) =R_v,w+1/2P_v,w-χ(B'(w))+η(χ(B'(v))+χ(B'(w))-R_v,w-1/2P_v,w) =(1-η/2)R_v,w+1-η/2P_v,w+η/2χ(B'(v))-(1-η)χ(B'(w))+η/2(χ(B'(v))-R_v,w), and we obtain the desired bound using that R_v,w≤χ(B'(v)). On the other hand, we also need to show θ(v)≤η, i.e. we need to show m^*(v)≤1+η/2R_v,w+η/2P_v,w+1-η/2χ(B'(v))-ηχ(B'(w)). We have m^*(v) =1/2R(v)P(v,v)+1/2R(w)P(w,v)+1/2{(R(v)+P(v))P(v,v)+(R(w)+P(w))P(w,v)} ≤1-η/2R_v,w+1/2{χ(B'(v))-(χ(B(v))-R(v)-P(v))P(v,v) ≤1-η/2R_v,w+1/2{-(χ(B(w))-R(w)-P(w))P(w,v)} ≤1-η/2R_v,w+1/2{χ(B'(v))-η(aB_v,w+2b-R_v,w-P_v,w)} =1-η/2R_v,w+η/2P_v,w+1-η/2χ(B'(v))-η/2χ(B'(w)) =1+η/2R_v,w+η/2P_v,w+1-η/2χ(B'(v))-ηχ(B'(w))-η R_v,w+η/2χ(B'(w)) ≤1+η/2R_v,w+η/2P_v,w+1-η/2χ(B'(v))-ηχ(B'(w)), using that R_v,w≥χ(B'(w)) in the last inequality. Case 4b: P_v,w≤χ(B'(v))-R_v,w Here u(v)+1/2u^P(v)=R_v,w+1/2 P_v,w and ℓ(v)+1/2ℓ^P(v)=R_v,w-χ(B'(w))+1/2 P_v,w, thus u(v)+1/2u^P(v)-ℓ(v)-1/2ℓ^P(v)=χ(B'(w)). Showing θ(v)≤ 1-η is equivalent to showing m^*(v)≥ R_v,w+1/2 P_v,w-(1-η)χ(B'(w)). We have m^*(v) =χ(B'(v))-(χ(B(v))-R(v)-P(v))P(v,v)-1/2 P(v)P(v,v) ≤χ(B'(v)) -(χ(B(w))-R(w)-P(w))P(w,v)-1/2P(w)P(w,v) ≥χ(B'(v))-(1-η)(aB_v,w+2b-R_v,w-P_v,w)-1-η/2P_v,w =R_v,w+1/2P_v,w-(1-η)χ(B'(w))+η(χ(B'(v))-R_v,w), and this shows the desired bound since R_v,w≤χ(B'(v)). Showing θ(v)≥η is equivalent to showing m^*(v)≤ R_v,w+1/2 P_v,w-ηχ(B'(w)). This holds since we have m^*(v)≤ (1-η)(R_v,w+1/2P_v,w)≤ R_v,w+1/2 P_v,w-η R_v,w and since R_v,w≥χ(B'(w)) this gives m^*(v)≤ R_v,w+1/2 P_v,w-ηχ(B'(w)) as needed. We fix the configurations of black, red, and pink particles B, R, P just before an update on e={v,w} and also the number of paired red R^p_v,w. Let x=ℓ(v)+ℓ(w)-R^q_v,w, where R^q_v,w is the number of non-paired red particles on e. Then x∨0 is the number of paired red particles needed for the lower bounds in Step 1 and so any particular paired red particle will be remaining in the pool after Step 1 with probability 1-(x∨ 0)/R^p_v,w. Observe that χ(B'(v))+χ(B'(w))≥ R_v,w+R^p_v,w (since each paired red particle on e implies the existence of a unique paired white particle also on e). We consider four cases. Case 1: R_v,w>χ(B'(v))∨χ(B'(w)) Then x=2R_v,w-χ(B'(v))-χ(B'(w))-R^q_v,w≤ 2R_v,w-(R_v,w+R^p_v,w)-R^q_v,w=0, i.e. no paired red particles are needed for the lower bounds and they all remain in the pool after Step 1. Case 2: R_v,w≤χ(B'(v))∧χ(B'(w)) In this case x=-R^q_v,w so all paired red particles remain in the pool. Case 3: χ(B'(v))≤ R_v,w≤χ(B'(w)) Then x=R_v,w-χ(B'(v))-R^q_v,w=R^p_v,w-χ(B'(v)). We need to show that this is at most (1-γ) R^p_v,w. We are assuming that χ(B'(w))≤χ(B'(v))/γ, . We also have that χ(B'(v))+χ(B'(w))≥ 2R^p_v,w and thus χ(B'(v))≥ 2R^p_v,w/(1+1/γ)≥γ R^p_v,w since γ<1. 
This gives the desired bound on x. Case 4: χ(B'(w))≤ R_v,w≤χ(B'(v)) This case follows similarly to Case 3, with the roles of v and w interchanged. § SIMULATION To further illustrate the evolution of the chameleon process and its relationship to the MaBB, we present a possible trajectory of the two processes over two updates (for simplicity we suppose the first two edge-rings occur at times 1 and 2). In this example, the graph is the line on 7 vertices and a=b=1.
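The round length appearing in the results above is governed by max_i,jEM̂_i,j(G). As a rough, purely illustrative complement to this example, the following Python sketch estimates the pairwise expected meeting times of two walkers on the 7-vertex line by Monte Carlo. The edge-ringing rule used here — at each step a uniformly chosen edge rings and each walker sitting on one of its endpoints independently crosses it with probability 1/2 — and the definition of meeting as co-location at a vertex are simplifying assumptions standing in for the single-particle dynamics of the BBSP and for the halved-edge-weight graph Ĝ, so the numbers are indicative only.

import itertools
import random

def estimate_max_meeting_time(n=7, trials=1000, seed=0):
    # Monte Carlo estimate of the maximum (over start pairs) expected meeting time of two
    # walkers on the path with n vertices, under a toy edge-ringing rule: at each step a
    # uniformly random edge rings and each walker on an endpoint of that edge independently
    # crosses it with probability 1/2; "meeting" means occupying the same vertex.
    rng = random.Random(seed)
    edges = [(v, v + 1) for v in range(n - 1)]
    max_mean = 0.0
    for i, j in itertools.combinations(range(n), 2):
        total = 0
        for _ in range(trials):
            a, b = i, j
            steps = 0
            while a != b:
                u, v = rng.choice(edges)
                steps += 1
                if a in (u, v) and rng.random() < 0.5:
                    a = v if a == u else u
                if b in (u, v) and rng.random() < 0.5:
                    b = v if b == u else u
            total += steps
        max_mean = max(max_mean, total / trials)
    return max_mean

if __name__ == "__main__":
    print("estimated max pairwise expected meeting time on the 7-vertex line:",
          round(estimate_max_meeting_time(), 1))

Under these assumptions the output gives a ballpark for the scale of the condition T≥ 2max_i,jEM̂_i,j(G) used in the round-length hypotheses.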
http://arxiv.org/abs/2307.03164v2
20230706174658
Induced Gravitational Waves from Ultra Slow-Roll Inflation and Pulsar Timing Arrays Observations
[ "Hassan Firouzjahi", "Alireza Talebian" ]
gr-qc
[ "gr-qc", "astro-ph.CO", "hep-th" ]
=1 equationsection 1.1 k q xN footnote Induced Gravitational Waves from Ultra Slow-Roll Inflation and Pulsar Timing Arrays Observations Hassan Firouzjahi, [firouz@ipm.ir] Alireza Talebian [talebian@ipm.ir] ^1School of Astronomy, Institute for Research in Fundamental Sciences (IPM) P. O. Box 19395-5531, Tehran, Iran The stochastic gravitational wave background (SGWB) detected recently by the pulsar timing arrays (PTAs) observations may have cosmological origins. In this work we consider a model of single field inflation containing an intermediate phase of ultra slow-roll. Fixing the amplitude of the peak of curvature perturbations by the PBHs bounds we calculate the gravitational waves (GWs) induced from the curvature perturbations enhanced during USR. The spectrum of the induced GWs depends on the sharpness of the transition from the USR phase to the final attractor phase as well as to the duration of the USR period. While the model can accommodate the current PTAs data but it has non-trivial predictions for the induced GWs on higher frequency ranges which can be tested by future observations. § INTRODUCTION There are indications of detection of stochastic gravitational waves background (SGWB) from recent pulsar timing arrays (PTAs) around the frequency range ∼ 10  nHz as reported in NANOGrav <cit.>, Parkers PTA <cit.>, European PTA <cit.> and the China PTA <cit.>. These signals may have cosmological origins as well as astrophysical interpretations. A natural astrophysical interpretation of the observed SGWB is the superpositions of gravitational waves (GWs) signals from the merging of binary supermassive black holes <cit.>. On the other hand, if the observed signal has cosmological origins, this can open a new window into observing the primordial universe and to probe physics beyond the Standard Model of particle physics. Among possible cosmological interpretations of the SGWB are the GWs induced from the enhanced scalar perturbations on small scales generated during inflation, first order cosmological phase transitions <cit.>, domain walls or cosmic strings <cit.>, see <cit.> for further review. It should be noted that the previous NANOGrav 12.5 year data <cit.> also indicated traces of SGWB with a flat spectrum in a narrow range of nHz frequency which initiated interests to look for the origins of this signal. Scalar induced gravitational waves (SIGW) by the enhancement of scalar perturbations on small scale generated during inflation <cit.> is a mechanism which can explain the observed SGWBs <cit.>. In this mechanism, the GWs are sourced at the second order in perturbation theory via their interaction with the scalar sector generated during inflation. Typically, this setup requires that the amplitude of scalar perturbations to grow by about seven orders of magnitude compared to the observed CMB scale. Consequently, this mechanism can yield to primordial black holes (PBHs) formation which may comprise all or parts of dark matter energy density <cit.>. The setup of ultra slow-roll (USR) inflation has been employed as a model in which the primordial power spectrum can be enhanced to induce large SIGWs and PBHs <cit.>, for a review see <cit.>. The USR setup is a single field model of inflation in which the potential is flat <cit.> and the inflaton velocity falls off exponentially so the curvature perturbations grow on superhorizon scales <cit.>. 
Since the curvature perturbations grow on superhorizon scales the USR setup violates the Maldacena non-Gaussianity consistency condition <cit.> in single field models <cit.>. Originally, it was shown in <cit.> that the amplitude of local-type non-Gaussianity in USR setup is f_NL=5/2. This question was further examined in <cit.> in which it was shown that the amplitude of f_NL crucially depends on the sharpness of the transition from the USR phase to the final slow-roll (SR) phase. In an extreme sharp transition from the USR phase to the SR phase, which was assumed in <cit.>, f_NL reaches its maximum value 5/2. However, if the transition to the final stage is mild then the curvature perturbations evolve after the USR phase before it reaches to its final attractor value. Correspondingly, in a mild transition, the amplitude of f_NL is washed out in the subsequent evolution and it ends up with a value at the order of the slow-roll parameters. Another important point is the issue of loop corrections in this setup. This question was studied in various recent works <cit.>. Originally, it was argued in <cit.>, see also <cit.>, that loop corrections from small scale modes which leave the horizon during the USR phase can induce large corrections on CMB scale modes. This was criticized in <cit.> arguing, among other things, that for a mild transition the loop corrections will be less significant and the standard PBHs formation within the single field USR scenario is applicable. This question was studied in some details in <cit.> with emphasis on the effects of the sharpness of the transition from the intermediate USR phase to the final attractor SR phase. It was shown in <cit.> that for an arbitrarily sharp transition the one-loop corrections can be very large, in line with the results advocated in <cit.>. However, it was speculated in <cit.> that for a mild transition, the dangerous one-loop corrections are washed out during the subsequent evolution of the mode function after the USR phase. This conclusion was further examined in <cit.> confirming the physical expectations of <cit.>. In summary, in order for the one-loop corrections on CMB scale modes to be harmless one needs a mild enough transition from the USR phase to the final attractor phase. In this paper we employ the USR setup as a working mechanism to generate large SIGW as a possible explanations for the observed SGWB in the NANOGrav data <cit.>. For various recent works on SIGWs as an explanation of the the PTAs data see <cit.>. § THE SETUP The setup which we use to enhance the primordial power spectrum to induce large GWs at the range of scales observed by the PTA observations contains an intermediate phase of USR in single field inflation. We have a three-stage model of inflation in which the large CMB scale leaves the horizon at the early stage of inflation. The first stage of inflation proceeds say in about 16 e-folds or so. Then the USR phase takes over in which the potential is very flat and the curvature perturbations experience a growth on superhorizon scales. In order for the curvature power spectrum to be under perturbative controls the USR phase has to be terminated followed by a final phase of SR inflation. A realistic setup requires that the transition from the first SR to the USR stage and from the USR phase to the final SR phase to be smooth. However, in order to follow the dynamics analytically, we consider an idealized situation in which the transition from the SR to USR and then to final SR phase are instantaneous. 
Assuming that USR phase is extended during the time interval t_i ≤ t ≤ t_e, we assume that the transitions at the starting point t=t_i and at the final point t=t_e to be instantaneous. While the transition to the final SR phase is instantaneous, but it will take time for the system to relax to its final attractor phase. We control this relaxation period by a sharpness parameter which plays important role in our analysis below. It it important that the instantaneous gluing of the solutions should not be confused with the sharpness of the transition to the final attractor solution. With the above discussions in mind, let us elaborate on the dynamics of our setup. During the first stage of inflation, t<t_i, the system follows an attractor phase and the dynamics of the inflaton field ϕ is given by the usual SR dynamics. The Hubble expansion rate H≡ȧ/a is nearly constant in which a(t) is the FLRW scale factor. The small evolution of H during the first stage of inflation is measured by the first SR parameter ϵ≡ -Ḣ/H^2 which is nearly constant and small. During the USR phase the dynamics of the system is given by ϕ̈+ 3 H ϕ̇=0 , 3 M_P^2 H^2 ≃ V_0 , where M_P is the reduced Planck mass. As the potential is flat during the USR phase, H approaches a fixed value and from the field equation we obtain ϕ̇∝ a(t)^-3. Correspondingly, the slow-roll parameter ϵ falls off exponentially during the USR phase as well, ϵ∝ a(t)^-6. On the other hand, the second slow-roll parameter η≡ϵ̇/H ϵ≃ -6 which is the hallmark of the USR phase. It is convenient to work with the number of e-fold N as the clock, d N= H(t) dt. We choose the convention that the USR phase starts at N=0 so for the first stage of inflation, N<0. In particular, the CMB scale modes leave the horizon at around N∼ -16. The duration of the USR phase is denoted by Δ N which is a free parameter of our setup. Going to conformal time, d τ= dt/a(t), the USR phase is extended during τ_i ≤τ≤τ_e and the duration of USR phase is given by Δ N= ln( τ_i/τ_e)= ln( k_e/ k_i) in which k_i (k_e) represents the mode which leaves the horizon at the start (end) of USR phase. The slow-roll parameter at the end of USR phase ϵ_e is related to its value at the start of USR phase ϵ_i via ϵ_e= ϵ_i e^-6 Δ N. As explained above, we assume the USR phase is followed by a SR phase. Therefore, we need to investigate the evolution of the slow-roll parameters ϵ(τ) and η(τ) after the USR phase. This was studied in more details in <cit.>, see also <cit.> for similar studies. Let us denote the final SR parameters with their attractor values by ϵ_V and η_V which are expressed in terms of the first and the second derivatives of the potential in the final SR phase. To simplify the analysis, here, as in <cit.>, we assume that the potential in the final stage is such that ϵ_V≫ |η_V| though this assumption can be relaxed with no important changes in the results. A key parameter in our setup is the sharpness parameter h which controls how quickly the system reaches to its final attractor limit. Following <cit.>, we define h as follows h≡6 √(2 ϵ_V)/ϕ̇(t_e) = -6 √(ϵ_V/ϵ_e) . With this definition, the slow-roll parameters ϵ(τ) and η(τ) after the USR transition are given by ϵ(τ)= ϵ_ e [h/6 - (1+ h/6 ) (τ/τ_e)^3 ]^2 (τ > τ_e) , and η(τ) = -6 (6+h)/(6+h) - h (τ_e/τ)^3 (τ > τ_e) . As discussed previously, the above results are obtained in the limit of an instant transition from the USR phase to the final SR phase. 
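Here the expression for η(τ) is to be read as the single fraction -6(6+h)/[(6+h)-h(τ_e/τ)^3]; with this grouping one checks that η(τ_e)=-(6+h) (so η vanishes for all τ>τ_e when h=-6) and that η→0 and ϵ→ϵ_e h^2/36=ϵ_V as τ→0. The short Python sketch below simply evaluates these two expressions in terms of the positive ratio r=τ/τ_e (both conformal times being negative), with ϵ_e normalised to unity; it is an illustrative check of the formulas, not part of the original computation.

import numpy as np

def eps_eta_after_usr(r, h, eps_e=1.0):
    # Slow-roll parameters for tau > tau_e, written via r = tau/tau_e in (0, 1];
    # r -> 0 corresponds to the end of inflation, eps_e is epsilon at the end of USR.
    eps = eps_e * (h / 6.0 - (1.0 + h / 6.0) * r**3) ** 2
    eta = -6.0 * (6.0 + h) / ((6.0 + h) - h / r**3)
    return eps, eta

if __name__ == "__main__":
    r = np.logspace(0, -2, 5)  # tau/tau_e from 1 down to 0.01
    for h in (-0.1, -1.0, -6.0, -12.0):
        eps, eta = eps_eta_after_usr(r, h)
        print(f"h={h:6.1f}: eps(final)/eps_e={eps[-1]:.3e} (h^2/36={h*h/36:.3e}), "
              f"eta(tau_e)={eta[0]:+.3f} (-(6+h)={-(6.0+h):+.3f})")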
Even in this limit it will take some time for the system to reach to its attractor phase which is measured by the sharpness parameter h. In the limit h → -∞, the system reaches its final attractor phase immediately after τ_e in which the mode functions become frozen. On the other hand, for other values of h the system keeps evolving after τ_e until ϵ(τ) approaches its final attractor value ϵ_V. A particular case of transition is when h=-6 in which ϵ(τ) is frozen to its value at the end of USR, ϵ(τ) =ϵ_e with η(τ)=0 for τ> τ_e. This limit was mostly studied in recent literature concerning the loop correction such as in <cit.>. In the following analysis, as in <cit.>, we consider a general value of h including the spacial case of h=-6. Another important point is that the larger the value of |h| is the larger ϵ_V is compared to ϵ_e. Correspondingly, the final power spectrum scales somewhat inversely with |h|. As a result, a larger (smaller) value of |h| yields to a smaller (larger) final power spectrum. We work with the comoving curvature perturbation which in spatially flat gauge is related to inflaton perturbation via ≡ -H/ϕ̇δϕ. Going to Fourier space, we define the quantum mode function in terms of the annihilation and creation operators as usual via (t, ) = ∫d^3 k/(2π)^3 e^i k· x( _k(t) a_ k + ^*_k(t) a_- k^†) , in which a_ and a_ k^† satisfy the usual commutation relation associated to the annihilation and creation operators, [ a_, a_'^†] = δ ( -'). Starting with the Bunch-Davies (Minkowski) initial condition and imposing the continuity of and at the transition points τ= τ_i and τ= τ_e, the mode function at each stage of SR→ USR→ SR can be obtained <cit.>. The outgoing curvature perturbation ^(3)_k(t) in the final USR phase (third phase) is given by <cit.>, ^(3)_k = H/ M_P√(4 ϵ(τ) k^3)[ α^(3)_k ( 1+ i k τ) e^- i k τ + β^(3)_k ( 1- i k τ) e^ i k τ] , with ϵ(τ) given by Eq. (<ref>) and the coefficients (α^(3)_k, β^(3)_k) are as follow, α^(3)_k = 1/8 k^6 τ_i^3 τ_e^3[ 3h ( 1 -i k τ_e)^2 (1+i k τ_i)^2 e^2i k (τ_e- τ_i) -i (2 k^3 τ_i^3 + 3i k^2 τ_i^2 + 3 i) (4 i k^3 τ_e^3- h k^2 τ_e^2 - h) ] , and β^(3)_k= 1/8 k^6 τ_i^3 τ_e^3[ 3 ( 1+ i k τ_i)^2 ( h+ h k^2 τ_e^2 + 4 i k^3 τ_e^3 ) e^-2 i k τ_i + i h ( 1+ i k τ_e)^2 ( 3 i + 3 i k^2 τ_i^2 + 2 k^3 τ_i^3 ) e^- 2 i k τ_e] . The physical quantities are calculated at the end of inflation τ=0 when the system has reached to its attractor phase with ϵ(τ) →ϵ_V. The curvature perturbation power spectrum _ from Eq. (<ref>) is obtained to be _(k, τ=0) = H^2/ 8 M_P^2 π^2 ϵ_V| α^(3)_k + β^(3)_k |^2 . The behaviour of _(k, τ=0) are plotted in Fig. <ref>. There are a number of common features as can be seen in this figure. First, we have the plateau on large scales associated to the modes which leave the horizon long before the USR phase starts. The amplitude of the power spectrum for these perturbations is fixed by the COBE normalization on k_ = 0.05 Mpc^-1 with _≃ 2.1 ×10^-9. Second, prior to the USR phase there is a dip in power spectrum followed by a universal scaling ∝ k^4. Third, there are oscillations superimposed on the USR plateau after the maximum. Fourth, there is a plateau for the modes which leave the horizon at the final stage of inflation. As discussed previously, the larger is the value of |h|, the smaller is the final power spectrum which we will demonstrate bellow. Let us look at the parameter dependence of the the power spectrum given in Eq. (<ref>). 
Two important parameters are the sharpness of the transition h and the duration of the USR phase Δ N. In addition, we have the energy scale of inflation H and the final slow-roll parameter ϵ_V. As can be seen from Eq. (<ref>) the latter two parameters appear in a combination which is fixed by the overall COBE normalization at CMB scales. This leaves the scale of inflation or the duration of the observed inflation, N_ tot, to be another free parameter. In our analysis we consider various cases of N_ tot in the range 50 ≲ N_ tot≲ 60. Another independent variable may be considered to be the starting time of USR phase, τ_i. However, in order to obtain the enhancement in power for PTAs observations, we need the starting time of USR to be when the mode of interest which leaves the horizon have the nano-Hertz frequency. This requires the starting time of USR phase compared to the CMB scales to be separated by about 16 e-folds. Finally, the spectral index n_s is fixed by its best fit value from Planck observation, i.e. n_s≃ 0.96<cit.>. In summary, at the end we have three main independent parameters: the sharpness parameter h, duration of USR, Δ N, and the total e-fold number of inflation N_ tot which we will vary. A crucial point is that models with the intermediate USR phase can generate significant PBHs which are constrained by observations. Therefore, in order to meet the various bounds on PBHs formation, we need to impose an additional bound on (h, Δ N) parameter space. These PBH constraints leave only one of them free which we take to be h. A view of the power spectrum for various values of h and the bound from PBHs are shown in Fig. <ref>. Schematically, we see that the PBHs bound is roughly translated into _ < 10^-2. More precisely, in order to consider the PBHs bound on the curvature power spectrum, we need to know about the probability distribution function (PDF) of the primordial perturbations. In a common approach <cit.>, the mass fraction parameter β is related to statistics of R as <cit.> β≃∫_ R_c^∞ f_ R(x)  x ≃12 Erfc( R_c√(2 P_ R)) where f_ R is PDF of R and R_c ∼ O(1) <cit.>. The second estimation comes from the fact that we can consider a Gaussian PDF for R with zero mean and variance at the order of power spectrum. After PBH production, it is crucial to determine the fraction of PBH abundance in dark matter density at the present epoch. It is roughly given by <cit.> f_PBH(M_ PBH) ≃ 2.7 × 10^8(M_ PBHM_⊙)^-1/2β(M_ PBH) , where M_⊙ and M_ PBH are the solar mass and the PBH mass respectively. Assuming an instant reheating at the end of inflation <cit.>, PBH mass can be estimated by M_ PBHM_⊙≃ 10^-13 (10^-6 M_ PH) e^2(N_ tot-N_p - 22.25 ) , where N_ tot is the total number of e-fold of inflation and H is the Hubble rates during inflation. Moreover, N_p is the location of the maximum of the power spectrum. Considering a fixed value for the location of the peak of power spectrum, the fraction f_ PBH depends on the total number of e-fold of inflation which is related to the reheating temperature. In Fig. <ref>, we have illustrated the mass function f_ PBH for various values of N_ tot. Now let us look at the various limits of the power spectrum Eq. (<ref>). We have two dimensionless numbers, x≡ -k τ_i and e^-Δ N. First consider the limit e^-Δ N≪ x so we expand Eq. 
(<ref>) to leading order in e^-Δ N, obtaining _(k, τ=0) ≃ e^6 Δ N/2 P_(h-6/h)^2 ×[ 2 x^6 + 9 x^4+ 18 x^2 + 9 + (21 x^4 -9) cos(2 x) + ( 6 x^5 - 24 x^3 - 18 x) sin(2 x) ] , in which P_ is the CMB scale power spectrum given by P_ = H^2 /8 π^2 M_P^2 ϵ_i . From Eq. (<ref>) we see that _∝ e^6 Δ N which is the hallmark of USR inflation for the modes which leave the horizon during the USR phase. Second, we see that _∝(h-6/h)^2. This is clearly seen in Fig. <ref> as cases with higher value of |h| have lower power in the final plateau. The physical reason is as follows. Models with larger |h| reach the final attractor phase more quickly. For this to happen, ϵ(τ) should assume its final value ϵ_V > ϵ_e quickly as well. This means that the mode becomes frozen quickly after the USR phase but with a final amplitude _(k, τ=0) < _(k, τ_e). To understand the scaling behaviour of the power spectrum prior to USR phase and close to the USR peak, let us consider the x≪ 1 limit of the expression (<ref>), obtaining _(k, τ=0) ≃2 /25 e^6 Δ N P_(h-6/h)^2 (k τ_i)^4 . It shows that the power spectrum scales like _(k) ∝ k^4 prior to and after the USR phase starts, a phenomenon which was observed previously in <cit.> as well. As we see in Fig. <ref>, there is a dip in power spectrum prior to USR phase where the above mentioned k^4 scaling starts. To understand the nature of this dip, note that the expression (<ref>) is obtained assuming that e^-Δ N≪ x. However, this limit is violated for very long modes which become superhorizon much earlier than the USR phase starts. In particular, the CMB scale modes belong to this limit. Considering the x ≪ e^-Δ N limit of the power spectrum we obtain _(k, τ=0) ≃ P_( 1- 4/5h-6/h (k τ_i)^2 ) , (k τ_i → 0 ) . The position of the dip k= k_d is where the two expressions (<ref>) and (<ref>) become comparable, yielding to the approximate value (see also <cit.>) k_dτ_i≃√(5 h/4(h-6)) e^-3/2Δ N . From the above formula we see that for a fixed value of Δ N, as |h| increase the value of k_d increases as well, i.e. the dip moves towards the right, as seen in the right panel of Fig. <ref>. As we mentioned previously the USR model can generate non-Gaussianities. However, the magnitude of f_NL depends on k as well. For the mode which leaves the horizon during the early stage of inflation and prior to USR phase, then the Maldacena consistency condition does hold and for these modes f_NL is basically very small. On the other hand, for the modes which leave the horizon during the USR phase, i.e. for k_i< k< k_e, the consistency condition is violated. The final value of f_NL for these modes crucially depends on the parameter h. This was studied in details in <cit.> and <cit.> in which it is shown that up to slow-roll corrections, f_NL = 5 h^2/2 (h-6)^2 . For an infinitely sharp transition with h→ -∞ in which the system assumes its final attractor phase immediately after the USR transition, we obtain the maximum value f_NL=5/2. However, lowering |h| reduces f_NL accordingly. For example, for the standard scenario in which h=-6 as studied in <cit.> one obtains f_NL=5/8≃ 0.63. For milder transitions with |h| ≲ 1, from Eq. (<ref>) we typically obtain f_NL≪ 1. For example for h=-1 and h=-0.1 which we will study below, we obtain f_NL≃ 0.051 and f_NL≃ 0.0007 respectively. Therefore, to very good approximation one can employ the Gaussian bound on PBH's formation. To be more specific, to neglect the non-Gaussianity effects in PBH formation, we need that f_NL_≪ 1<cit.>. 
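As a quick numerical cross-check of the values quoted above, the small sketch below evaluates f_NL = 5h^2/[2(h-6)^2] for several transition sharpnesses; it reproduces f_NL≃ 0.0007, 0.051 and 0.63 for h=-0.1, -1 and -6, and approaches the maximal value 5/2 as |h|→∞. The chosen h values mirror those discussed in the text, and nothing beyond the displayed formula is assumed.

def f_nl(h):
    # Local non-Gaussianity amplitude as a function of the transition sharpness h.
    return 5.0 * h**2 / (2.0 * (h - 6.0) ** 2)

if __name__ == "__main__":
    for h in (-0.1, -1.0, -6.0, -12.0, -1e6):
        print(f"h = {h:10.1f}:  f_NL = {f_nl(h):.4f}")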
In our model with the maximum value f_NL= 5/2, we can easily satisfy the Gaussianity approximation if _ is small. In our analysis, as can be seen in Fig. <ref>, the PBHs bound typically require that _≲ 10^-2 so we easily meet the condition f_NL_≪ 1 for all practical values of h. As mentioned in Introduction section, the loop correction is an open question in this setup <cit.>. The effects of the sharpness of the transition on the loop corrections were studied in <cit.>. It was shown that for an arbitrarily sharp transition the one-loop corrections become very large. More specifically, it was shown in <cit.> that for |h| ≫ 1, the one-loop corrections scale linearly with h, yielding to a large loop correction in line with the results of <cit.>. However, it was shown in <cit.> that for a mild transition the one-loop corrections to CMB scale modes are slow-roll suppressed and are harmless. In our current study, in order to trust the Gaussian approximation for PBHs formation and to neglect the large loop corrections, one requires | h| ≲ 1 which we shall assume in our analysis. However, in order to compare the predictions of the setup with both sharp and mild transitions, we also present the results for SIGWs for the cases of sharp transition as well. In our numerical plots below, the examples of sharp transitions correspond to the cases h=-6 and h=-12. § SIGWS AND CONSTRAINTS FROM PTAS OBSERVATIONS The curvature perturbations, which have been generated during inflation, can re-enter the horizon during the radiation-dominated (RD) era in which the metric (in conformal Newtonian gauge) reads s^2 = -a^2[ (1+2Φ) τ^2 + ((1 - 2 Ψ)δ_ij +12h_ij) x^i x^j]. Here τ is the conformal time during RD era, Φ and Ψ are the Bardeen potentials and h_ij is the transverse-traceless tensor perturbations. Using the Einstein's field equations, the evolution of Fourier modes of h_ij, denoted by h^λ_k, are given by h^λ_k”(η) + 2 ℋh^λ_k'(η) + k^2 h^λ_k(η) = 4 S^λ_k(η), where λ represents the two polarizations. The primes denote the derivative with respect to the conformal time, ℋ = a'/a is the conformal Hubble parameter, and the source term S^λ_ k is transverse and traceless which is second order in scalar perturbations, given by S^λ_k = ∫^3 q/(2π)^3 ε^λ_ij(k̂) q^i q^j [ 2 Φ_qΦ_k-q + (ℋ^-1Φ'_q + Φ_q) ( ℋ^-1Φ'_k-q + Φ_k-q) ] , where ε^λ_ij is the polarization tensor. Note that here we have neglected the vector perturbations and the anisotropic stress (Φ≃Ψ). In Fourier space, the Bardeen potential is related to R_ k through transfer function T( kτ) as Φ_ k = 23 T( kτ) R_ k . The transfer function encodes the linear evolution of the Newtonian potential after horizon reentry which has a oscillatory behaviour. Solving the equation of motion (<ref>), taking the late-time limit during a RD era (τ→∞ at the matter-radiation equality), the power spectrum of tensor fluctuations is given by <cit.> P_h(τ ,k) = 4∫_0^∞ v ∫_|1-v|^|1+v| u  𝒦(u,v,k τ) 𝒫_ ℛ(k u) 𝒫_ ℛ(k v) , For further details about the integration kernel 𝒦 and how to perform the integrations see <cit.>. The produced GW contributes to the total energy of Universe and dilutes like radiation. Taking into account the following matter-dominated and dark-energy-dominated eras, the current value of Ω_ GW, the fraction of the GW energy density per logarithmic wavelength, is obtained to be Ω_ GWh_0^2 =Ω_ rh_0^2 (g_*g_*,e)^1/3Ω_ GW,e(f) . 
Here Ω_ r is the present-day abundance of radiation, g_* is the number of relativistic degrees of freedom in energy density, and the subscripts e denotes the time of emission. Note that Ω_ rh_0^2 ≃ 4.2 × 10^-5 with h_0=H_0 /100 km s^-1 Mpc^-1. Here f=c k/(2π) is the frequency of the GW which has appeared due to the k-dependence of the spectrum of curvature perturbations (<ref>) during inflation. We have used the curvature perturbations power spectrum (<ref>) generating during the USR phase and calculated the convolution integral (<ref>) numerically to find the current fractional energy density of GWs (<ref>). The results are shown in Fig. <ref> for various values of h and Δ N in nano-Hertz bound. The results have been presented alongside the posteriors of an Hellings-Downs (HD) correlated free spectral reconstruction of the NANOGrav signal <cit.>. The values of (h, Δ N) are fixed such that the peak of P_ R does not violate the PBHs bounds. As seen, the spectra of our model follow the HD pattern expected for a stochastic gravitational wave background. Interestingly, we see that within the observable window the predictions of all models, mild (|h| ≲ 1) or sharp (|h | ≫ 1), follow the same pattern and are not significantly different from each other. However, outside the NANOGrav observed window (f > 10^2 nHz) the curves deviate from each other noticeably. This pattern is similar to the plots of power spectrum of curvature perturbations presented in Fig. <ref>. The reason is that all curves are obtained after imposing the PBHs bounds. However, the starting time of the USR and the value of the peak of the USR plateau are very similar for all curves as seen in Fig. <ref>. This is the reason why all curves, sharp or mild, follow close trajectories on the observable window. However, crucial to our setup is that outside the NANOGrav window, the curves have distinct predictions for SIGWs on frequencies much higher than ∼ 10^2 nHz. More specifically, the final tail of the power spectrum scales somewhat inversely with the sharpness parameter h such that milder (sharper) transitions have higher (lower) tails. In Fig. <ref> we we have shown the SIGWs spectra for a larger frequency range. In this figure, the quantity Ω_ GW h^2 was plotted against the frequency together with the sensitivity of the various existing and forthcoming GW experiments such as LISA, SKA, BBO, DECIGO etc. As seen, the tails of SIGW spectrums for different (h, Δ N) can fall into the sensitivity bounds of these observations. It means that different values of (h, Δ N) are distinguishable from each other in future GWs observations. § SUMMARY AND DISCUSSIONS The stochastic gravitational wave background detected by various PTAs observations can open a new window to the dynamics of early universe. In particular, this signal can be generated by GWs induced by scalar perturbations at second order in perturbation theory. The SIGWs can be used as a tool to distinguish various inflationary scenarios. A key criteria is that the models which are employed to explain the SGWB observed by the PTAs observations should not generate too much of PBHs which are constrained in various frequency ranges. In this work we have considered a single field model of inflation containing an intermediate USR phase. This setup has been used extensively in the past to generate PBHs and for the induced GWs studies. 
We have paid particular attention to the sharpness parameter of the model which play significant roles in loop corrections and for the amplitude of non-Gaussianity. In order to be away from the dangerous one-loop corrections we require a mild transition with | h| ≲ 1. This is also the limit where the amplitude of non-Gaussianity is small and one can employ the Gaussian predictions of the PBHs formation. We have calculated the spectrum of SIGWs and compared it to the NANOGrave results. The predictions of the model are consistent with the observed data. However, a careful data analysis is required to contrast the predictions of the model with the PTA datas and to put constraints on the model parameters. While our setup can qualitatively explain the origin of the NANOGrave observations, but it has specific predictions for the spectrum in higher frequency ranges. Our model predicts that a setup with a mild (sharp) transition has a higher (lower) tail of IGWs once they are fit to the current NANOGrave data. As a result, the predictions of our model for the amplitude of induced GWs can be tested in future GWs observation which may put constraints on the model parameter or to rule it out. Acknowledgments: We are grateful to Anotonio Riotto, Sina Hooshangi, and Seyed Ali Hosseini Mansoori for useful discussions and comments on the draft. We are partially supported by the “Saramadan" Federation of Iran. A. T. would like to thank University of Rwanda, EAIFR, and ICTP for their kind hospitalities during the 17th international workshop on the “Dark Side of the Universe" where this work was in its final stages. 99 NANOGrav:2023gor G. Agazie et al. [NANOGrav], Astrophys. J. Lett. 951, no.1, L8 (2023) doi:10.3847/2041-8213/acdac6 [arXiv:2306.16213 [astro-ph.HE]]. Reardon:2023gzh D. J. Reardon et al, Astrophys. J. Lett. 951, no.1, L6 (2023) doi:10.3847/2041-8213/acdd02 [arXiv:2306.16215 [astro-ph.HE]]. Antoniadis:2023ott J. Antoniadis et al, [arXiv:2306.16214 [astro-ph.HE]]. Xu:2023wog H. Xu et al, doi:10.1088/1674-4527/acdfa5 [arXiv:2306.16216 [astro-ph.HE]]. Kosowsky:1992rz A. Kosowsky, M. S. Turner and R. Watkins, Phys. Rev. Lett. 69, 2026-2029 (1992). Kamionkowski:1993fg M. Kamionkowski, A. Kosowsky and M. S. Turner, Phys. Rev. D 49, 2837-2851 (1994). Caprini:2007xq C. Caprini, R. Durrer and G. Servant, Phys. Rev. D 77, 124015 (2008). Hindmarsh:2013xza M. Hindmarsh, S. J. Huber, K. Rummukainen and D. J. Weir, Phys. Rev. Lett. 112, 041301 (2014). Kibble:1976sj T. W. B. Kibble, J. Phys. A 9, 1387-1398 (1976) doi:10.1088/0305-4470/9/8/029 Vilenkin:1981bx A. Vilenkin, Phys. Lett. B 107, 47-50 (1981) doi:10.1016/0370-2693(81)91144-8 Caldwell:1991jj R. R. Caldwell and B. Allen, Phys. Rev. D 45, 3447-3468 (1992) doi:10.1103/PhysRevD.45.3447 Vilenkin:1981zs A. Vilenkin, Phys. Rev. D 23, 852-857 (1981) doi:10.1103/PhysRevD.23.852 NANOGrav:2023hvm A. Afzal et al. [NANOGrav], Astrophys. J. Lett. 951, no.1, L11 (2023) [arXiv:2306.16219 [astro-ph.HE]]. Antoniadis:2023zhi J. Antoniadis et al, [arXiv:2306.16227 [astro-ph.CO]]. NANOGrav:2020bcs Z. Arzoumanian et al. [NANOGrav], Astrophys. J. Lett. 905, no.2, L34 (2020). Ananda:2006af K. N. Ananda, C. Clarkson and D. Wands, Phys. Rev. D 75, 123518 (2007). Baumann:2007zm D. Baumann, P. J. Steinhardt, K. Takahashi and K. Ichiki, Phys. Rev. D 76, 084019 (2007). Bugaev:2009zh E. Bugaev and P. Klimai, Phys. Rev. D 81, 023517 (2010). Assadullahi:2009nf H. Assadullahi and D. Wands, Phys. Rev. D 79, 083511 (2009). Alabidi:2012ex L. Alabidi, K. Kohri, M. Sasaki and Y. 
Sendouda, JCAP 09, 017 (2012). Cai:2018dig R. g. Cai, S. Pi and M. Sasaki, Phys. Rev. Lett. 122, no.20, 201101 (2019). Pi:2020otn S. Pi and M. Sasaki, JCAP 09, 037 (2020). Balaji:2022dbi S. Balaji, G. Domenech and J. Silk, JCAP 09, 016 (2022). Talebian:2022cwk A. Talebian, S. A. Hosseini Mansoori and H. Firouzjahi, Astrophys. J. 948, no.1, 48 (2023) Domenech:2021ztg G. Domènech, Universe 7, no.11, 398 (2021). Carr:2016drx B. Carr, F. Kuhnel and M. Sandstad, Phys. Rev. D 94, no.8, 083504 (2016). Carr:2020xqk B. Carr and F. Kuhnel, Ann. Rev. Nucl. Part. Sci. 70, 355-394 (2020). Sasaki:2018dmp M. Sasaki, T. Suyama, T. Tanaka and S. Yokoyama, Class. Quant. Grav. 35, no.6, 063001 (2018). Ozsoy:2023ryl O. Özsoy and G. Tasinato, [arXiv:2301.03600 [astro-ph.CO]]. Byrnes:2021jka C. T. Byrnes and P. S. Cole, [arXiv:2112.05716 [astro-ph.CO]]. Ivanov:1994pa P. Ivanov, P. Naselsky and I. Novikov, Phys. Rev. D 50, 7173-7178 (1994). Garcia-Bellido:2017mdw J. Garcia-Bellido and E. Ruiz Morales, Phys. Dark Univ. 18, 47-54 (2017). Biagetti:2018pjj M. Biagetti, G. Franciolini, A. Kehagias and A. Riotto, JCAP 07, 032 (2018). Ragavendra:2020sop H. V. Ragavendra, P. Saha, L. Sriramkumar and J. Silk, Phys. Rev. D 103, no.8, 083510 (2021). Di:2017ndc H. Di and Y. Gong, JCAP 07, 007 (2018). Liu:2020oqe J. Liu, Z. K. Guo and R. G. Cai, Phys. Rev. D 101, no.8, 083535 (2020) Hooshangi:2022lao S. Hooshangi, A. Talebian, M. H. Namjoo and H. Firouzjahi, Phys. Rev. D 105, no.8, 083525 (2022) Hooshangi:2023kss S. Hooshangi, M. H. Namjoo and M. Noorbala, [arXiv:2305.19257 [astro-ph.CO]]. Ghoshal:2023wri A. Ghoshal, A. Moursy and Q. Shafi, [arXiv:2306.04002 [hep-ph]]. Kinney:2005vj W. H. Kinney, Phys. Rev. D 72, 023515 (2005) [gr-qc/0503017]. Morse:2018kda M. J. P. Morse and W. H. Kinney, Phys. Rev. D 97, no.12, 123519 (2018). Lin:2019fcz W. C. Lin, M. J. P. Morse and W. H. Kinney, JCAP 09, 063 (2019). Namjoo:2012aa M. H. Namjoo, H. Firouzjahi and M. Sasaki, Europhys. Lett. 101, 39001 (2013). Maldacena:2002vr J. M. Maldacena, JHEP 0305, 013 (2003) [astro-ph/0210603]. Creminelli:2004yq P. Creminelli and M. Zaldarriaga, JCAP 10, 006 (2004). Martin:2012pe J. Martin, H. Motohashi and T. Suyama, Phys. Rev. D 87, no.2, 023514 (2013). Chen:2013aj X. Chen, H. Firouzjahi, M. H. Namjoo and M. Sasaki, Europhys. Lett. 102, 59001 (2013). Chen:2013eea X. Chen, H. Firouzjahi, E. Komatsu, M. H. Namjoo and M. Sasaki, JCAP 1312, 039 (2013). Akhshik:2015rwa M. Akhshik, H. Firouzjahi and S. Jazayeri, JCAP 12, 027 (2015). Mooij:2015yka S. Mooij and G. A. Palma, JCAP 11, 025 (2015). Bravo:2017wyw R. Bravo, S. Mooij, G. A. Palma and B. Pradenas, JCAP 05, 024 (2018). Finelli:2017fml B. Finelli, G. Goon, E. Pajer and L. Santoni, Phys. Rev. D 97, no.6, 063531 (2018). Pi:2022ysn S. Pi and M. Sasaki, Phys. Rev. Lett. 131, no.1, 011002 (2023). Cai:2018dkf Y. F. Cai, X. Chen, M. H. Namjoo, M. Sasaki, D. G. Wang and Z. Wang, JCAP 05, 012 (2018). yokoyama J. Kristiano and J. Yokoyama, arXiv2211.03395hep-th. Kristiano:2023scm J. Kristiano and J. Yokoyama, [arXiv:2303.00341 [hep-th]]. Riotto:2023hoz A. Riotto, [arXiv:2301.00599 [astro-ph.CO]]. Riotto:2023gpm A. Riotto, [arXiv:2303.01727 [astro-ph.CO]]. Choudhury:2023jlt S. Choudhury, S. Panda and M. Sami, [arXiv:2302.05655 [astro-ph.CO]]. Choudhury:2023rks S. Choudhury, S. Panda and M. Sami, [arXiv:2303.06066 [astro-ph.CO]]. Firouzjahi:2023aum H. Firouzjahi, [arXiv:2303.12025 [astro-ph.CO]]. hr H. Firouzjahi and A. Riotto, arXiv2304.07801astro-ph.CO. Motohashi:2023syh H. Motohashi and Y. 
Tada, [arXiv:2303.16035 [astro-ph.CO]]. f G. Franciolini, A. Iovino, Junior., M. Taoso and A. Urbano, arXiv2305.03491astro-ph.CO. t G. Tasinato, arXiv2305.11568hep-th. Firouzjahi:2023btw H. Firouzjahi, [arXiv:2305.01527 [astro-ph.CO]]. Fumagalli:2023hpa J. Fumagalli, [arXiv:2305.19263 [astro-ph.CO]]. Vagnozzi:2023lwo S. Vagnozzi, [arXiv:2306.16912 [astro-ph.CO]]. Franciolini:2023pbf G. Franciolini, A. Iovino, Junior., V. Vaskonen and H. Veermae, [arXiv:2306.17149 [astro-ph.CO]]. Cai:2023dls Y. F. Cai, X. C. He, X. Ma, S. F. Yan and G. W. Yuan, [arXiv:2306.17822 [gr-qc]]. Inomata:2023zup K. Inomata, K. Kohri and T. Terada, [arXiv:2306.17834 [astro-ph.CO]]. Wang:2023ost S. Wang, Z. C. Zhao, J. P. Li and Q. H. Zhu, [arXiv:2307.00572 [astro-ph.CO]]. Liu:2023ymk L. Liu, Z. C. Chen and Q. G. Huang, [arXiv:2307.01102 [astro-ph.CO]]. Yi:2023mbm Z. Yi, Q. Gao, Y. Gong, Y. Wang and F. Zhang, [arXiv:2307.02467 [gr-qc]]. Figueroa:2023zhu D. G. Figueroa, M. Pieroni, A. Ricciardone and P. Simakachorn, [arXiv:2307.02399 [astro-ph.CO]]. Gu:2023mmd B. M. Gu, F. W. Shu and K. Yang, [arXiv:2307.00510 [astro-ph.CO]]. Ebadi:2023xhq R. Ebadi, S. Kumar, A. McCune, H. Tai and L. T. Wang, [arXiv:2307.01248 [astro-ph.CO]]. Madge:2023cak E. Madge, E. Morgante, C. P. Ibáñez, N. Ramberg and S. Schenk, [arXiv:2306.14856 [hep-ph]]. Cai:2021zsp Y. F. Cai, X. H. Ma, M. Sasaki, D. G. Wang and Z. Zhou, Phys. Lett. B 834, 137461 (2022). Cai:2022erk Y. F. Cai, X. H. Ma, M. Sasaki, D. G. Wang and Z. Zhou, JCAP 12, 034 (2022). Planck:2018vyg N. Aghanim et al. [Planck], Astron. Astrophys. 641, A6 (2020) [erratum: Astron. Astrophys. 652, C4 (2021)]. Planck:2018jri Y. Akrami et al. [Planck], Astron. Astrophys. 641, A10 (2020). Garcia-Bellido:2016dkw J. Garcia-Bellido, M. Peloso and C. Unal, JCAP 12, 031 (2016). Musco:2020jjb I. Musco, V. De Luca, G. Franciolini and A. Riotto, Phys. Rev. D 103, no.6, 063538 (2021) Byrnes:2018txb C. T. Byrnes, P. S. Cole and S. P. Patil, JCAP 06, 028 (2019), [arXiv:1811.11158 [astro-ph.CO]]. Cole:2022xqc P. S. Cole, A. D. Gow, C. T. Byrnes and S. P. Patil, [arXiv:2204.07573 [astro-ph.CO]]. Carrilho:2019oqg P. Carrilho, K. A. Malik and D. J. Mulryne, Phys. Rev. D 100, no.10, 103529 (2019), [arXiv:1907.05237 [astro-ph.CO]]. Ozsoy:2021pws O. Özsoy and G. Tasinato, Phys. Rev. D 105, no.2, 023524 (2022). Pi:2022zxs S. Pi and J. Wang, JCAP 06, 018 (2023). Young:2013oia S. Young and C. T. Byrnes, JCAP 08, 052 (2013). Green:2020jor A. M. Green and B. J. Kavanagh, J. Phys. G 48, no.4, 043001 (2021) Kavanagh Kavanagh, B. J. 2019, bradkav/PBHbounds: Release version, 1.0, Zenodo, doi: 10.5281/zenodo.3538999. Carr:2020gox B. Carr, K. Kohri, Y. Sendouda and J. Yokoyama, Rept. Prog. Phys. 84, no.11, 116902 (2021). Kohri:2018awv K. Kohri and T. Terada, Phys. Rev. D 97, no.12, 123532 (2018).
http://arxiv.org/abs/2307.00751v1
20230703045655
Population Age Group Sensitivity for COVID-19 Infections with Deep Learning
[ "Md Khairul Islam", "Tyler Valentine", "Royal Wang", "Levi Davis", "Matt Manner", "Judy Fox" ]
cs.LG
[ "cs.LG", "cs.AI", "q-bio.PE" ]
Population Age Group Sensitivity for COVID-19 Infections with Deep Learning Md Khairul Islam Computer Science Department University of Virginia Charlottesville, USA mi3se@virginia.edu Tyler Valentine School of Data Science University of Virginia Charlottesville, USA xje4cy@virginia.edu Royal Wang School of Data Science University of Virginia Charlottesville, USA rjw8ng@virginia.edu Levi Davis School of Data Science University of Virginia Charlottesville, USA ljd3frf@virginia.edu Matt Manner School of Data Science University of Virginia Charlottesville, USA xkv3na@virginia.edu Judy Fox Computer Science Department School of Data Science University of Virginia Charlottesville, USA cwk9mp@virginia.edu August 1, 2023 The COVID-19 pandemic has created unprecedented challenges for governments and healthcare systems worldwide, highlighting the critical importance of understanding the factors that contribute to virus transmission. This study aimed to identify the most influential age groups in COVID-19 infection rates at the US county level using the Modified Morris Method and deep learning for time series. Our approach involved training the state-of-the-art time-series model Temporal Fusion Transformer on different age groups as a static feature and the population vaccination status as the dynamic feature. We analyzed the impact of those age groups on COVID-19 infection rates by perturbing individual input features and ranked them based on their Morris sensitivity scores, which quantify their contribution to COVID-19 transmission rates. The findings are verified using ground truth data from the CDC and US Census, which provide the true infection rates for each age group. The results suggest that young adults were the most influential age group in COVID-19 transmission at the county level between March 1, 2020, and November 27, 2021. These results can inform public health policies and interventions, such as targeted vaccination strategies, to better control the spread of the virus. Our approach demonstrates the utility of feature sensitivity analysis in identifying critical factors contributing to COVID-19 transmission and can be applied in other public health domains. Sensitivity Analysis, Morris Method, Deep Learning for Time Series, Temporal Fusion Transformer, County-level COVID-19 § INTRODUCTION The COVID-19 pandemic has posed significant challenges for governments and healthcare systems worldwide, highlighting the need for effective measures to manage and control the virus's spread. Understanding the factors that contribute to disease transmission is crucial for developing targeted public health policies and interventions. Age is a critical factor in COVID-19 transmission as shown by previous studies <cit.> <cit.>.
Many prior works have studied forecasting COVID-19 infection and mortality rates using different statistical <cit.>, auto-regressive machine learning <cit.><cit.>, and deep learning <cit.><cit.><cit.> models. The focus of these models is to make better predictions for the epidemic spread which would help take better mitigation steps. Interpreting how these models make these predictions can provide an understanding of the importance of these different input factors and their interactions <cit.>. Temporal Fusion Transformer (TFT) <cit.> is a state-of-the-art forecasting model that has been widely used to do multivariate multi-horizon forecasting. TFT is particularly useful for modeling complex, non-linear relationships between input features and target variables, making it an ideal tool for analyzing COVID-19 transmission patterns. The Morris method <cit.> has been widely used to perform sensitivity analysis of different models <cit.><cit.>. By calculating the output change with reference to input perturbation, the Morris method provides a simple approach to rank input factors by their sensitivity. By combining TFT with feature sensitivity analysis, we can gain a better understanding of the factors that contribute to COVID-19 transmission and develop more effective strategies to control the spread of the virus. In this work, we collected the population by age subgroups data (Fig. <ref>) for each of the 3,142 US counties<cit.>, along with the daily vaccination rate of the population <cit.> and COVID-19 case report from March 1, 2020, to Dec 27, 2021 <cit.>. While there are prior studies that observe COVID-19 trends for different age groups <cit.>, we trained a TFT model on the dataset separately on each age subgroup as the static covariate and then calculated the sensitivity of those models using a modified Morris method <cit.>. We then ranked the age groups based on their Morris sensitivity scores to identify the most influential factors contributing to COVID-19 transmission. Finally, we evaluated our age group sensitivity rank with the actual COVID-19 cases reported for those age groups in the same time period <cit.>. Our study aims to provide insights into the age-related factors that contribute to COVID-19 transmission and inform targeted interventions to control the spread of the virus. The rest of the paper is organized as follows: Section <ref> gives the background on the problem state and theoretical foundation of our work. Section <ref> presents the experimental setup and model results. Section <ref> features the age group ranking by Morris sensitivity score and comparison with ground truth. Section <ref> summarizes the related works. Section <ref> contains the concluding remarks and potential future works. § BACKGROUND §.§ Sensitivity Analysis using the Morris Method Morris method is a reliable and efficient sensitivity analysis method that defines the sensitivity of a model input as the ratio of the change in an output variable to the change in an input feature. Given a model f(.), and a set of k input features X (x_1, …, x_k), the Morris sensitivity <cit.> of a model input feature x_i can be defined as follows: Sensitivity(X, i)=f(x_1,…,x_i+Δ,…,x_k) - f(𝐗)/Δ where Δ is the small change to the input feature x_i. Since the original Morris method was for static variables, we expand it for our predictions for the high dimensional spatial and temporal datasets. 
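As a point of reference, the elementary effect defined above reduces to a few lines of code. The sketch below is illustrative only: the callable model, the feature array, and the toy usage at the end are hypothetical stand-ins, not the TFT model used in this study.

import numpy as np

def morris_sensitivity(model, x, i, delta=1e-3):
    # Elementary effect of input feature i: perturb x_i by delta and
    # measure the change in the scalar model output, divided by delta.
    x_perturbed = x.copy()
    x_perturbed[i] += delta
    return (model(x_perturbed) - model(x)) / delta

# Toy usage with a made-up model f(x) = x_0^2 + 3*x_1; the sensitivity
# with respect to x_1 is approximately 3.
toy_model = lambda x: x[0] ** 2 + 3.0 * x[1]
print(morris_sensitivity(toy_model, np.array([1.0, 2.0]), i=1))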
Algorithm <ref> shows the implementation of the modified Morris method for our study, where we normalized the output value change by the number of input days, counties, and delta Δ. Which we call the normalized Morris index μ̂*̂. We further scaled this index using the standard deviation (σ) of the input feature x_i, which we call the scaled Morris index (μ̂*̂×σ). This scaling takes the underlying distribution of the feature x_i when ranking the features by sensitivity. In the rest of the study, by Morris score, we refer to the scaled Morris index. Comment/* */ § PROBLEM STATEMENT In this paper, we use deep learning to study feature sensitivity for model predictions of COVID-19 infection at the county level. The workflow for the design of our experiments is illustrated in Fig. <ref>. §.§ Model Training for Predictions using TFT The Temporal Fusion Transformer (TFT) <cit.> is a novel, interpretable, attention-based deep learning model for multi-horizon forecasting. Its architecture is carefully designed to handle static (e.g. percentage of age group in a county population), past observed (e.g. infection rate of cases, daily vaccination coverage), and known future (e.g. day of the week) inputs. Its architecture is specially modified for four main uses. 1) To learn both local and global time-varying relationships at different scales. 2) To filter out input noises using Variable Selection Network (VSN). 3) To incorporate static metadata into the dynamic features for temporal forecasting. 4) To efficiently use different parts of the network through a gating mechanism. On a wide range of real-world tasks, TFT achieves state-of-the-art performance. Hence we use the model for our own study here. The prediction model f(.) is denoted as bellow: ŷ_i(t,τ) = f(τ, y_i,t-k:t,𝐳_i,t-k:t,𝐱_i,t-k:t+τ,𝐬_i)   where ŷ_i(t,τ) is the number of cases predicted at time t ∈ [0, T_i] for county i. τ is the forecast horizon, 15 days in our case. T_i is the length of the time series period, which for our case is the same for each county. We use the previous 13 days (lag window k) of data to forecast the next 15 days. § EXPERIMENTAL SETUP §.§ Computational Resources We implement our TFT model with PyTorch <cit.>. Then we conducted a performance evaluation of the model training on HPC clusters including the GPU nodes in Table <ref>. Each training epoch takes an average of 50 minutes on a GPU node with at least 32GB of RAM. Each Morris runs with a trained model, and with additional feature analysis that takes around 35 minutes per epoch. §.§ Input Data and Features The data set consists of daily COVID-19 cases for over 3,142 US counties with eight static features and one dynamic feature. The static features signify the percentage of all people in one of the eight age subgroups (0-4, 5-17, 18-29, 30-39, 40-49, 50-64, 65-74, and 75 and older) for all counties. The dynamic feature indicates the percentage of people who are fully vaccinated for all counties. The data set was split into training, validation, and testing sets. The training set includes the dates March 1, 2020, through November 27, 2021, the validation set includes the dates November 28, 2021, through December 12, 2021, and the testing set includes the dates December 13, 2021, through December 27, 2021. §.§ Model Training and Prediction A separate TFT model was trained for each age group, for a total of eight models. 
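To make the normalized and scaled Morris index described above concrete before turning to the per-age-group setup, the following sketch shows one possible implementation. All names and array shapes are hypothetical; the predict callable stands in for a trained TFT model producing county-by-day case forecasts, and the exact reduction over the output change may differ from the procedure in Algorithm <ref>.

import numpy as np

def scaled_morris_index(predict, age_share, vaccination, deltas):
    # predict(age_share, vaccination) -> array of shape (n_counties, n_days)
    # age_share: static feature, one value per county
    # deltas: non-zero perturbations, e.g. -0.01 to 0.01 in steps of 0.001
    base = predict(age_share, vaccination)
    n_counties, n_days = base.shape
    scaled = []
    for delta in deltas:
        perturbed = predict(age_share + delta, vaccination)
        change = np.abs(perturbed - base).sum()
        mu_star = change / (n_days * n_counties * abs(delta))  # normalized Morris index
        scaled.append(mu_star * age_share.std())               # scaled by the feature's std
    return np.array(scaled)  # one scaled Morris index per delta value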
Since all static variables are presented together as context vectors, the separation ensures the identification of a specific subgroup's contribution without the interference of other correlated static variables. In addition to the static age feature, all TFT models include a dynamic vaccination status feature. An additional variable SinWeekly is also included, which represents weekly trends for changes in the number of new cases. The target variable for each model was the daily number of cases for each county. The daily predictions for each county were determined for the training, validation, and testing time periods. These predictions were compared to the observed values to obtain RMSE values. The RMSE values are shown in Table <ref> for each of the eight models. The "Model" column indicates the age subgroup used as the static variable for that model. As expected, all models performed well when predicting the test data set and have comparable loss values. §.§ Sensitivity Analysis Next, our Modified Morris Method was applied to each model. This involved perturbing the age feature in each model by a range of positive and negative delta values. The chosen values ranged from -0.01 to 0.01 with steps of 0.001, disregarding the delta value equal to zero. The Morris indices for each of the delta values are shown below (Fig. <ref>). Positive and negative values of delta indicate an increase and decrease in the age feature, respectively. Investigating both an increase and decrease in these features allowed for an understanding of how the Morris indices change between different delta values. Our results show that the Morris values are approximately stable for each age group across a range of values. Another observation about these results is the magnitude of the Morris values. With the delta and Morris values shown at the same scale of 10^-4, the Morris values remain about two to three orders of magnitude smaller than the delta values. This means that the changes caused by the perturbation of the static variable result in small changes in the predictions of the models, indicating a non-linear relationship between the static input and the output. Despite the relatively small values, there is a clear distinction between which age groups have higher Morris indices on average. This separation allowed us to rank the age groups based on their sensitivity to the models. Before this was done, it was necessary to obtain a ground truth ranking to use for verification of our ranking derived from the Morris method. Data was obtained from the CDC for the total number of COVID-19 cases reported for each age group within the time period of the training set, as shown previously (Fig. <ref>). Data from the US Census were also obtained to provide an estimate of the total population of each age group, with the estimate made for April 1, 2020. Having both the cases and the estimated population allowed us to calculate the true ranking. This was done by calculating the percentage of people in each age group who were infected. For example, 10,018,923 cases were reported for the age group 18-29, with an estimated population of 54,992,661 for that age group. This results in an infection rate of 18.2 percent. The infection rates for all age groups are shown below (Fig. <ref>). Based on the infection rate within each age subgroup, a ranking was made ranging from 1, the age group with the lowest infection rate, to 8, the age group with the highest infection rate. 
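The ground-truth ranking described above is simple arithmetic. In the sketch below, only the 18-29 figures quoted in the text are real; the function signatures and dictionary layout are hypothetical.

def infection_rates(cases_by_group, population_by_group):
    # Percentage of each age group infected during the training period.
    return {g: 100.0 * cases_by_group[g] / population_by_group[g]
            for g in cases_by_group}

def rank_by_rate(rates):
    # Rank 1 = lowest infection rate, 8 = highest, as in the text.
    ordered = sorted(rates, key=rates.get)
    return {group: rank for rank, group in enumerate(ordered, start=1)}

# The example quoted in the text: 10,018,923 cases out of an estimated
# population of 54,992,661 for ages 18-29 gives roughly 18.2 percent.
rates = infection_rates({"18-29": 10_018_923}, {"18-29": 54_992_661})
print(round(rates["18-29"], 1))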
A ranking of the sensitivity of the model to each age feature was also determined by first ranking the relative Morris values for each age group for a given delta to obtain a ranking from 1 (lowest) to 8 (highest) value. After ranking each subgroup for all 20 delta values, the average ranking was determined. Both of these rankings, along with the difference between them, are shown in Fig. <ref>. Since the average Morris ranking for the age groups 30-39 and 75+ were found to be identical, we chose to break the tie arbitrarily. We noticed that the 5-17 age group ranking of 2 was significantly different from the ground truth ranking of 7. However, our results indicate that the Morris ranking closely matches the ground truth ranking for the other seven age groups, as most ranks are identical or have a difference of 1 to 3. This is interpreted as indicating that our predictions of which age groups contribute most to the total COVID-19 case numbers during the time period of the data are approximately equal to the true values verified by CDC and US Census data. § RELATED WORK Numerous works have been done on COVID-19 forecasting using deep learning and other AI or statistical models <cit.>. Understanding the feature importance and interaction of the input factors is crucial to the adaptation of control measures based on the changing dynamics of the pandemic. Prior work involved <cit.> simulation of the spread of COVID-19 using a fractal-fractional model and used sensitivity analysis to assess the impact of model parameters such as the transmission rate, recovery rate, and vaccination rate on the outcomes of the model. Other work <cit.> investigated the impact of key model parameters on the outcomes of a Susceptible-Carrier-Exposed-Infected-Recovered (SCEIR) model predicting COVID-19 infection. Similarly, additional work <cit.> analyzed the demographic patterns of COVID-19 cases during the recent resurgence of the pandemic in the United States. The data, consisting of 948 counties, studies the age distribution of cases and their relationship with changes in COVID-19 incidence. § CONCLUSION AND FUTURE WORK Typically, deep learning lacks sufficient interpretability to understand the decisions a model makes to improve predictive performance. However, our work demonstrates the successful application of the Morris method. By analyzing model sensitivity to changes in the age input feature, we explained the impact of COVID-19 on various age group populations in the US between the time period of March 1, 2020 to November 27, 2021. Small perturbations in the static features showed that our models were most sensitive to the 18-29, 30-39, and 40-49 age groups, indicating that these groups contributed most to the total number of COVID-19 cases. These results were then verified by comparison to the ground truth from CDC and Census data. After demonstrating the effectiveness of the Morris method applied to deep learning models for understanding the impact of COVID-19 on different age groups, we intend for future experiments to center on exploring the temporal component of these predictions. CDC data show that the ranking of COVID-19 infection rates by age group changes over time. By training new TFT models for a variety of time periods, we can better understand how the impact of COVID-19 varies throughout the pandemic and can further strengthen the reliability of our methods. 
§ ACKNOWLEDGMENT This work is supported by NSF grant CCF-1918626 Expeditions: Collaborative Research: Global Pervasive Computational Epidemiology, and NSF Grant 1835631 for CINES: A Scalable Cyberinfrastructure for Sustained Innovation in Network Engineering and Science.
http://arxiv.org/abs/2307.05515v1
20230706140436
Chaos, Cosmic Ray Anisotropy, and the Heliosphere
[ "Vanessa López-Barquero", "Paolo Desiati" ]
astro-ph.HE
[ "astro-ph.HE", "physics.plasm-ph" ]
§ INTRODUCTION Galactic cosmic rays are detected on Earth with anisotropy in their arrival directions <cit.>. Multiple experiments have measured this anisotropy; however, a complete explanation remains elusive. This work investigates the effects of chaotic trajectories of trapped cosmic rays on this arrival anisotropy. Specifically, we study how a coherent structure, such as the heliosphere, can influence particle propagation and, ultimately, the directions in which particles are detected. § MAGNETIC FIELD CONFIGURATION To simulate the trapping conditions in a magnetic field, we developed a model consisting of an axially symmetric magnetic bottle with time-dependent magnetic perturbations. The temporal perturbations to the magnetic field have the following form: B_y = (Δ B/B) sin(k_p x - ω_p t) exp[-(1/2)(z/σ_p)^2], where k_p = 2π/L_p and ω_p = 2π v_p/L_p. The model's specific parameters, such as the magnetic field strength and velocities, are comparable to the heliospheric values. The magnetic bottle is based on the mirroring effect that particles experience when they bounce off the heliosphere's flanks. The time perturbations mimic the effects of magnetic field reversals caused by the 11-year solar cycles. We created three different systems to test how the different components can affect the particles' trajectories. The first is the unperturbed system, which consists of just the magnetic bottle; with this system, we assess the effects of mirroring and trapping. The second is the weak perturbation system, which corresponds to the magnetic bottle plus the time perturbation to the magnetic field, with the parameters chosen as those expected for the heliosphere: Δ B/B = 0.1 and v_p = 2 AU/year. Finally, the third, the strong perturbation system, is created by increasing the values of the weak perturbation in order to amplify the effects so that they are easy to distinguish in the analysis: Δ B/B = 0.5 and v_p = 20 AU/year. § CHAOS AND THE FINITE-TIME LYAPUNOV EXPONENT We develop a new method for characterizing chaos in the trapping conditions of this magnetic structure. With this method, we can assess the chaotic effects on particle trajectories and how they are affected by being temporarily trapped in coherent structures. We base our method on the Finite-Time Lyapunov Exponent (FTLE): λ(t, Δ t) = (1/Δ t) ln[d(t+Δ t)/d(t)], where Δ t is the time interval for the calculation and d is the distance in phase space between two particles at a specific time. As a result, the FTLE quantifies the level of chaos based on the divergence rate of the trajectories. This quantity is useful because it can adapt to temporarily trapped conditions caused by interactions with coherent magnetic structures. § RESULTS AND DISCUSSION Once particles are propagated in this system, according to the method described in <cit.>, we find a relation between the Finite-Time Lyapunov Exponent (FTLE), which represents the chaotic behavior of the particles, and the time required to escape the system. A power law provides this correlation: λ_FTLE = β t_esc^{-1.04 ± 0.03}. One remarkable property of these systems is that the same power law persists even when perturbations are introduced.
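As an illustration of how these two quantities are obtained in practice, the sketch below evaluates the FTLE from a pair of neighboring trajectories and fits the power law in log-log space. The array shapes and function names are assumptions for illustration, not the integration code used for this work.

import numpy as np

def ftle(traj_a, traj_b, n_steps, step_size):
    # traj_a, traj_b: arrays of shape (n_total, 6) holding the phase-space
    # coordinates (position and velocity) of two neighboring particles.
    d = np.linalg.norm(traj_a - traj_b, axis=1)   # phase-space separation d(t)
    delta_t = n_steps * step_size
    return np.log(d[n_steps] / d[0]) / delta_t

def fit_power_law(ftle_values, escape_times):
    # Fit lambda_FTLE = beta * t_esc**alpha with a linear fit in log-log space;
    # the exponent reported in the text is alpha = -1.04 +/- 0.03.
    alpha, log_beta = np.polyfit(np.log(escape_times), np.log(ftle_values), 1)
    return alpha, np.exp(log_beta)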
This feature points to the idea that a system can be characterized by a particular law, which supports our aim of understanding the role of trapping and chaotic behavior in cosmic ray propagation. The Finite-Time Lyapunov exponents and escape times are plotted in arrival distribution maps to derive information that will help us interpret the observations (see Figure <ref>). These maps constitute a visual representation of the various chaotic behaviors and how they are spatially distributed. For example, we can see areas of the unperturbed map where particles are more chaotic (denoted in redder colors in the bottom panel) and less chaotic areas near the stability region (darker blue). As the time perturbation strengthens (left to right in the maps), we can see how the maps change accordingly: the more chaotic particles start to populate larger regions of the map. In the case of the heliosphere, we can expect maps similar to those in the middle panel: the particles in the weak perturbation exhibit a mix of chaos levels, so there will be regions with more significant variations driven by the more chaotic particles. These maps show areas with various chaotic behaviors, which may affect the observations; for instance, they might be a source of temporal variability, since the areas where the particles are very chaotic will change differently than the ones where non-chaotic particles reside.
[Aartsen et al.(2016)]2016ApJ...826..220A Aartsen, M. G., Abraham, K., Ackermann, M., et al. 2016, ApJ, 826, 220. doi:10.3847/0004-637X/826/2/220
[Abeysekara et al.(2019)]Abeysekara_2019 Abeysekara, A. U., Alfaro, R., Alvarez, C., et al. 2019, ApJ, 871, 96. doi:10.3847/1538-4357/aaf5cc
[López-Barquero & Desiati(2019)]barquero_2019 López-Barquero, V. & Desiati, P. 2019, 36th International Cosmic Ray Conference (ICRC2019), 36, 1109
http://arxiv.org/abs/2307.01862v1
20230704182031
Emergent Resource Exchange and Tolerated Theft Behavior using Multi-Agent Reinforcement Learning
[ "Jack Garbus", "Jordan Pollack" ]
cs.MA
[ "cs.MA", "cs.NE" ]
For decades, the evolution of cooperation has piqued the interest of numerous academic disciplines such as game theory, economics, biology, and computer science. In this work, we demonstrate the emergence of a novel and effective resource exchange protocol formed by dropping and picking up resources in a foraging environment. This form of cooperation is made possible by the introduction of a campfire, which adds an extended period of congregation and downtime for agents to explore otherwise unlikely interactions. We find that the agents learn to avoid getting cheated by their exchange partners, but not always by a third party. We also observe the emergence of behavior analogous to tolerated theft, despite the lack of any punishment, combat, or larceny mechanism in the environment. § INTRODUCTION While many consider human intelligence a core factor in the success of our species, perhaps an even more fundamental component is our ability to cooperate with one another <cit.>. It is no surprise then that the emergence of cooperative behavior has long been an area of interdisciplinary study as researchers seek to model societal dynamics <cit.> and design artificial intelligence to work with humans <cit.>. <cit.> demonstrated the power of cooperation through ecological simulations where a diverse population of strategies faced each other in the Iterated Prisoner's Dilemma (IPD) game. Cooperative strategies that punished instances of defection eventually dominated the population, while defecting strategies proved to be unstable in the long run. Other experiments on the noisy version of the IPD performed mutation and selection on the strategy population, providing glimpses of open-ended evolution of cooperation <cit.>. The resulting theory from these experiments has been broadly applied to understand the emergence of cooperation between organisms such as bacteria as well as between enemy troops during wartime <cit.>. Multi-Agent Reinforcement Learning (MARL) has been widely applied as a tool for studying cooperative behavior between agents which optimize their behavior to maximize a reward provided by an environment <cit.>. In this paradigm, multiple agents are placed in an environment designed such that cooperation between individuals is of greater benefit than competition. Many environments have been developed in pursuit of a variety of emergent social behaviors such as turn-taking, teaching, resource sharing, reciprocity, and language <cit.>. The success of multi-agent reinforcement learning has led some researchers to outline a path towards artificial general intelligence heavily oriented around reproducing human social intelligence in silico <cit.>.
If we view the success of humanity as a story of cooperation rather than isolated intelligence <cit.>, then this research path is quite promising. Despite the potential of social intelligence, research on artificial societies has remained limited. While agent-based models allow researchers to study changes to social behavior, they often employ fixed, human-designed approximations of real-world dynamics and behaviors <cit.>. Large language models have recently enabled alternative, flexible forms of social modeling such as social simulacra, which simulate community interactions between different personas provided by prompts <cit.>. These large models are trained on vast amounts of human-generated data, which plays a large role in the behaviors of the social model, thus limiting the study of how intelligent or optimal behaviors may first arise. Systems that optimize agent behavior using reinforcement learning are typically configured to study a specific set of emergent social behaviors and often utilize additional modifications to the algorithm or environment's mechanics. Some additions include the training of additional classifiers alongside each policy <cit.>, auxiliary losses <cit.>, or behavior-specific mechanisms to enable desired behavior such as trading <cit.>. If we are to realize the vision of emergent artificial societies, we would like to discover simple, general-purpose environmental mechanisms that can induce different social behaviors instead of adding an additional layer of complexity per behavior. To this end, we demonstrate the emergence of resource exchange without programming any form of exchange protocol into the algorithm or environment. While agents have successfully leveraged human-designed exchange systems and discovered how to barter, trade stocks, and devise tax policies (learning to game the subsequent tax system as well) <cit.>, no prior work to our knowledge has demonstrated emergent resource exchange by picking up and placing resources in an embodied setting, despite the apparent simplicity of the behavior. We call our environment The Trading Game, as agents discover the ability to trade resources by picking up and placing down foraged resources. Food sharing is a prevalent phenomenon among various species, and its evolution has been a topic of interest for researchers in evolutionary biology and anthropology <cit.>. <cit.> reviews several hypotheses that have been proposed to explain resource-sharing from an evolutionary perspective in non-human species, including kin selection, tolerated theft, reciprocal altruism, and cooperative acquisition. The kin selection hypothesis suggests that sharing resources with close genetic relatives enhances the fitness of shared genes. The tolerated theft hypothesis proposes that individuals with ample resources allow those without to steal from them, as defending the resource is more costly than the resource's value. The reciprocal altruism hypothesis, previously applied to the iterated prisoner's dilemma, suggests that reciprocating cooperative behavior can emerge and be an evolutionarily stable strategy, and the cooperative acquisition hypothesis proposes that social carnivores hunt together to increase their chances of catching prey. In the Trading Game described below, where agents cannot fight, hunt, or reproduce, we see that reciprocal altruism is the primary driver stabilizing the emergence of exchange. 
In addition, we observe the emergence of tolerated theft, despite the lack of any method for stealing resources or engaging in combat. Our contributions are as follows: * We introduce the Trading Game, a simple foraging environment which applies pressure for agents to congregate around a campfire at night. * We demonstrate that agents in our environment can learn to exchange resources using drop/pickup actions during a period of extended congregation, whereas agents from prior research environments could not. * We demonstrate the emergence of a behavior akin to tolerated theft between agents in our environment, despite the lack of any combat, punishment, or larceny system. * Through an ablation study, we demonstrate how reciprocated resource exchange fails to emerge without sufficient pressure to congregate for extended periods of time. § BACKGROUND In this section, we briefly review a few of the most relevant environments used to study emergent cooperation and embodied exchange. §.§ Cleanup The Cleanup environment poses a complex social dilemma for multi-agent research, and has been used to study how systems for reputation, social influence, inequity aversion, and public sanctioning shape emergent cooperation <cit.>. In Cleanup, agents must simultaneously clear pollution from a river and collect apples which spawn proportionally to the amount of pollution cleared. In order to prevent free-riders from collecting apples without clearing pollution, agents are equipped with a “punish beam”, which they can fire at other agents to fine them with a certain amount of negative reward. This punishment mechanism allows agents to punish free-riders who do not contribute to the cleaning effort. When the punishment beam is combined with one of the additional systems mentioned above, agents learn to punish free-riders to achieve socially beneficial outcomes and overcome the social dilemma. §.§ Fruit Market In the Fruit Market environment described in <cit.>, agents can move around, produce and consume apples or bananas, and broadcast one of nineteen human-designed trade offers to nearby agents that are automatically executed by the environment once an agent accepts. Agents are designated as Apple Farmers (producing more apples at a time) or Banana Farmers (producing more bananas at a time). When Apple Farmers receive a larger reward for consuming bananas than apples, they are incentivized to produce apples and trade them for bananas (and vice versa for Banana Farmers). Agents eventually converge on trading as the optimal strategy, and a behavior akin to bartering soon emerges, where agents broadcast the offer that most benefits them, lowering their prices when they encounter other agents who do the same with counter-offers. Agents adjust their offers until an agreement is reached, after which the transaction is executed by the environment. When agents were given the ability to drop and pick up items as an alternative to hand-engineered offers, agents learned to avoid freely giving away resources, thus necessitating the implementation of a trading mechanism for exchange to occur. Further experimentation which varied the relative supply and demand of resources showed corresponding changes in price akin to what one might expect from real-world supply/demand curves. 
It was also shown that under certain maps with apples and bananas on opposite sides, a “merchant”-like behavior can emerge, where agents trade for apples on the apple-saturated side and then sell them at a higher price on the banana-saturated side. §.§ AI Economist Aside from Fruit Market, there are other environments that directly implement different mechanisms for exchange. Of note is the AI economist detailed in <cit.>, which tasks agents with gathering wood and stone to construct houses. Agents are set with different skill multipliers such that some agents are able to gather more resources while others make more coins building houses. Additionally, the environment provides a global market to which agents can submit buy and sell orders from anywhere on the map, which are automatically executed once a valid transaction exists. This environment adds an additional 44 actions for trading alone, representing the combination of 11 different price levels, whether the order is a buy or sell, and whether the resource is wood or stone. While adding substantial environmental complexity, the market enables agents to specialize in gathering or building houses and trade for the materials or coins they need. §.§ NeuralMMO The exchange system of NeuralMMO described in <cit.> also introduces a global market with which agents can buy and sell items using gold. Alongside the exchange system, a profession system is introduced which allows agents to produce items needed by other professions. As a result, each profession must purchase items from other professions in order to progress and raise their level. Agents can sell items by posting them to the market along with a price, and agents can simply select an existing item on the marketplace to purchase it at the specified cost. The NeuralMMO exchange system introduces 161 unique item types, making it one of the most complex exchange systems in a multi-agent research environment. method § METHOD multi-agent-reinforcement-learning §.§ Multi-Agent Reinforcement Learning Our environment is represented as a partially observable Markov decision process described by the tuple < S, O, A, P, R, γ, N>. The observation function O maps each state s ∈ S to the local observation o_t^i of the environment at time step t for agent i. The shared action space between N agents is denoted by A. Each agent is controlled by a policy π(θ) that is parameterized by a weight vector θ. In this setting, agents act one at a time rather than simultaneously. The probability transition function P(s'|s_t^i, a_t^i) is represented by P, where a_t^i is the action taken by agent i at time step t and and s' is the new environment state after the action has been taken. Notably, s'=s_t^i+1 if other agents still need to take their turn for the current time step, otherwise s'=s_t+1^1 The reward function is denoted by R(s_t, a_t), and the discount parameter is represented by γ. The objective of each agent i is to maximize its discounted accumulated reward over an episode of T time steps, which is represented by 𝔼_a_t^i,s_t^i[Σ_t=0^T γ^t R(s_t^i, a_t^i)]. environment §.§ Environment We formulate our environment as a two-dimensional grid world with two types of resources, fruits and greens, denoted by red and green squares respectively. Five fruits spawn randomly in each of the two patches in the left corners of the grid and five greens spawn randomly in each of the two patches in the right corners. Agents are able to move up, down, left, right, or perform no action. 
Additionally, agents can pick all fruit or greens on their cell, as well as place 0.5 fruits or 0.5 greens on their cell, resulting in nine total actions. Agents automatically consume 0.1 units of whatever resources they possess on every time step. If an agent can only consume one type of food, they do not fulfill all their nutritional needs and receive 0.1 reward. If an agent consumes both a fruit and a green in a single step, then their needs are fulfilled and the agent receives a reward of 1. Thus, in order to maximize reward, agents should consume fruits and greens together. Unlike <cit.>, all agents share the same reward function and are equally proficient at resource collection. Agents also receive an additional “collection” reward equal to the number of newly spawned resources they collected that time step. Resources placed by other agents do not contribute to the collection reward, as it would be possible generate large amounts of reward by repeatedly placing and picking up resources. Unlike many environments used to study the emergence of cooperation in multi-agent systems, our environment has a day/night cycle. The light level l for each cell on the grid starts at 0, which is the start of a new day, and then increases to 1 before oscillating between -1 and 1 for the rest of the episode in small steps. To incentivize agents to avoid dark regions of the map, we introduce a light penalty p, which is set to 10 by default. Agents lose lp reward when on cells with a negative light level, which scales the punishment by the darkness of the cell. With the default value of p=10, agents can lose up to 10 units of reward in a single step during the middle of the night. In order for agents to survive the darkness without receiving continual punishment, there is a small “campfire” region in the center of the grid which produces a faint light in a 5x5 area. The internal 3x3 area around it holds a light level greater than zero throughout the entire episode, and the outer ring holds a light level just under zero. The addition of the day/night cycle adds an element of periodicity to our setting; instead of wandering around the entire episode collecting food, we can expect agent behavior to alternate between foraging during the daytime and joining up around the campfire at night, treating the campfire like a “home base” described in <cit.>. Agents begin each episode in one of the four corners of the campfire's 3x3 area, spawning in the same corner each episode. At the start of each day, all remaining resources on the grid are removed, and two patches of new fruits and greens spawn randomly around the four corners of the map. Days and nights each last 24 time steps, which is enough time for agents to acquire all the resources in a single patch during the day. For an agent to acquire both types of resources on their own, they must stay out during a portion of the night. All experiments shown last 180 time steps which simulates four days of foraging, terminating at midnight on the fourth day. For the purposes of our analysis, we only report exchange statistics from the first three nights, as agents do not always finish trading when the episode terminates in the middle of the fourth night. observations §.§ Observations Agents observe a local 7x7 area of the grid centered around themselves. For our setting with 4 agents, each cell contains 18 channels of data, yielding observation tensors of shape (7, 7, 18). A description of each channel can be found in Table <ref>. 
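For reference, putting the consumption reward, the collection bonus, and the light penalty described above together, the per-step reward can be sketched as follows. The function signature, the sinusoidal light curve, and the threshold used to decide whether a resource was consumed are assumptions for illustration, not code from the released environment.

import math

def light_level(t, phase_steps=24):
    # Day/night sketch: 0 at the start of a day, rising to 1 at midday and
    # falling to -1 at midnight; days and nights each last 24 steps. The
    # exact functional form of the oscillation is an assumption here.
    return math.sin(math.pi * t / phase_steps)

def step_reward(fruit, greens, cell_light, newly_spawned_collected, p=10.0):
    # fruit/greens: units the agent carries (0.1 of each is consumed per step,
    # handled elsewhere); cell_light: light level of the agent's cell in [-1, 1].
    reward = 0.0
    if fruit >= 0.1 and greens >= 0.1:
        reward += 1.0                     # both nutritional needs met this step
    elif fruit >= 0.1 or greens >= 0.1:
        reward += 0.1                     # only one food type consumed
    reward += newly_spawned_collected     # collection reward for newly spawned food
    if cell_light < 0:
        reward += cell_light * p          # lose up to p reward at midnight
    return reward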
As agents act sequentially, each observation contains the state of the environment after the previous agent has acted. §.§ Algorithm We train a deep neural network as our policy using the Proximal Policy Optimization algorithm (PPO) <cit.>, leveraging the implementation provided in the Ray Python library <cit.>. While there exist many algorithms tailored for multi-agent settings <cit.>, vanilla PPO has been shown to be effective on many multi-agent problems <cit.>. All of our agents utilize separate policies which share no parameters. Each policy contains a vision network, memory layer, and a controller network. The output of the vision network is fed into the memory layer, and the output of the LSTM is sent to the controller, which produces action probabilities. The architecture details can be found in Table <ref>, and the full list of hyperparameters is available in the appendix. results § RESULTS As seen in Figure <ref>, the behavior throughout training can be viewed as two periods of equilibrium with a period of fluctuation in between, reminiscent of the “punctuated equilibria” model of evolution <cit.>. The first equilibrium is reached when agents learn to forage resources during the day and return to the campfire at night, reached at around 10,000 iterations of training and persisting for around 45,000 more iterations. During this period, agents do not exchange resources and instead wait out the night only consuming just the resources they foraged, occasionally dropping a resource or moving around due to the stochastic sampling of actions during training. AI algorithms tend to be employed on games that typically do not contain long periods of time devoted to doing nothing. Indeed, a game where half the time is spent doing nothing would likely not be very interesting for many; however, when agents are not given an easy way to further optimize an objective they can find novel, sometimes unexpected methods to do so given time <cit.>. We observe the rise of such novel methods during the first transition around 55,000 training iterations in, when agents discover the ability to exchange resources around the fire. Exchange starts off in very small quantities, with agents dropping just half a unit of food in total over three nights, despite possessing an abundance of their respective resources each night. It takes thousands more iterations before the number of exchanges over three nights stabilizes around nine fruit and nine greens per episode between four agents. By this time, agents split up into two pairs who trade with each other, as seen in Figure <ref>. This averages out to 3 resources exchanged per night, 1.5 resources per pair of agents. We denote this trading configuration as 2-PAIR. Each agent goes to one resource patch per day, and each patch contains 5 units which all get picked. In the ideal case, each pair of agents would trade 2.5 units; however, agents need to walk from the patch back to the campfire, consuming anywhere between 0.5-1.0 units of food in the process. This implies that that 1.5 resources per night pair of agents is fairly close to the practical optimal quantity. An example exchange can be found in Figure <ref>. Agents form into pairs and stand a cell away from their partner. Each agent drops a resource before moving over to their partner's cell to collect the resource dropped by their partner. Notably, the pairs are not adjacent to each other, which may limit the degree to which different pairs can interfere with each other. 
emergence-of-tolerated-theft §.§ Emergence of Tolerated Theft Across four trials, agents will sort themselves up into pairs to exchange resources, as reported above. In a fifth trial however, a different form of behavior emerges which is so interesting that it deserves its own analysis. The 2-PAIR trading configuration emerges when each agent finds its own food patch. It is possible, however, that agents do not divide themselves evenly across all patches and instead get caught in a local minima where two agents visit the same patch and split the resources. In this particular trial, the purple agent collects resources from both a fruit and a green patch (accepting a minor light penalty in the evening in order to do so), the blue and orange agent share the other fruit patch, and the pink agent forages the other greens patch alone. As a result, the purple has significantly less incentive to trade, leaving the other three agents to divide foraged resources among each other. The resulting behavior is fascinating; as seen in Figure <ref>, the pink agent will drop some of their excess greens, which lures the orange agent away from the blue agent. The orange agent then leaves the blue and pink agents alone to trade and collects their offering of greens. This behavior is present on every single evaluation run on the final checkpoint of this trial. In order to test whether this behavior is a coincidence or intentional, we take control of the pink agent to prevent it from dropping the bait, and observe the change in behavior of the orange agent which can be seen in Figure <ref>. The orange agent responds by interfering with the blue and pink agents when they attempt to trade, akin to a defender in basketball. We control the pink agent during the attempted exchanges as well, since the pink agent will only attempt to trade with blue after it drops an offering for orange. Interestingly, there is no need for the pink agent to wait for a return offer after it has left a resource to orange. This allows the orange agent to collect its resource after the pink agent has moved three cells away to trade with blue, enabling food sharing over a distance of a few cells. Out of all the theories for the emergence of food sharing, this behavior is the closest to tolerated theft, where agents freely give resources because the cost of defending those resources is greater than the cost of simply giving them away <cit.>. In this case, the cost of defending a resource is replaced with the missed opportunity to exchange. The Trading Game supports no pre-programmed method of punishment like the punish beam described in <cit.>, yet it appears that the emergence of exchange brings forth both new forms of conflict (interfering with exchanges) and new ways to deal with troublemakers (conceding resources). resilience-to-defection-and-exploitation-of-suckers §.§ Resilience to Defection In all of the experiments run, agents never exchange resources on the same grid cell; rather, they consistently stand at least one cell away from a partner, drop a resource, and if the partner reciprocates, both agents exchange spots. We call this behavior DROP-SWAP. Notably, DROP-SWAP is fairly complicated behavior, and we would not expect it to arise if the only goal between agents was exchange. A much simpler exchange strategy would involve agents meeting on the same grid cell, then dropping and picking without moving during the exchange. So why do partners always keep their distance for each exchange? 
§.§.§ Intra-Pair Defection We hypothesized that DROP-SWAP emerges as a mechanism to defend against defection, since a cooperative agent will have enough time to reclaim their offer before a defecting partner can grab it. We denote this kind of defection intra-pair defection, since the cheating behavior takes place within the pair. To test this, we overwrote agent actions during evaluation to observe how agents respond when a partner reclaims their offer or attempts to grab an offer without reciprocating. There are two broad ways an agent can defect, since agents do not perform actions at the same time. The first defection occurs when an agent places down a resource, but then picks it back up once their partner drops before moving to collect the partner's resource. The second defecting strategy occurs when the partner drops a resource and the agent attempts to grab it without dropping anything in return. Across each of the four trials, we overwrote each agent to perform each type of defection during an exchange with their partner. For every single pair across all trials, the partner defended their resource and rescinded their offer, moving back to their original cell if needed to rescind. Despite our best efforts, we were unable to trick any partner into giving up their food for free, supporting the hypothesis that the DROP-SWAP form of exchange arises to prevent agents from getting cheated out of their offered resource. The ubiquity of defense against defection is rather surprising, considering the relative payoffs of getting cheated versus performing an exchange. If an agent is cheated out of half a resource without receiving anything in return, they only lose 0.5 units of total reward. Furthermore, at the time step of the defection, the loss is heavily discounted as it only impacts future reward when agents run out of food five steps sooner. In contrast, performing an exchange yields around an additional 4.5 units of total reward (5 from the exchange - 0.5 from running out of a resource sooner). Reward from exchange is also discounted far less, as agents immediately start receiving the reward for consuming two resources at once. Given these relative payoffs, we might not expect agents to learn to so strongly defend their resource unless it heavily motivated a partner to provide an offer in response. §.§.§ Inter-Pair Defection Additionally, we noticed that across all trials, pairs never exchange adjacent to other pairs. Instead, pairs consistently trade with at least one empty cell in between them. We hypothesized that this might have emerged as a method to defend against Inter-pair defection, where a pair will refuse to exchange if an outside agent is able to grab the dropped resources. To test this, we made each agent interfere with the opposite pair and measured whether it was possible to intercept resources during the exchange. We managed to intercept at least one resource between seven pairs out of eight across four trials. This does not imply, however, that stealing a resource was always easy. we observed various levels of defense: Some pairs would completely ignore outsiders, while others would refuse to initiate an exchange if there was an agent adjacent to the pair. In some cases, we could steal a resource from a pair right when they began their first exchange, but after getting cheated, they would refuse to exchange any further. Notably, a pair might display more defense against one outsider, but completely ignore a different outsider and allow them to steal during the exchange. 
We speculate that this variability in inter-pair defense may be a result of the difficulty in discovering exchange. If two pairs discover exchange at roughly the same time, attempting to intercept resources from another pair will quickly prove less beneficial than exchanging with a partner. On the other hand, when one pair discovers exchange far before the other, there is heavy incentive for outsiders to get better at intercepting resources from the cooperators until the non-cooperative pair can discover exchange themselves. Under this hypothesis, the difference in time between the two pairs individually discovering exchange may be an indicator to the degree of defense the first pair may have. There is no clear metric to quantify the degree to which agents defend from inter-pair defection, so validating this theory is not straightforward. sec:intercoop §.§ Inter-pair Cooperation When exchanging resources in trials without tolerated theft, agents form into pairs and exchange resources with their partner. As each agent is perceived in a separate observation channel, we sought to understand the degree to which this exchange behavior is tied to a particular partner. We measured this by freezing the normal partners from entering the campfire, and seeing if agents from opposite pairs would exchange resources around the campfire if their usual partner was not available. The exchange counts over three nights can be found in Figure <ref>. As in the defection experiment, the results were varied: many inter-pair pairings exchanged no resources, some pairings exchanged the near optimal amount, and other pairings would only exchange half a resource. These results imply that, despite extensive periods of time to explore interactions with other agents around the campfire, the degree to which agents explore the full range of interactions with others varies greatly. different-quantities §.§ Different quantities In Figure <ref>, we see that there is approximately a 1:1 exchange ratio between fruits and greens, which logically follows from the 1:1 distribution of resources. This prompts the following question: How do the rates of exchange change in settings with asymmetrical distributions of resources? We ran two groups of five trials to answer this question. Tolerated theft emerges once in the first group and twice in the second group, so we focus on three trials from each group that do not demonstrate tolerated theft, as tolerated theft skews base exchange rate away from 1:1. For one group, we set the fruit and greens patches to contain six fruits and four greens respectively. The other group had six fruits in their fruit patches and four greens in their greens patches. The number of exchanges for each resource can be found in Figure <ref>, where we can observe interesting dynamics play out over over the course of training. Initially, agents are willing to give great amounts of the abundant resource, often for nothing in return. In this setting, there is enough of the abundant resource for an agent to never run out. When an agent has so much food that it will never go a step without at least one resource to consume, dropping some of the extra resources yields the same reward as hoarding resources that will never be eaten. Thus, agents with excess will occasionally drop their food, since there is no reason not to. To get agents to drop more than a spare resource here and there, some extra reward for doing so is required. 
Agents with the scarcer resource begin to offer food which provides that reward, and so DROP-SWAP emerges. Eventually, the exchange rate approximates 1:1., with the abundant resource exchanged in slightly greater quantities than the scarcer resource. The approach towards a 1:1 ratio is surprising. Given the 3:2 distribution of the resources, we might intuitively expect a 3:2 exchange rate to stabilize, as was the case in <cit.>, but agents appear to stabilize on the DROP-SWAP strategy in a 1:1 ratio, despite initially giving resources away for free and possessing the ability to perform a 3:2 exchange in a single transaction. Nevertheless, after all 1:1 exchanges take place for a night, agents with the abundant resource consume whatever leftovers they have, and the cycle repeats the next day. For now, we do not attempt to provide an explanation for this phenomenon, and instead leave in-depth study of this dynamic to future work. ablation-studies §.§ Lowered Light Penalty In the Trading Game, the night penalty p serves multiple purposes: 1) to pressure agents into congregating for extended periods of time and 2) to prevent agents from foraging from both sides of the map in the same day. With the rather high value p=10, we've seen that agents will forage a single resource, then exchange for the other food type around the campfire; with a lower p, we might expect agents to stay out during parts of the night to forage both resources for themselves, as the weaker night penalty no longer outweighs the extra benefits from staying out to collect the other resource. Given this, we can view p as a parameter which controls the duration and degree to which agents will congregate around the fire. To study the impact of the light penalty, we run two sets of five trials, one where p=0 and the other where p=3, and compare them to the five p=10 trials. The rewards and exchange counts for these trials can be found in Figure <ref>. We notice that agents in the lower-penalty setting exchange less resources, receive less total reward, and accept a larger cumulative light penalty than agents from the p=10 run. Despite possessing the capability for resource exchange, agents in the lower-penalty setting converge to the aforementioned minima and seem to be unable to escape this minima once it is reached. This emphasizes just how unlikely resource exchange is to dominate as a strategy in the Trading Game: not only must exchange be more rewarding than foraging both resources alone, but foraging both resources needs to be more costly than only foraging one. When there is no light penalty and thus no reason to gather around the campfire, we observe slightly more exchange than in the low-penalty case. The exchanges seen here are a result of agents dropping excess resources that they will never consume, as described in Section <ref>. While there are no asymmetries in the quantities of resources, it is possible for agents to forage both patches of a resource and collect more of one food type than they will eat. These agents will occasionally drop their excess as there is no benefit to hoarding, allowing a one-way exchange to occur. This is made clear by the lack of reciprocation occurring during these trials. In trials with no light penalty, we observed numerous opportunities where two nearby agents holding different resources could perform DROP-SWAP, but opted not to. 
This cements the importance of the campfire: In Section <ref>, we showed when one agent is willing to give away resources for free, reciprocal exchange can emerge around the campfire to further promote exchange. Without a light penalty (and thus no extended downtime around the campfire) some agents give away resources for free, but nobody reciprocates, and thus DROP-SWAP fails to emerge. discussion § DISCUSSION This work takes inspiration from the concepts of autocurricula from reinforcement learning <cit.> and coevolutionary arms races <cit.>, where agents create problems for others to solve, which then leads to the creation of clever solutions and even harder problems. In our domain, these dynamics produce the emergence of exchange, the emergence of tolerated theft-like behavior, and competitive-cooperative dynamics such as defending exchange offers from defectors. Notably, unlike all exchange systems from previous work, the complexity of this environment arises not from agents learning to use complex game mechanics with many actions, but rather by learning complex ways to use just nine. Given the actions of picking up and placing down resources, agents exhibit complex forms of cooperation and competition in ways we have not intended or expected. If the environment had some human-designed trading system which allowed no room for interference or defection, these dynamics likely would have not arisen. The campfire does not explicitly facilitate exchange, but instead acts as a stepping stone <cit.> for its emergence by shaping the conditions under which interactions occur. In the work presented, it takes up to four days of wall clock time for agents to begin to reliably exchange resources on a domain where approximately 50% of the training steps are spent in a 3x3 area with four other agents. One need not imagine how long it might take for these behaviors to emerge in the p=0 setting, where the chances of interacting with another agent are significantly lowered–ten days of real-world training time proved insufficient. Concepts like the “punish beam” from <cit.> or the broadcast radius of trade offers <cit.> can also increase the chance of interacting with an other agent, mitigating this effect, but the campfire is unique in that it emphasizes repeated interactions with the same partner during training under similar conditions. Furthermore, the campfire does not require all actions to have a large range or area of effect, allowing exchange to emerge without requiring drop/pickup actions to apply over a distance, as seen with the emergence of tolerated theft. While the Trading Game leverages a heavy light penalty and distant resource piles to prevent agents from initially foraging both resources, the main purpose of the campfire is to promote periods of extended, idle congregation. There are a plethora of environmental modifications which would disincentivize dual foraging, such as adding difficult terrain between resources for which agents incur a penalty to cross or making agents proficient at foraging different resource types. The goal of this work is not then to present the Trading Game as some benchmark task that should be accepted “as-is”, but instead as a set of ideas that can be incorporated and modified in other environments to study new forms of cooperation. 
sec:ipd §.§ Informal Reduction to Iterated Prisoner's Dilemma Informally, the emergence of resource exchange during the night can be reasoned about in a similar fashion to the emergence of cooperation in the iterated prisoner's dilemma (IPD). The night reduces the practical dimensionality of the environment, pushing agents to the small 3x3 campfire to perform whatever actions they please for the duration of the night. This setting mirrors the IPD, where agents play multiple rounds with the same player; if agents could roam around as they pleased during the entire episode, random interactions between agents would be rare and unlikely to be iterated if agents move apart after the interaction. This may explain why resource exchange did not occur in the Fruit Market environment: A form of cooperation like modern-day markets may not need repeated interactions with the same partner, but for a behavior like resource exchange to arise, repeated interactions with previously seen individuals might be a necessary stepping stone <cit.>. We expect selfish agents to first learn to avoid dropping resources around the fire, and pick up any resources that are dropped by others. This simple short-term method of maximizing reward is analogous to defection in the prisoner's dilemma. This is indeed the first equilibrium we observe, and it persists for many thousands of training iterations. Due to the stochastic sampling of actions from our policies during training, agents still occasionally drop their resources around the fire, enabling agents to accidentally gift each other resources for extra reward. Agents which drop resources without receiving anything in return will avoid dropping resources in the future, which implies that if agent a wants agent b to drop a resource more often, it needs to make dropping a resource provide a higher reward for b than not dropping a resource, which it can do by offering or not offering a resource in return. This can be thought of as analogous to Tit-For-Tat, where defection and cooperation are both reciprocated such that cooperative strategies receive a higher reward on average than defecting strategies. Like the noisy iterated prisoner's dilemma described in <cit.>, there are no clever agents which intentionally shape the behavior of other agents. Rather, new forms of cooperation (like exchange) lead to new forms of defection (cheating/interfering), which leads to more complex forms of cooperation (tolerated theft/defending offers). Defection is not a viable long-term strategy since agents alter their policies to respond to defection, so when this defection-only minima is escaped, it must be from a cooperative strategy which is resistant to exploitation. This is likely why we always observe the rise of DROP-SWAP trading across all trials which converge to 2-PAIR. limitations §.§ Limitations As seen in Section <ref>, while agents can reliably exchange with their partner of choice, the degree to which this behavior generalizes to other agents varies. Furthermore, unlike Fruit Market and the AI Economist, exchange in the Trading Game emerges when agents are unable to acquire both resources on their own without getting heavily penalized and are required to work together to maximize reward. This is not uncommon, however, as reference games which require cooperation to solve are used to study the emergence of natural language <cit.>. This domain is also sensitive to the quantity of resources. 
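To make the analogy above concrete, the following toy simulation (ours, not the paper's code, and entirely separate from the Trading Game) pits a noisy Tit-for-Tat strategy against unconditional defection using a standard prisoner's dilemma payoff matrix; the payoff values and the noise level are illustrative assumptions. It shows the qualitative point used in the argument: once actions are occasionally flipped by noise, reciprocating strategies earn more on average than mutual defection, which is why a defection-only equilibrium can eventually be escaped by a cooperative strategy that is resistant to exploitation.

```python
# A minimal, self-contained toy: a *noisy* iterated prisoner's dilemma, used only to
# illustrate the analogy drawn in this subsection. Payoffs and noise level are
# illustrative assumptions, not parameters of the Trading Game.
import random

PAYOFF = {  # (my_action, their_action) -> my payoff; C = cooperate (give), D = defect (keep)
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=200, noise=0.05, seed=0):
    """Average per-round payoff of each strategy; with probability `noise`
    an intended action is flipped (the 'accidental drop' in the analogy)."""
    rng = random.Random(seed)
    hist_a, hist_b = [], []  # each entry: (my_action, their_action)
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_a)
        b = strategy_b(hist_b)
        if rng.random() < noise:
            a = "D" if a == "C" else "C"
        if rng.random() < noise:
            b = "D" if b == "C" else "C"
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append((a, b))
        hist_b.append((b, a))
    return score_a / rounds, score_b / rounds

if __name__ == "__main__":
    print("TFT  vs TFT :", play(tit_for_tat, tit_for_tat))
    print("TFT  vs ALLD:", play(tit_for_tat, always_defect))
    print("ALLD vs ALLD:", play(always_defect, always_defect))
```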
If there are too few fruits, agents will consume them before they get a chance to trade; too many fruits, and agents can forage fruits one day and greens the following day, as there will be leftovers from the night before, thus reducing the benefit of mutual exchange. Furthermore, resources need to be spaced out: if fruits and greens spawned next to each other, there would be far less reason to specialize and exchange. These pitfalls could be alleviated by adding a limit to the amount of resources an agent can carry at once, or by adding incentive for agents to specialize in a particular resource as done in <cit.>, but for the purposes of this work we keep the environment as simple as possible. The compute required to reach exchange emergence is also nontrivial, requiring up to four days of wall-clock time on a Titan X before a pair may begin to reliably trade, and up to ten days in experimental settings with eight agents. This bottle-necked iteration speed and made it difficult to predict when an experimental configuration could yield emergent exchange or not, as performance does not improve significantly during the first equilibrium. Environments supporting greater social complexity with larger numbers of agents would likely take significantly longer. futurework §.§ Future Work Given the results and limitations described in the Trading Game, there remains ample room for future work. Within the Trading Game, there still exists many interesting environmental properties to study, such as the potential effects of adding a carrying capacity for resources or relative food quantities. Generalizing this exchange behavior to all seen and even unseen agents is also of great interest. The simplicity of the Trading Game also makes it ripe for extension; for example, the addition of a local communication system could allow agents to negotiate around the campfire. Deep neuroevolutionary approaches have been successfully applied as competitive alternatives to single and multi-agent reinforcement learning problems <cit.> and show potential as another algorithm for the study of emergent cooperation. Concepts analogous to the campfire could be applied to related domains like Fruit Market and even to entirely different domains such as reference games for research on emergent communication <cit.>. conclusion § CONCLUSION In this work, we demonstrated how novel behaviors can arise by reshaping the environmental conditions under which agents interact. We showed that a simple foraging environment with periodic gathering around a campfire can lead to emergent trading behavior, and discussed how the emergence of trading in our setting is analogous to the evolution of cooperation in the Iterated Prisoner's Dilemma. By directly interacting with the agents, we found that agents could prevent themselves from being cheated by their usual partner, but they exhibited varying levels of defense against being cheated by a third party. Additionally, we observed that agents could learn to interfere with exchanges as an indirect form of punishment, allowing an emergent behavior similar to tolerated theft to emerge. As congregation pressure is reduced, these behaviors arise in much weaker forms, if at all, demonstrating the importance of extended congregation on the emergence of embodied cooperation. appendix § APPENDIX We use RLlib provided in Ray 1.13.0 <cit.> and PyTorch 1.11.0 in our experiments. All parameters not mentioned in Table <ref> can be assumed to be the default for these versions.
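Since the appendix only names the framework versions and defers hyperparameters to Table <ref>, the sketch below shows what a minimal multi-agent PPO setup of that vintage (Ray/RLlib 1.13 with the PyTorch backend) could look like. The environment class `TradingGameEnv`, the agent identifiers, and every numeric value are placeholders we introduce for illustration; none of them are taken from the paper.

```python
# Hypothetical training harness for a Trading-Game-like environment with RLlib 1.13 / PyTorch.
# `TradingGameEnv`, the agent ids, and all hyperparameter values below are illustrative
# placeholders; the paper's actual settings live in its hyperparameter table.
import ray
from ray import tune
from ray.rllib.agents.ppo import PPOTrainer
from ray.rllib.policy.policy import PolicySpec

from trading_game import TradingGameEnv  # hypothetical MultiAgentEnv implementation


def env_creator(env_config):
    return TradingGameEnv(env_config)


if __name__ == "__main__":
    ray.init()
    tune.register_env("trading_game", env_creator)

    config = {
        "env": "trading_game",
        "env_config": {"num_agents": 4, "night_penalty": 10},  # p from the ablation
        "framework": "torch",
        "num_workers": 4,
        "multiagent": {
            # One independent policy per agent id ("agent_0", ..., "agent_3" assumed here).
            "policies": {f"agent_{i}": PolicySpec() for i in range(4)},
            "policy_mapping_fn": lambda agent_id, episode=None, worker=None, **kw: agent_id,
        },
    }

    trainer = PPOTrainer(config=config)
    for it in range(1000):
        result = trainer.train()
        if it % 10 == 0:
            print(it, result["episode_reward_mean"])
```

Independent policies (one `PolicySpec` per agent) are one common way to train such populations; whether the paper shares parameters across agents is not specified here.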
http://arxiv.org/abs/2307.00578v1
20230702141552
TinySiamese Network for Biometric Analysis
[ "Islem Jarraya", "Tarek M. Hamdani", "Habib Chabchoub", "Adel M. Alimi" ]
cs.CV
[ "cs.CV" ]
Islem Jarraya (corresponding author), REGIM-Lab.: REsearch Groups in Intelligent Machines, University of Sfax, National Engineering School of Sfax (ENIS), BP 1173, Sfax 3038, Tunisia; Tarek M. Hamdani, REGIM-Lab. and Higher Institute of Computer Science Mahdia (ISIMa), University of Monastir, Tunisia; Habib Chabchoub, College of Business, Al Ain University of Science and Technology, Abu Dhabi, United Arab Emirates; Adel M. Alimi, REGIM-Lab. and Department of Electrical and Electronic Engineering Science, Faculty of Engineering and the Built Environment, University of Johannesburg, South Africa. Biometric recognition is the process of verifying or classifying human characteristics in images or videos. It is a complex task that requires machine learning algorithms, including convolutional neural networks (CNNs) and Siamese networks. Besides, there are several limitations to consider when using these algorithms for image verification and classification tasks. In fact, training may be computationally intensive, requiring specialized hardware and significant computational resources to train and deploy. Moreover, it necessitates a large amount of labeled data, which can be time-consuming and costly to obtain. The main advantage of the proposed TinySiamese compared to the standard Siamese is that it does not require the whole CNN for training. In fact, using a pre-trained CNN as a feature extractor and the TinySiamese to learn the extracted features gave almost the same performance and efficiency as the standard Siamese for biometric verification. In this way, the TinySiamese solves the problems of memory and computational time with a small number of layers which did not exceed 7. It can be run on low-power machines which possess a normal GPU and cannot allocate a large RAM space. Using TinySiamese with only 8 GB of memory, the matching time decreased by 76.78% on the B2F (Biometric images of Fingerprints and Faces), FVC2000, FVC2002 and FVC2004 datasets, while the training time for 10 epochs went down by approximately 93.14% on the B2F, FVC2002, THDD-part1 and CASIA-B datasets. The accuracy of the fingerprint, gait (NM-angle 180°) and face verification tasks was better than the accuracy of a standard Siamese by 0.87%, 20.24% and 3.85% respectively. TinySiamese achieved comparable accuracy with related works for the fingerprint and gait classification tasks. Keywords: Biometrics, Classification, Siamese, TinySiamese, Verification. § INTRODUCTION Deep learning has been successfully used to achieve good performances in a variety of applications such as recognition applications including image, activity and voice recognition. However, the performance of most of these algorithms depends heavily on how much information or data is available when making predictions. In fact, for modern deep learning tasks, neural networks are generally effective, but they require training on a huge number of samples. Nevertheless, a large number of samples is not available in some problems. For certain tasks, including fingerprint or signature verification, data is not abundantly available. The lack of data can be addressed using procedures such as data augmentation, which has its own drawbacks, among which are biasing the direction of learning or generating an excessive amount of redundant data.
As a result, systems that incorporate these procedures tend to excel in similar cases, but sometimes fail to offer robust solutions. Verification is a particularly interesting task when only a few examples of each class are available for training from scratch before making a prediction. This is called one-shot learning <cit.>, which attempts to solve such problems and construct a trained model using a small number of samples. One-shot algorithms can use few training samples for each class and generalize a model to unfamiliar categories without the need for extensive retraining. Therefore, the objective is to obtain new samples for each epoch and get high recognition accuracy with limited data. For one-shot learning, approaches such as deep Siamese networks <cit.> were proposed in several works <cit.>. Deep Siamese networks solve this type of task using only a few images to get better predictions. The capacity to learn from little data has made Siamese networks popular in recent years. These networks are composed of two twin networks for image similarity computation. In fact, the Siamese network integrates CNN architectures such as ResNet, VGG and MobileNet. It tries to learn the similarity between given images using distance measures. Despite the effectiveness of Siamese networks, they possess some weaknesses. Since they require pairs to learn, these networks need more training memory and time than normal CNN networks. In fact, they are slower than the normal learning types. In addition, Siamese networks always require a lot of running time for prediction. This work aims to address these high-level problems without requiring expensive learning, which may be impossible due to limited data, low-power machines or the demand for fast prediction in terms of execution time. The present work relies on a deep learning framework which uses a small number of layers with relevant features to avoid learning failure. The proposed model is able to learn from little data while keeping the cost of the learning algorithm low in terms of execution time, running memory, and number of layers. In fact, this paper introduces the TinySiamese network, which has a small number of layers and allows a small model to successfully learn from few examples with a short learning time. TinySiamese is very useful on low-power machines with a normal GPU which cannot allocate a large RAM space. The most important contributions of our work are the following: * Proposing the TinySiamese for verification using a small number of layers and without the need for a huge dataset for training. * Proving that the proposed TinySiamese could get a competitive performance with the standard Siamese network. In the experiments, the efficiency of TinySiamese was demonstrated for verification and classification with the shortest matching and training times. * Showing through experimental studies that TinySiamese achieved a competitive performance with related works for classification. Here is how the remaining part of the paper is organized: related works are presented in Section 2. The standard Siamese and the proposed TinySiamese are described in Section 3 and Section 4 respectively. Section 5 is devoted to the presentation of the used datasets, whereas Section 6 focuses on the experimental study. Section 7 covers the discussion. The conclusion of this paper is drawn in Section 8. § RELATED WORKS Research into one-shot learning algorithms is somewhat mature.
It has received attention from the machine learning community. In fact, there are different works that present the Siamese as a network for one-shot learning. This section briefly reviews the related works on Siamese-based biometric recognition. The Siamese neural network is not very different from the traditional Convolutional Neural Networks. It takes images as input and encodes their features into a set of layers. The difference lies in the output processing. In fact, the Siamese networks take 2D images as input. The similarity between the two inputs is calculated using a distance between their feature vectors. Different distances formula were used on several works as Euclidean and contrastive losses <cit.>. The standard Siamese was performed in different works which focus on fingerprint recognition such as Lin et al. <cit.>, Attrich et al. <cit.>, Zhu et al. <cit.>, Alrashidi et al. <cit.>, Li et al. <cit.>, Zihao et al. <cit.> and Zhengfang et al. <cit.> and on face recognition, including Song et al. <cit.>, Soleymani et al. <cit.>, Pei et al. <cit.>, Li et al.<cit.> and Wang et al. <cit.>. Siamese networks have also been used in other studies related to gait recognition, among which Zhang et al. <cit.>, George et al. <cit.>and Sheng et al. <cit.>. Lin et al. <cit.> developed a framework of multi-Siamese using fingerprint minutiae, respective ridge map and specific region of ridge map. Each single CNN of the Siamese network was composed of four convolutional layers and three max pooling layers. This network was used to calculate similarity scores of two fingerprint images using a distance-aware loss function. Attrich et al. <cit.> proposed a Siamese network to develop a contactless fingerprint recognition system. The backbone of the proposed network was composed of three layers of convolutional and batch normalization and one average pooling layer. Attrich et al. employed the contrastive loss for the calculation of the similarity score. Zhu et al. <cit.> presented three different Siamese networks for fingerprint recognition. The three Siamese networks were based on three CNN networks: their own CNN, AlexNet and VGG. Li et al. <cit.> created an embedded image processing algorithm based on a Siamese neural network for fingerprint image recognition from any source. The Siamese backbone was constructed of thirteen convolutional layers. The contrastive loss function was used in the distance layer. Alrashidi et al. <cit.> proposed to use a Siamese network for cross-sensor fingerprint matching and enhancement. The proposed Siamese network was composed by four Convolutional (Conv) and Rectified linear unit activation (ReLU) layers among which there are three Softmax layers. Alrashidi et al. computed the channel-wise difference between the feature maps F1 and F2 of two fingerprint images in order to determine whether or not the corresponding features in each fingerprint are similar. Zihao et al. <cit.> proposed a novel fingerprint recognition method based on a Siamese neural network which was made up of thirteen convolutional layers for each sub-network. They used the contrastive loss function for the distance layer of the Siamese network. Zhengfang et al. <cit.> presented a novel Siamese Rectangular Convolutional Neural Network (SRCNN) for fingerprint identification. Each sub-network of the SRCNN is composed of five convolutional and pooling layers and each pooling layer is positioned after a convolutional layer. At the end of the SRCNN, the euclidean distance was used for the distance layer. 
Song et al. <cit.> designed a Siamese Neural Network based on Local Binary Pattern and Frequency Feature Perception for face recognition. They deployed the VGG16 network structure for feature extraction and the squared euclidean distance in the Siamese network. Pei et al. <cit.> used a Siamese network for face spoofing detection. The proposed deep Siamese network was trained by joint Bayesian with contrastive and softmax losses. George et al. <cit.> presented a deep Siamese convolutional neural network for gait recognition across viewpoints. The Siamese network contained three blocks which were each composed of 1 convolutional layer, 1 batch Normalization, 1 ReLU layer and 1 max pooling layer. George et al. used the contrastive loss function to calculate the similarity between two inputs. Sheng et al. <cit.> proposed a novel skeleton-based model named Siamese Siamese Denoising Autoencoder network for gait recognition. They proposed to learn a discriminative embedding vector using a Siamese architecture based on a Deep Autoencoder (DAE) network. They employed the contrastive loss function for the distance layer of the Siamese network. Table <ref> presents the related works which used the Siamese network. Despite the effectiveness of the Siamese Network, it has always presented a long matching time. Thus, the present challenge is how to keep the efficiency of the Siamese with less complexity and execution time. In this context, researches in the two works <cit.> and <cit.> proposed a light siamese architectures for objects tracking with small parameters and calculations. However, inputs have to pass over the set of CNN layers of Siamese for training and matching. In fact, like all standard Siamese networks, using other descriptors or only feature vectors as inputs is hopeless. The proposed TinySiamese resolved this problem and achieved the above goals with a recognition accuracy very close to the accuracy of the standard Siamese. § SIAMESE NETWORK A Siamese neural network is a category of neural network architectures. It contains two or more identical sub-networks. In fact, Siamese sub-networks have the same structure with the same parameters and weights. These networks are used to find the input similarity by calculating the distance between its feature vectors. In the present work, a siamese network was inspired by the well-known work of Koch et al. <cit.> and used for verification. The suggested architecture was based on a CNN sub-network, distance layer, linear layer and a sigmoid layer as shown in Fig. <ref> . The CNN sub-network of the Siamese model contained four blocs of convolutional, ReLU and max Pooling layers (Fig. <ref>) ending with two fully-connected layers: ReLU and Sigmoid layers (Fig. <ref>). This network was trained from scratch with the binary cross-entropy loss function and Adam optimization algorithm. The loss function is presented in the equation (Eq. <ref>) of the next section where p_i is the predicted probability distribution, y_i is the actual probability distribution and N is the number of samples. Despite the effectiveness of Siamese Network, it has some weaknesses. Since Siamese networks requires pairs to learn, it necessitates a large training memory and much running time for training and testing. Therefore, it requires power machines for priced learning. In addition, Siamese Network is limited by the CNN model using particularly image data. In other words, the freedom to use another model architecture or descriptor is generally limited. 
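As a reference point for the next section, the standard Siamese just described (four convolution/ReLU/max-pooling blocks per twin, two fully-connected layers ending in a Sigmoid, a distance layer, and a final linear + Sigmoid head trained with binary cross-entropy and Adam) can be sketched roughly as follows. The input resolution, channel widths, embedding size, and the absolute-difference distance are our own illustrative assumptions; only the overall block structure follows the text.

```python
# Rough sketch of the standard Siamese baseline described in this section.
# Channel widths, the grayscale 105x105 input, and the embedding size are assumptions.
import torch
import torch.nn as nn

class ConvBranch(nn.Module):
    def __init__(self, embedding_dim=4096):
        super().__init__()
        def block(c_in, c_out):
            # One of the four conv / ReLU / max-pooling blocks of the sub-network.
            return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                                 nn.ReLU(inplace=True),
                                 nn.MaxPool2d(2))
        self.features = nn.Sequential(block(1, 64), block(64, 128),
                                      block(128, 128), block(128, 256))
        # Two fully-connected layers: ReLU then Sigmoid, as in the description above.
        self.fc = nn.Sequential(nn.LazyLinear(embedding_dim), nn.ReLU(inplace=True),
                                nn.Linear(embedding_dim, embedding_dim), nn.Sigmoid())

    def forward(self, x):
        return self.fc(torch.flatten(self.features(x), 1))

class StandardSiamese(nn.Module):
    def __init__(self, embedding_dim=4096):
        super().__init__()
        self.branch = ConvBranch(embedding_dim)   # weights shared by both inputs
        self.head = nn.Sequential(nn.Linear(embedding_dim, 1), nn.Sigmoid())

    def forward(self, x1, x2):
        e1, e2 = self.branch(x1), self.branch(x2)
        # Distance layer (absolute difference assumed here) + linear + sigmoid score.
        return self.head(torch.abs(e1 - e2))
```

Training this baseline with `nn.BCELoss()` and `torch.optim.Adam` follows the setup stated in the text; the exact convolution sizes of the original model are not reproduced here.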
§ TINYSIAMESE NETWORK The proposed TinySiamese neural network takes on a new look and a new way of working which is different from the standard Siamese network. The difference first appears in the input processing of the network. Instead of having images as input, the input was the output feature vector of a pre-trained CNN model. In other words, all input images would be transformed into feature vectors using a feature extractor (such as a pre-trained CNN model) as illustrated in Fig. <ref>. Then, the Tiny-Siamese encoded the features in a small set of layers and finally calculated the distance between two encoded feature vectors and generated similarity score. Using this score, the model was trained from scratch with the Adam optimization algorithm and binary cross-entropy loss function. §.§ Architecture Unlike the standard Siamese, the input of the TinySiamese was the encoded image as a feature vector. The backbone layers first aimed to extract relevant features using a linear fully-connected layer and a ReLU layer and then amplify them using another linear fully-connected layer and Sigmoid layer. The output size of the first linear layer had the half size of the input (n, n/2) and was followed by a non-linear ReLU layer. The second linear layer took n/2 features in input and came back to the same first input size in output (n/2, n). This layer was followed by a non-linear Sigmoid layer. The outputs of the TinySiamese sub-networks were encoded into an n-dimensional vector using inputs of a size equal to n. Siamese networks are usually trained using a distance function to minimize the distance among matches and maximize the distance among mismatches. The Euclidean distance L2 was well performed by different works <cit.>. Because there were no convolutional layers to lead to a strong training in the TinySiamese network, it became necessary to improve the distance function for a better separation of the classes. Thus, the Euclidean distance was merged with the Hadamard product and was used in the distance layer. The concatenated distance vector had a size equal to 2n. The final linear layer had the same size as the distance vector 2n in input and had an output of one (2n, 1). This layer was followed by a non-linear Sigmoid layer to map the produced real value into a small range that could be interpreted as probability of the similarity. Fig. <ref> details the overall architecture of the TinySiamese network. This model was designed to have few layers to be run and trained quickly even on weak machines. The proposed network was scalable depending on the size of the input feature vector and the complexity of the task. Indeed, it was possible to add new layers and delete others before and after the distance layer. §.§.§ Distance Layer Most related works used Euclidean or contrastive distances. Since the distance layer was in the middle of the network, it was more useful to use the Euclidean distance in order to receive two feature vectors (from the pair input) and output a feature vector (the difference between the two inputs). It was thought that V1 was the vector resulting from the upper sub-network and V2 was the vector resulting from the lower sub-network of the TinySiamese. In order for the distance between two feature vectors to have been more robust and efficient, the distance layer generated the concatenation of the Euclidean distance between V1 and V2 and the Hadamard product of V1 and V2. 
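A minimal sketch of the TinySiamese head as we read the description above: a shared Linear(n, n/2)+ReLU then Linear(n/2, n)+Sigmoid encoder applied to each pre-extracted feature vector, a distance layer concatenating the squared difference with the Hadamard product into a 2n-dimensional vector, and a Linear(2n, 1)+Sigmoid output, trained with binary cross-entropy and Adam. The feature dimension and learning rate are placeholders (the batch size of 18 mirrors the verification setting reported later, but is otherwise arbitrary).

```python
# Sketch of the TinySiamese head described in this section. It operates on feature
# vectors of size n produced by a frozen, pre-trained extractor (not shown here).
# The concrete n and the optimizer settings are illustrative placeholders.
import torch
import torch.nn as nn

class TinySiamese(nn.Module):
    def __init__(self, n):
        super().__init__()
        # Shared sub-network: Linear(n, n/2) + ReLU, then Linear(n/2, n) + Sigmoid.
        self.encoder = nn.Sequential(
            nn.Linear(n, n // 2), nn.ReLU(inplace=True),
            nn.Linear(n // 2, n), nn.Sigmoid(),
        )
        # Final head on the 2n-dimensional distance vector.
        self.head = nn.Sequential(nn.Linear(2 * n, 1), nn.Sigmoid())

    def forward(self, f1, f2):
        v1, v2 = self.encoder(f1), self.encoder(f2)
        # Distance layer: concatenation of the squared (Euclidean-style) difference
        # and the Hadamard product, V = [(V1 - V2)^2, V1 * V2].
        dist = torch.cat(((v1 - v2) ** 2, v1 * v2), dim=1)
        return self.head(dist)

if __name__ == "__main__":
    n = 4096                                        # placeholder feature size
    model = TinySiamese(n)
    criterion = nn.BCELoss()                        # binary cross-entropy, as in Eq. below
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    f1, f2 = torch.rand(18, n), torch.rand(18, n)   # a batch of pre-extracted feature pairs
    labels = torch.randint(0, 2, (18, 1)).float()   # 1 = same identity, 0 = different
    loss = criterion(model(f1, f2), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```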
In fact, the L2 Euclidean distance and the Hadamard product were used by <cit.> for authorship verification. The resulting concatenated vector V has dimension 2n (Eq. <ref>). V=[(V1-V2)^2 (V1 ⊙ V2)] §.§.§ Loss Function The binary cross-entropy loss is a commonly used loss function in machine learning for binary classification problems <cit.>. It measures the difference between the predicted probability distribution p_i of the model and the actual probability distribution y_i of the target variable, as shown in the equation (Eq. <ref>), where N is the number of samples. Loss = -1/N∑_i=1^N (y_i log(p_i) + (1-y_i) log(1-p_i)) §.§.§ Activation Function The role of the activation function is to generate an output signal from the received input signal <cit.>. The activation function was applied to the output of every linear transformation of the proposed TinySiamese to introduce non-linearity into the model. The activation functions used were ReLU <cit.> and Sigmoid <cit.>. The formula for the Rectified Linear Unit (ReLU) activation function is: f(x) = max(0,x) The advantage of using the ReLU activation function is that it is simple and computationally efficient. It is also the most frequently used activation function in neural networks, particularly in Convolutional Neural Networks (CNNs). Besides, ReLU can introduce sparsity in the network, which can help reduce overfitting. The TinySiamese network resolves a binary classification problem by classifying two inputs into one of two classes: similar or not similar. In fact, the output result is 0 or 1. Thus, the final linear fully-connected layers in the proposed TinySiamese network adopted the Sigmoid activation function (Eq. <ref>). σ(x)=1/(1+e^-x) §.§ Pre-trained Model for Feature Extraction The convolutional neural network (CNN) is among the major architectures employed in deep learning for feature extraction. It is an especially efficient type of neural network when working with visual data <cit.>. One of the interesting advantages of a CNN is that it is able to extract features from images at various levels. A hierarchical structure of features is developed by a trained convolutional neural network, with large, high-level features in the deep layers and small features in the first layers. Due to data scarcity for some tasks such as fingerprint verification <cit.>, it is useful to exploit the features produced by a pre-trained CNN model at different levels for TinySiamese training. In other cases, when the data does not contain images (such as 3D face, video, voice, text, etc.), it is possible to use any other trained model to extract features (such as wavelet neural networks, GAN, LSTM, Gabor descriptors, Autoencoder, BERT, etc.) <cit.>. In fact, the type of the feature extractor is not important given that the TinySiamese takes a feature vector as input. The TinySiamese is not limited to a CNN model. Furthermore, the use of any other trained model architecture or descriptor is possible. § DATASETS §.§ B2F (Biometric images of Fingerprints and Faces) §.§.§ Set of Fingerprint Images To the best of our knowledge, there is no fingerprint pattern database showing the different finger images for each person. In fact, each finger is presented as a different class in most benchmark datasets.
Therefore, our B2F[https://ieee-dataport.org/documents/b2f-biometric-images-fingerprints-and-faces] dataset (Biometric images of Fingerprints and Faces) has been prepared for fingerprint verification, classification or recognition. The B2F presents a multi-finger dataset. The introduced fingerprint set of B2F dataset is composed of two subsets of data: the first is for the fingerprint images of the left hand and the second is for the fingerprint images of the right hand of each person. Figure <ref> shows the fingerprint images of one hand of only one person. The digital images of this dataset were taken at a resolution of 258* 336 pixels. Each data subset (for right of left hand) contains 13.710 fingerprint images of 457 persons. In fact, there are 30 images for each hand of one person: 6 images for each finger (little, ring, middle, index and thumb fingers). §.§.§ Set of Face Images The same applies to the set of face images from our B2F dataset which is composed of different facial emotions and views of the same person and distances between the camera and the person that do not exist in the published benchmarks. The introduced face set of B2F dataset is composed of face images taken on 13 views at a shooting distance of 2 meters (0°, 15°, 30°, 45°, 60°, 75°, 90°, 105°, 120°, 135°, 150°, 165° and 180°), face images taken with 7 emotions (neutral, happy, angry, disgusted, surprised, scared and sad) and four other images taken at 3 distances (2 at a distance of 2 meters, 1 at 1 meter and 1 at 0.25 meter) for each person. Figure <ref> shows the face images of one person. The digital images of this dataset were taken at a resolution of 4608*3456 pixels. This dataset contains 10637 images of 431 persons. In fact, there are between 20 and 30 images for each person. §.§ FVC Datasets FVC2002[http://bias.csr.unibo.it/fvc2002/databases.asp] and FVC2004[http://bias.csr.unibo.it/fvc2004/download.asp] datasets include noisy images acquired by different live scan devices. Each one has four sets and contains 880 fingerprint images. The fingerprints of each dataset were categorized into four types: arch, right loop, left loop, and whorl. The four sets of FVC2004 were merged into a single set of four classes to form a multi-sensor fingerprint dataset. The same procedure was used for FVC2002 using only three sets (DB1, DB2 and DB4). Figure <ref> shows examples of the four classes from the FVC2004 dataset. §.§ THDD-part1 Dataset The THDD-part1 (THODBRL) <cit.> is a multi-view horse face database. The images were captured when horses were in the barn. The horse face images were taken from 3 views (right view profile, left view profile and frontal view of the horse). In fact, this dataset contains face images for 47 Arabian, Barbaro and hybrid horses. For each horse, there are 10 right profile images, 10 left profile images and 10 frontal face images. §.§ CASIA-B Dataset CASIA-B[http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp] dataset is a multi-view gait database <cit.>. This dataset includes data of 124 persons captured from 11 views (0°, 90°, ..., 180°) with 3 walking conditions: walking with a bag (BG) (2 sequences per subject), wearing a coat (CL) (2 sequences per subject) and normal (NM) (6 sequences per subject). Namely, for each subject, there are 11 × (6 + 2 + 2) = 110 sequences. § EXPERIMENTS §.§ Implementation The experiments were performed on NVIDIA GeForce GPU and a 18 GB memory. 
For the purpose of obtaining a thorough assessment, the proposed TinySiamese was tested for two different tasks on four datasets: fingerprint and face sets of the B2F , FVC , THDD-part1 and CASIA-B datasets. Motivated by the fingerprint verification task (Verif), the B2F fingerprint images of the left hand dataset were divided into two different parts. The first part encompassed 85 persons. This part was intended for creating the pre-trained CNN which would serve as a feature extractor. The remaining 186 persons were for the second part. This part was randomly divided into two parts: 60% was provided for Siamese and TinySiamese training while 40% was devoted to testing. In fact, the fingerprint images of five fingers of each person were randomly split into two parts and used for training and testing. The TinySiamese could prove its effectiveness by implementing the introduced task which was so complex. The same procedure was used for face verification, the B2F face images were merged with THDD-part1 images to get a mixed face dataset for humans and horses. The new dataset was randomly divided into two parts: 60% was deployed for Siamese and TinySiamese training and 40% for testing. Since there was no official partition of the training and testing sets of the CASIA-B dataset, the experiments in this paper used a popular setting in current literature. This setting is also known as large-sample training (LT) when 74 subjects are used for training and the remaining 50 subjects are left for testing. The GEI was employed for gait classification. In fact, each person had four GEI images for the Gallery set and two GEI images for the Probe set. Figure <ref> presents four GEI images from four videos of four different people. Only the normal state in the 180° view was used in this work for TinySiamese verification. Motivated by the fingerprint classification task (Classif), the three datasets (DB1, DB2 and DB4) of FVC2002 were merged into one dataset and then divided into four categories for the purpose of creating the pre-trained CNN which would serve as a feature extractor. The four datasets (DB1, DB2, DB3 and DB4) of the FVC2004 dataset were merged and then categorized into four types which were, in turn, divided into two parts. According to related works, the merged dataset was randomly split: 80% was used for training while 20% was employed for testing. By implementing the classification task, the TinySiamese could prove its efficacy when comparing its results with the results of related works. For gait classification, the same partitions deployed for verification was used again. Only the normal state in the 0°, 90° and 180° views was deplyed in our work for the evaluation of the TinySiamese classification. The simplest CNN, AlexNet <cit.>, was employed as a feature extractor for the fingerprint verification task. AlexNet was trained using the SGD optimizer (Opt), 90 epochs (Ep) and a Batch Size (BS) equal to 5. Indeed, to make a fairer comparison between the TinySiamese and the standard Siamese network, AlexNet was fine-tuned on the second part of the B2F dataset. In order not to overuse the machine and waste more training time, 50 epochs were enough with a batch size equivalent to 5. According to Ametefe et al. <cit.>, the VGG16 <cit.> is effective for fingerprint classification. Thus, VGG16 was deployed as a feature extractor for the fingerprint classification task. This network was trained using the SGD optimizer (Opt), 90 epochs (Ep) and a Batch Size (BS) equal to 50. 
The pre-trained DensNet used by Jarraya et al. <cit.> was employed as a feature extractor for the gait classification task. The pre-trained ResNet-50[https://awesomeopensource.com/project/peteryuX/arcface-tf2] with ArcFace <cit.> was used for face verification and classification tasks. Since ResNet-50 (with ArcFace) was pre-trained only on human faces without horse faces, this model was fine-tuned on the training part of the merged face. 15 epochs with a batch size equal to 50 were enough to avoid overusing the machine and wasting more training time. The Siamese and TinySiamese Networks were trained from scratch using ADAM optimizer, 120 epochs and a batch size equal to 18 for the fingerprint verification task. However, the same network used 240 epochs from scratch and a batch size equal to 32 for the classification task as there were more data in the FVC2002 dataset. Since each class contained hundreds of images, it was useful to increase the number of dissimilar and similar images. The same number of epochs was used for the other tasks: Gait and face verification or classification on the other datasets. Table <ref> and table <ref> represent the experimental configuration of the different networks. §.§ Sample Pairing One way to use Siamese networks which consist of two identical sub-networks is to take two input samples, pass them through the sub-networks, and then compare the output representations of the samples to determine their similarity. In fact, for sample pairing, there are two different strategies. The first strategy is unbalanced sample pairing and the number of dissimilar pairs is greater than the similar pairs. The second one is the balanced pairing of samples when the number of dissimilar pairs is equal to similar pairs. The balanced pairing was used in <cit.>. In this paper, the strategy of balanced sample pairing was followed for a stable training and testing process. For verification and classification tasks, each iteration took N similar pairs of the same user and N dissimilar pairs composed of N random user images. The same user was compared with other users for dissimilar pairs. §.§ Complexity: Matching and Training Time The complexity of neural networks is interesting in the deep learning axis <cit.>. Indeed, it is useful to reduce the number of layers in the network while maintaining performance and efficiency. Added to that, using a light network help users train and test deep systems, even with low-power machines. The proposed solution does not require a large Siamese network for verification. The standard Siamese Network can be replaced by a pre-trained CNN network and the proposed TinySiamese. The pre-trained network can be used with no or only little fine-tuning. Fine-tuning a simple CNN requires less time and memory than the simple Siamese which consists of two CNN structures in parallel. Besides, the TinySiamese takes a little time for matching and training. Table <ref> presents the matching time on the B2F, the FVC2000, FVC2002 and FVC2004 using Siamese network and TinySiamese. The resulting time was the average time of ten matches of ten random images. The required time for matching by the pre-trained CNN with TinySiamese is smaller than the others. In fact, there was a gain of 0.086 ms for matching. Table <ref> presents the average training time of ten epochs on the fingerprint set of B2F, CASIA-B, the fusion of the face set of B2F with THDD-part1, FVC2000, FVC2002 and FVC2004 datasets using the Siamese network and the TinySiamese. 
The required time for training by TinySiamese was less than the standard Siamese. Indeed, there was a gain of 612.28 ms for training. §.§ Fingerprint Verification Results The fingerprint verification task of five fingers at once was quite complicated. Its implementation proved the effectiveness of the proposed TinySiamese. Table <ref> shows the performance of the TinySiamese with an accuracy equal to 90.13%. This percentage is considered good due to the difficulty of the task. In fact, by means of a pre-trained CNN, the TinySiamese was able to outperform the standard Siamese using an entire CNN for training with 0.87%. §.§ Face Verification Results The face verification task using a dataset of human images on 13 views and 7 human images with 7 emotions along with horse images on 3 views was quite complicated. The implementation of this task proves again the effectiveness of the proposed TinySiamese. Table <ref> shows the performance of the TinySiamese with an accuracy equal to 85.87%. This percentage is considered acceptable due to the complexity of the task. Indeed, using a pre-trained CNN, the TinySiamese was able to outperform the standard Siamese for face verification by means of an entire CNN for training with 3.85%. §.§ Gait Verification Results Based on Table <ref>, the proposed TinySiamese has proven its performance in the experimental part on the normal state of the CASIA-B database with the 180° view compared to the Siamese network. In fact, the results were very encouraging with an accuracy equal to 98.39%. By means of a pre-trained CNN, the TinySiamese was able to outperform again the standard Siamese for gait verification using an entire CNN for training with an accuracy difference of 20.24%. §.§ Fingerprint Classification Results The proposed TinySiamese network was tested again for a simple task on the FVC2004 benchmark. The fingerprint classification task was introduced into some research works <cit.> using the same dataset. According to related works, the TinySiamese was trained on 80% of the FVC2004 dataset and tested on the remaining part. The VGG16 was pre-trained using three sets of the FVC2002 (DB1, DB2 and DB4). Table <ref> illustrates comparative results with different research studies. It is noticed that our network produced a high performance and was a competitor to the other networks. §.§ Gait Classification Results The gait classification task was introduced using the Casia-B dataset in some research works <cit.>. According to the related works, the TinySiamese was trained on 60% of the people in the CASIA-B dataset and tested on the remaining part. Table <ref> shows the effectiveness of the proposed network for gait classification on three views using GEI images. Table <ref> illustrates similar results compared to various research studies. It is noted that our network was a competitor to the other networks by achieving a high performance. § DISCUSSION The proposed TinySiamese architecture obtained not only good results but it also presented a short execution time for training and short matching time. These properties allowed for a quick execution, ensuring its use in online verification with the ability to update the network upon new additions without complexity. TinySiamese introduced two new additions compared to standard Siamese networks: the use of a pre-trained model as a feature extractor and a concatenated distance vector in the distance layer. Table <ref> and <ref> confirm the effectiveness of this network. 
Saving time and memory space are important tasks for quick verification and for people who do not have powerful machines. In fact, the time saved was equal to 612.28 ms for training and 0.086 ms for matching. Despite all these economies, TinySiamese presented encouraging results for verification with an accuracy equivalent to 90.13% on the fingerprint set of the B2F database, 85.87% on the face set of the B2F database and the THDD-part1 and 98.39% on the CASIA-B dataset. Additionally, this network achieved competitive results with related works as shown in table <ref> and <ref>. Thus, the proposed TinySiamese network did not require expensive learning. It can be executed under low-power machines and with fast prediction in terms of execution time. Tables <ref> and <ref> demonstrate the importance of adding the Hadamard product into the distance layer of the TinySiamese and the Siamese networks. The information added by concatenation with the Euclidean distance of the two extracted feature vectors gave the network more opportunities to avoid mistakes. With all these advantages, it is important to point out two important things. First, the standard Siamese network is able to achieve better results using a bigger number of epochs. In fact, increasing the number of epochs to 1000 gives more chances to the network to learn better. However, this will take more training time and can further strain the computer. Second, the use of convolutional 1D layer instead of the fully-connected layer can reduce the number of parameter in the network more and more. This will be the subject of one of our coming research papers. § CONCLUSION This paper presents a different strategy for performing one-shot verification using the TinySiamese neural network with a short matching and training times. The performance of the proposed network was better than the existing state-of-the-art networks developed for biometric verification and classification. The TinySiamese network outperformed the networks available for verification by the lowest complexity and came close to the best results obtained by the previous authors. It presented a saved matching time equal to 76.78%, a saved training time equivalent to 93.14% for 10 epochs and a verification accuracy equal to 90.13% on the fingerprint set of the B2F database, 85.87% on the face set of the B2F database and the THDD-part1 dataset and 98.39% on the CASIA-B dataset and a classification accuracy equivalent to 99% on the FVC2004, 93.00% for 0° view, 100% for 90° view and 98.98% for 180° view on the CASIA-B dataset. § ACKNOWLEDGMENT The research leading to these results has received funding from the Tunisian Ministry of Higher Education and Scientific Research under the grant agreement number LR11ES48. elsarticle-num Islem Jarraya was born in Sfax, Tunisia, in 1986. She received the Four-year University Degree in computer science and multimedia in 2009 and the master degree in Computer Science and Multimedia in 2014 at the Multimedia, InfoRmation systems and Advanced Computing Laboratory (MIRACL-Lab) from the higher institute of computer science and multimedia of Sfax, Tunisia. She has been pursuing the PhD. degree on Computing System Engineering with the Research Group on Intelligent Machines (ReGIM-Lab) at ENIS since 2015. 
She focuses her research on applying intelligent methods (Deep neural networks, feature extraction, and recognition algorithms) to Deep Machine Learning, Computer Vision, Face Image Processing, intelligent pattern recognition, and analysis of large scale complex systems. She has been an active IEEE member since 2015. §.§ Tarek M. Hamdani (IEEE Member’01, Senior Member'12) was born in Tunis, Tunisia, in 1979. He received his M.Sc. degree in 2003, the Ph.D. degree in 2011, in Computer Science Engineering from the National Engineering School of Sfax, Tunisia, and the HDR degree in Computer Science Engineering from the National Engineering School of Sfax, Tunisia, in 2019. He is currently an Associate Professor in Computer Science at the University of Monastir, Monastir. He focuses his research on applying intelligent methods (neural networks, fuzzy logic, and evolutionary algorithms) to Machine Learning, Computer Vision, Natural Language Processing, intelligent pattern recognition, and analysis of large scale complex systems. He is a Reviewer of the Pattern Recognition Letters, Neurocomputing, Neural processing Letter, IEEE TNNLS, and the Soft Computing journal. He is an IEEE Senior Member of CIS and SMC societies. §.§ Habib Chabchoub is a full professor of Management Science and director of the MBA program at the College of Business, Abu Dhabi campus, in Al Ain University (UAE). He has initiated and led different Master programs in Management and a PhD program in Quantitative Methods at University of Sfax (Tunisia). He has supervised more than 20 PhD theses , co-worn several international academic and research projects (TEMPUS "Aqi-Umed", CMCU, bilateral projects, and others) and been involved in several international conferences (MOPGP, META, ICALT, LOGISTIQUA, etc.). He co-authored more than 100 refereed publications, several of which are in high impact forums. He received a BSc in Mathematics and a MSc in Management Science from University of Tunis (Tunisia) and a PhD in Operations and Decision Systems from Laval University (Canada). His research interest encompasses supply chain and logistics management, multiple objective programming, multi criteria decision making and meta-heuristics. §.§ Adel M. Alimi (IEEE Student Member’91, Member’96, Senior Member’00). He graduated in Electrical Engineering in 1990. He obtained a PhD (from Ecole Polytechnique of Montreal, Canada) and then an HDR both in Electrical and Computer Engineering in 1995 and 2000 respectively. He is full Professor in Electrical Engineering at the University of Sfax, ENIS (National Engineering School of Sfax), since 2006. He is founder and director of the research REGIM Lab in intelligent Machines. He was Director of the Tunisia Erasmus+ Office (2018-2020). He also was Director of the National Engineering School of Sfax, University of Sfax, Tunisia (2005-2011). His research interest includes applications of intelligent methods (neural networks, fuzzy logic, evolutionary algorithms) to pattern recognition, robotic systems, vision systems, and industrial processes. He focuses his research on intelligent pattern recognition, learning, analysis and intelligent control of large scale complex systems. He is associate editor and member of the editorial board of many international scientific journals (e.g. “IEEE Trans. 
Fuzzy Systems”, “NeuroComputing”, “Neural Processing Letters”, “International Journal of Image and Graphics”, “Neural Computing and Applications”, “International Journal of Robotics and Automation”, “International Journal of Systems Science”, etc.). He was guest editor of several special issues of international journals (e.g. Fuzzy Sets & Systems, Soft Computing, Journal of Decision Systems, Integrated Computer Aided Engineering, Systems Analysis Modelling and Simulations). He is the Founder and Chair of many IEEE Chapter in Tunisia section, he is IEEE Sfax Subsection Chair (2011), IEEE ENIS Student Branch Counselor (2011), IEEE Systems, Man, and Cybernetics Society Tunisia Chapter Chair (2011), IEEE Computer Society Tunisia Chapter Chair (2011), he is also Expert evaluator for the European Agency for Research. He was the general chairman of the International Conference on Machine Intelligence ACIDCA-ICMI’2005 & 2000.
http://arxiv.org/abs/2307.00299v1
20230701105637
Box complexes: at the crossroad of graph theory and topology
[ "Hamid Reza Daneshpajouh", "Frédéric Meunier" ]
math.CO
[ "math.CO", "math.AT", "05C15 (Primary) 55P10, 68Q17 (Secondary)" ]
Various simplicial complexes can be associated with a graph. Box complexes form an important family of such simplicial complexes and are especially useful for providing lower bounds on the chromatic number of the graph via some of their topological properties. They thus provide a fascinating topic mixing topology and discrete mathematics. This paper is intended to provide an up-to-date survey on box complexes. It is based on classical results and recent findings from the literature, but also establishes new results improving our current understanding of the topic, and identifies several challenging open questions. MSC 2020: 05C15, 55P10, 68Q17. § INTRODUCTION Since the 1978 breakthrough paper by Lovász solving the Kneser conjecture <cit.>, various simplicial complexes associated with graphs have been studied, in relation with other combinatorial problems or in their own right. The search for good topological bounds on the chromatic number of graphs has been a great stimulation in this area, and has been at the origin of an especially prominent family of simplicial complexes, namely that of box complexes. A box complex associated with a graph is a simplicial complex whose simplices are its (not necessarily induced) complete bipartite subgraphs. This is just a rough definition, especially because we do not explain the status of an empty part, and this actually gives freedom for considering various types of box complexes. The group _2 acts freely on box complexes by exchanging the two parts, which gives them interesting features and allows the use of elementary results from equivariant topology. The popularity of box complexes can be explained by the simplicity of their definition, but also for other reasons: they provide among the best topological lower bounds on the chromatic number, their relation with other simplicial complexes is well understood, and they form an intriguing connection between discrete mathematics and topology. The objective of this paper is to form an up-to-date survey of the properties of box complexes. The diagram of Figure <ref> is the main object around which our work is organized. It shows how various bounds on the chromatic number, especially topological lower bounds in relation with box complexes, compare: each arrow represents a ≤ inequality, from the smaller parameter to the larger. The meaning of the various expressions in the diagram is given later in the paper. However, we already emphasize that we focus only on two kinds of box complexes: (G) and _0(G). Other box complexes have been considered in the literature but all of them are _2-homotopy equivalent to one or the other. All arrows (and all non-arrows) will be thoroughly discussed. To achieve this, we not only collect results from the literature but also prove new results, such as possible gaps between consecutive bounds in the diagram. For instance, Simonyi, Tardos, and Vrećica <cit.> have shown that the gap in inequality (<ref>) can be equal to 1, but they left open the question whether the gap can be larger. We contribute to that question by constructing infinitely many graphs for which the gap is 2; see Section <ref>, where inequality (<ref>) is discussed, and Section <ref> for the topological result used for the construction.
Apart from the improved understanding of the arrows and non-arrows of Figure <ref>, the paper provides other contributions, such as: the decidability of (_0(G)) (Theorem <ref>); a complexity result showing the equivalence between the Borsuk–Ulam theorem and the inequality ((G))+2 ≤χ(G) (Theorem <ref>); a theorem relating the box complex of the join of two graphs to the join of their box complexes (Theorem <ref>). The paper is organized as follows. Section <ref> deals with the definition of the box complex (G) and presents its main properties and its relevance. It is a short section aiming at being a gentle and self-contained introduction to the topic. Section <ref> introduces the other box complex considered in the paper, namely _0(G), as well as two other complexes that can be built from graphs, namely the Hom complex and the neighborhood complex. A few topological properties of the box complex (G) when G is a Kneser graph, a Schrijver graph, or a chordal graph are given. The first two kinds of graphs are classical ones in the area of topological combinatorics. Section <ref> provides the definition of various parameters of graphs and complexes. All parameters that appear in Figure <ref> are in particular defined in that section. The section is divided into three subsections: the first on combinatorial parameters, the second on topological parameters, the third on the comparisons between the parameters attached to simplicial complexes. Section <ref> discusses the computability of the various parameters and their relation with the Borsuk–Ulam theorem. As explained in that section, these two topics are not independent. Section <ref> is a short section on the (categorical) product of graphs and topological spaces. Even if this section is short, it is an important topic dealing mostly with Hedetniemi's conjecture, a central conjecture in graph theory (now disproved). Section <ref> is about joins of graphs and topological spaces. The join operation can also be seen as a kind of product, but one that is more flexible and has more applications than the product studied in the previous section. Section <ref> provides a thorough explanation of the diagram of Figure <ref>. For each arrow, the current knowledge on how large the gap can be will be discussed. The non-arrows will also be discussed (is an arrow absent because there is no way to consistently order the parameters, or by lack of knowledge?). Section <ref> gathers complementary remarks and collects the main open questions met in the survey. Throughout the paper, all graphs are simple, i.e., they have no parallel edges and no loops. § ACKNOWLEDGMENTS The authors are grateful to Moishe Kohan <cit.> for pointing out the existence of the Brieskorn manifolds. They thank Mathieu Florence, Elba Garcia, and Bram Petri for clarifying email messages that helped the writing of the proof of Lemma <ref>. Roman Karasev is also thanked for pointing out the reference to a theorem by Conner and Floyd in the comment of inequality (<ref>). Part of this work was done when the first author was at the School of Mathematics of IPM as a guest researcher in Spring 2021 and received support for his research from this institution. § BOX COMPLEXES: DEFINITION, RELEVANCE, AND MAIN PROPERTIES §.§ Definition Let G be a graph. For a subset A⊆ V(G), let _G(A) = {v∈ V(G) : av∈ E(G) for all a∈ A}. It is the set of common neighbors of A.
When there is no ambiguity, we will write instead of _G. Note that (∅)=V(G) and that since G has no loops, we have (A)⊆ V∖ A. The box complex of G, denoted by (G), is the simplicial complex defined as follows: (G) = { A' ⊎ A” A',A”⊆ V, A' ∩ A” = ∅, G[A',A”] is complete, (A'),(A”) ≠∅} . Its vertex set is the “signed” version of V(G). Each vertex v of G becomes two vertices: +v and -v. The notation A' ⊎ A” means {+v v∈ A'}∪{-v v∈ A”}. The notation G[A',A”] stands for the bipartite graph with parts A' and A” and whose edges are all edges of G with one endpoint in A' and the other in A”. Roughly speaking, (G) is the simplicial complex formed by all bicliques, i.e., complete bipartite subgraphs, of G. We also count a subset of vertices with at least a common neighbor as a complete bipartite subgraph with no edges (case when A' or A” is empty). The box complex (K_n) of the complete graph with n vertices is the boundary of the n-dimensional cross-polytope to which we have removed two opposite facets. This implies in particular that (K_n) is homotopy equivalent to the (n-2)-dimensional sphere S^n-2. The box complex (C_4) is formed by the disjoint union of two copies of the 3-dimensional simplex. This implies in particular that (C_4) is homotopy equivalent to S^0. This is also a special case of complete bipartite graphs dealt with in Example <ref>. For n≥ 3, the box complex (C_2n) of the 2n-cycle is homeomorphic to the disjoint union of two copies of S^1 × [0,1] (the boundary of each copy being formed by two n-cycles). This implies in particular that (C_2n) is homotopy equivalent to two disjoint copies of S^1. See Figure <ref> for an illustration of the 6-cycle and its box complex (C_6). The box complex (C_2n+1) of the (2n+1)-cycle is homeomorphic to S^1 × [0,1] (whose boundary is formed by two (2n+1)-cycles). This implies in particular that (C_2n+1) is homotopy equivalent to S^1. The box complex (K_m,n) of the complete bipartite graph with parts of size respectively m and n is formed by two disjoint copies of the (m+n-1)-dimensional simplex. This implies in particular that (K_m,n) is homotopy equivalent to S^0. Box complexes have been introduced by Matoušek and Ziegler <cit.> (motivated by former complexes introduced by Alon, Frankl, and Lovász <cit.> and by Kříž <cit.>). In Section <ref>, we will see the other box complex of G considered in the paper, denoted by _0(G). §.§ Chromatic number and connectivity The chromatic number is a central notion in graph theory, and likewise for connectivity in topology. It turns out that these two fundamental notions, from two seemingly distant areas of mathematics, are present in a fundamental inequality in the theory of box complexes. Before stating this inequality, we remind the definition of these two fundamental notions. Coloring graphs is of great importance in discrete mathematics, and beyond (computer science, operations research, etc.). A map c V(G)→ [t] is usually called a coloring, the integers in [t] being considered as colors. When c(u)≠ c(v) for every pair of adjacent vertices u and v (the vertices u and v are adjacent if uv∈ E(G)), the coloring is proper. The minimal t for which there exists a proper coloring is the chromatic number of G and is denoted by χ(G). It is a graph parameter that is not easy to compute (it is -hard) and, considering its importance, any bound is welcome. The connectivity of a topological space X is a central notion in topology. 
It is denoted by (X) and is the maximal integer d such that any continuous map f from the k-dimensional sphere S^k to X with k∈{-1,0,1,…,d} can be extended to a continuous map f̅ from the (k+1)-dimensional ball B^k+1 to X. When (X) ≥ d, we say that X is d-connected. In this context, S^-1 is interpreted as ∅ and B^0 as a single point. Therefore, (-1)-connected means non-empty. One of the most important facts about the box complex (G) is that its connectivity is related to the chromatic number of G, making a surprising connection between topology and graph theory: ((G))+3 ≤χ(G) . The connectivity of a simplicial complex is not easy to compute, but inequality (<ref>) offers a powerful approach to the chromatic number in many situations. There are cases where the connectivity can yet be precisely computed, others where lower bounds are known, etc. An inequality similar to inequality (<ref>) has in particular been used by Lovász to settle Kneser's conjecture. Our discussion about the diagram of Figure <ref> in Section <ref> will actually provide a proof of inequality (<ref>), but let us sketch the general idea of the proof. The box complex is a free simplicial _2-complex. A simplicial _2-complex is a simplicial complex on which _2 acts. It is free if each orbit of its polyhedron (underlying space) is of size two. In the case of the box complex, the action is simply the exchange A' ⊎ A”→ A”⊎ A'. This makes it amenable to “equivariant” topology and especially to tools like the Borsuk–Ulam theorem. This latter theorem states that there is no continuous map S^d → S^d-1 that commutes with the central symmetry. When the graph is lifted to its box complex, the coloring is lifted to a continuous map, and it is the kind of obstruction provided by the Borsuk–Ulam theorem that prevents the graph of being colored with too few colors. §.§ “Universality” of the box complex Seeing box complexes as free simplicial _2-complexes is very useful. Things go also the other way around, as shown by the following theorem by Csorba (see also the paper by Živaljević <cit.>). (_2-homotopy equivalence, defined hereafter, implies in particular homotopy equivalence.) Every free simplicial _2-complex is _2-homotopy equivalent to the box complex (G) of some graph G. A _2-map between two _2-spaces (topological spaces endowed with a _2-action) is a map that commutes with the action. Two continuous _2-maps f and g between two _2-spaces X and Y are _2-homotopic if there exists a continuous map h X×[0,1] → Y with h(·,0)=f(·) and h(·,1)=g(·), and such that h(·,t) is a _2-map for all t∈[0,1]. Note that this definition is the traditional definition of homotopic maps, except that continuous maps are replaced by continuous _2-maps. The definition of the _2-homotopy equivalence is then the same as the definition of homotopy equivalence with _2-homotopy in place of homotopy. Csorba's proof of Theorem <ref> provides an explicit and easy construction of the graph G: given a simplicial _2-complex , the vertices of G are the non-empty simplices of and στ forms an edge if σ is a face of the image of τ by the action (or vice versa). This construction suggests a few questions, which, to the authors' knowledge, have not been addressed yet. For instance, the number of vertices of G in this construction is exponential in the dimension of . Would it be possible to come up with a construction requiring much less vertices in general? Or with a construction providing a graph with small chromatic number? 
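For concreteness, here is a small Python sketch of Csorba's construction described above (the helper names are ours, and the toy input, the boundary of a square with the antipodal action, is only meant as an example of a free simplicial _2-complex).

from itertools import combinations

def csorba_graph(simplices, involution):
    """Graph of Csorba's construction: one vertex per non-empty simplex;
    sigma and tau are adjacent when sigma is contained in the image of tau
    under the Z2-action, or vice versa."""
    simplices = [frozenset(s) for s in simplices]
    image = {s: frozenset(involution[v] for v in s) for s in simplices}
    edges = {frozenset((s, t)) for s, t in combinations(simplices, 2)
             if s <= image[t] or t <= image[s]}
    return simplices, edges

# The boundary of a square, a Z2-triangulation of S^1, with the antipodal
# action 1 <-> 3, 2 <-> 4 (the four vertices, then the four edges).
square = [{1}, {2}, {3}, {4}, {1, 2}, {2, 3}, {3, 4}, {4, 1}]
antipode = {1: 3, 2: 4, 3: 1, 4: 2}
V, E = csorba_graph(square, antipode)
print(len(V), "vertices,", len(E), "edges")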
Concrete uses of this construction seem to be scarce, and the current interest of Theorem <ref> is the existence of this graph, independently of the way it is constructed. The main message of this theorem is that there is no hope to achieve a meaningful characterization of box complexes of graphs, since these latter are actually almost as general as all free simplicial _2-complexes. (Yet, as noted by Csorba <cit.>, a version of the theorem with _2-homeomorphism does not hold.) Theorem <ref> will be used in Section <ref> to prove that the gap of some inequalities between topological lower bounds can be arbitrarily large. In the literature, there is also another “negative” message, arguably in the same spirit: there is no hope for characterizing the chromatic number via topological property of the box complex (in slightly different terms, this was a question by Lovász in his 1978 paper solving Kneser's conjecture). Indeed, Matsushita <cit.> proved that there is no homotopy invariant of the box complex that gives an upper bound for the chromatic number of a graph. Generalizations of Csorba's construction have been proposed by Dochtermann <cit.> and by Dochtermann and Schultz <cit.>. § FURTHER RESULTS ON BOX COMPLEXES AND OTHER RELEVANT COMPLEXES §.§ Another box complex There is another box complex, denoted by _0(G), which is probably as popular as (G). Its definition goes as follows: _0(G) = { A' ⊎ A” A',A”⊆ V(G), A' ∩ A” = ∅, G[A',A”] is complete} . It contains (G). The extra simplices are the bipartite subgraphs with an empty part and whose vertices in the other part do not have a common neighbor in G. The box complex _0(K_n) of the complete graph with n vertices is the boundary of the n-dimensional cross-polytope. This implies in particular that _0(K_n) is _2-homeomorphic to S^n-1. The box complex _0(C_4) of the 4-cycle is formed by two pairs of disjoint tetrahedra, somehow arranged into a “circle,” where two adjacent tetrahedra share 2 vertices. This implies in particular that _0(C_4) is _2-homotopy equivalent to S^1. See Figure <ref>. This is also a special case of complete bipartite graphs dealt with in Example <ref>. For n ≥ 3, the box complex _0(C_2n) of the 2n-cycle can be described as follows: the box complex (C_2n) is formed by two copies of S^1 × [0,1] (see Example <ref>); in _0(C_2n), there is an extra (2n-1)-dimensional simplex attached to one boundary cycle of each copy, and there is a second extra (2n-1)-dimensional simplex attached to the two other boundary cycles of the copies. This implies in particular that _0(C_2n) is homotopy equivalent to the wedge of S^1 and two copies of S^2 (i.e., the topological space obtained by joining them at a single point). The box complex _0(C_2n+1) of the (2n+1)-cycle can be described as the box complex (C_2n+1) (see Example <ref>) with two 2n-dimensional simplices attached to the two (2n+1)-cycles. This implies in particular that _0(C_2n+1) is _2-homotopy equivalent to S^2. The box complex _0(K_m,n) of the complete bipartite graph with parts of size respectively m and n is formed by two pairs of disjoint (m+n-1)-dimensional simplices, somehow arranged into a “circle,” where two adjacent (m+n-1)-dimensional simplices share m or n vertices. This implies in particular that _0(K_m,n) is _2-homotopy equivalent to S^1. Several other definitions of box complexes have been proposed in the literature. 
In the paper by Matoušek and Ziegler <cit.>, many other definitions are considered, but by results of Csorba and Zivaljević, they are all _2-homotopy equivalent to (G) or to _0(G) (see, e.g., <cit.>). This explains why (G) and _0(G) have attracted most of the attention devoted to box complexes in the literature, and motivated the focus on them. The next theorem establishes the link between the two box complexes considered in this paper. Here, the notation (·) stands for the standard “suspension” operation from topology, which is defined as follows. Let X be a topological space. The suspension of X, denoted by (X), is the quotient space (X × [-1,1])/(X×{-1}, X ×{1}), which corresponds to shrink in X×[-1,1] all points (x,t) with t=-1 to a single point, and same thing for all points with t=1. Suspension, and the more general “join operation,” will be further discussed in Section <ref>. For every graph G, the complexes ((G)) and _0(G) are _2-homotopy equivalent. §.§ Other complexes Apart from box complexes, the Hom complex, denoted by (K_2,G), and the neighborhood complex, denoted by (G), have also played a prominent role in topological combinatorics. The Hom complex has been introduced (in a slightly different setting, and for the more general case of hypergraphs) by Alon, Frankl, and Lovász <cit.>. The neighborhood complex has been introduced by Lovász in his 1978 foundational paper. The Hom complex (K_2,G) is the partial ordered set (poset) defined as (K_2,G) = { A' ⊎ A” A',A”⊆ V, A' ∩ A” = ∅, G[A',A”] is complete, A', A”≠∅} equipped with the following partial order: (A',A”) ≼ (B',B”) if A' ⊆ B' and A”⊆ B”. Note that in contrast with the previous definitions, we do not deal here with a simplicial complex but with a poset. In the literature, several options have been taken regarding Hom complexes (CW-complexes, simplicial complexes). We follow the option chosen by Simonyi, Tardif, and Zsbán <cit.>. (The notation comes from the original definition that used “multihomomorphism” from K_2 to G.) The neighborhood complex (G) is defined as follows: (G) = { A ⊆ V (A) ≠∅} . Note that, contrary to the other complexes introduced so far, _2 does not act on it. The following theorems show that these four complexes are strongly related. The next theorem is an immediate consequence of results due to Csorba et al. <cit.>. For every graph G, the complexes (G) and (G) are homotopy equivalent. The neighborhood complex (K_n) of the complete graph with n vertices is the boundary of the (n-1)-dimensional simplex. It is thus homotopy equivalent to S^n-2, to which (K_n) is also homotopy equivalent (Example <ref>). Combining this theorem with the inequality (<ref>) leads to the inequality ((G))+3 ≤χ(G) . Historically speaking, it is the first topological lower bound on the chromatic number and it is due to Lovász. There is a topological relation between (G) and (K_2,G). A poset P is brought in the realm of topology via the order complex, which is the simplicial complex whose ground set is formed by the elements of P and whose simplices are the collections of pairwise comparable elements. It is denoted by (P). For every graph G, the complexes (G) and ( (K_2,G) ) are _2-homotopy equivalent. §.§ Box complex of some specific graphs Kneser's conjecture, whose resolution by Lovász is generally considered as the birth of topological combinatorics, was about the chromatic number of Kneser graphs. 
These graphs, which are important objects from discrete mathematics, continue to play an important role in topological combinatorics. Given two integer number n and k, with n ≥ 2k-1, the Kneser graph (n,k) has all k-subsets of [n] as vertex set and has an edge between each pair of disjoint vertices. To achieve the computation of the chromatic number of Kneser graphs, Lovász proved first inequality (<ref>), and then that (((n,k))) is equal to n-2k-1. Since n-2k+2 is an easy upper bound, the equality χ((n,k)) = n-2k+2 follows immediately. This result about the neighborhood complex of Kneser graphs has been improved in various ways. The wedge of topological spaces is a standard operation in topology and consists in joining the topological spaces at a single point. The box complex ((n,k)) is homotopy equivalent to the wedge of an odd number of (n-2k)-dimensional spheres. Schrijver graphs were introduced by Schrijver <cit.> soon after the resolution of Kneser's conjecture. Their role in topological combinatorics is also important. Given two integer number n and k, with n ≥ 2k-1, the Schrijver graph (n,k) is the subgraph of (n,k) induced by the 2-stable k-subsets of [n], which are the k-subsets not containing consecutive integers (with 1 and n being considered as consecutive). Schrijver proved that the chromatic number of (n,k) is n-2k+2 as well (but that they are critical, in the sense of the removal of any vertex reducing the chromatic number). Combined with inequality (<ref>), the next theorem provides an alternative proof of χ((n,k))≥ n-2k+2. The box complex ((n,k)) is _2-homotopy equivalent to an (n-2k)-dimensional sphere. Results of this type for other “stable” subgraphs of Kneser graphs have been obtained by Osztényi <cit.> and Daneshpajouh and Osztényi <cit.>. Here is a result about another classical family of graphs. The box complex (G) of a connected chordal graph G is homotopy equivalent to the wedge of an odd number of spheres. § PARAMETERS OF GRAPHS AND COMPLEXES This section is devoted to the definition of the parameters present in Figure <ref>. A subsection concerns combinatorial parameters and another concerns topological parameters. Most of these parameters are then compared in a subsequent subsection. Two very important parameters are not defined hereafter since we have already met them in Section <ref>: the connectivity of a simplicial complex or a topological space, denoted by (·), and the chromatic number of a graph, denoted by χ(·). §.§ Combinatorial parameters The most classical lower bound on the chromatic number is the clique number, defined as the maximal number of vertices of a clique, i.e., a complete subgraph. It is denoted by ω(G). Another relevant parameter is the largest n such that G contains a biclique with parts of sizes k,ℓ for all k,ℓ≥ 1 with k+ℓ = n. (We remind that bicliques, already met in Section <ref>, are complete bipartite subgraphs.) It is denoted by b(G). Consider a graph G with a proper coloring c V(G) →_+. A biclique is zigzag if there exists a numbering v_1, v_2, …, v_t of its vertices such that the vertices with an odd index form one part of the biclique, those with an even index form the other part, and c(v_1) < c(v_2) < ⋯ < c(v_t) . The notion of zigzag bicliques originates from the work of Simonyi and Tardos <cit.> (but the terminology is ours). The zigzag number of G, denoted by (G), is the minimum over the proper colorings of G of the size of a largest zigzag biclique. We clearly have (G) ≤χ(G). 
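As an illustration, the following brute-force Python sketch (our own helper names, only sensible for very small graphs) computes, for a given proper coloring, the size of a largest zigzag biclique, and then minimizes over all proper colorings with colors taken in [|V|]; on the 5-cycle the printed value should be 3, matching χ(C_5)=3.

from itertools import combinations, product

def largest_zigzag_biclique(graph, coloring):
    """Largest zigzag biclique of `graph` (dict: vertex -> set of neighbors)
    with respect to the proper coloring `coloring` (dict: vertex -> color).
    The vertices of a zigzag biclique can be ordered with strictly increasing
    colors so that odd- and even-indexed vertices form the two parts."""
    best = 0
    vertices = list(graph)
    for r in range(1, len(vertices) + 1):
        for subset in combinations(vertices, r):
            if len({coloring[v] for v in subset}) < r:   # colors must be pairwise distinct
                continue
            ordered = sorted(subset, key=coloring.get)
            part_a, part_b = ordered[0::2], ordered[1::2]
            if all(b in graph[a] for a in part_a for b in part_b):
                best = max(best, r)
    return best

def zigzag_number(graph):
    """Minimum over proper colorings (colors in 1..|V|) of the largest zigzag biclique."""
    vertices = list(graph)
    best = None
    for colors in product(range(1, len(vertices) + 1), repeat=len(vertices)):
        c = dict(zip(vertices, colors))
        if any(c[u] == c[v] for u in graph for v in graph[u]):   # coloring is not proper
            continue
        value = largest_zigzag_biclique(graph, c)
        best = value if best is None else min(best, value)
    return best

C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(zigzag_number(C5))   # expected: 3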
A hypergraph is 2-colorable if its vertices can be colored with two colors in such a way that no edge is monochromatic. The 2-colorability defect of a hypergraph , denoted by (), is the minimal number of vertices to remove from such that the hypergraph induced by the remaining vertices is 2-colorable: ()=min{|U|(V()∖ U,{e∈ E() e∩ U=∅})} . This parameter has been introduced by Dol'nikov <cit.> who proved that it also provides a lower bound on the chromatic number with the help of the notion of Kneser representation. A Kneser representation of a graph G is a hypergraph with a one-to-one mapping from V(G) to E() such that adjacent vertices are mapped to disjoint edges. Given a Kneser representation of G, then Dol'nikov's inequality reads () ≤χ(G) . For the Kneser graph (n,k), the complete k-uniform hypergraph on [n], which we denote by 𝒦_n^k, is a natural Kneser representation. Removing less than n-2k+2 vertices from that hypergraph does not make it 2-colorable: any 2-coloring will color with the same color at least k elements. Removing exactly n-2k+2 vertices does make it 2-colorable: color k-1 elements with one color; color the other k-1 elements with the other color. Thus (𝒦_n^k)=n-2k+2, and we get χ((n,k))≥ n-2k+2. We already mentioned that n-2k+2 is an easy upper bound, and so inequality (<ref>) is tight for Kneser graphs. It is actually not difficult to come up with a Kneser representation of any graph G. The standard construction consists in representing each vertex of G by the set of its incident edges in the complement graph; in some cases, unfortunately, vertices can be represented by the same set, an issue that can be easily fixed by introducing an extra element for each vertex. The question of finding a Kneser representation of minimal size has been investigated by Hamburger, Por, and Walsh <cit.>. The last combinatorial parameters we define are related to posets. A chain in a poset is a collection of pairwise comparable elements. The first parameter is the dimension of a poset P, denoted by (P), and defined as the maximal length of a chain minus one. It is a classical parameter. The second one is less common and has been introduced by Simonyi, Tardif, and Zsbán <cit.> in relations with topological bounds on the chromatic number. A _2-poset is a poset endowed with an order-preserving involution. The group _2 is thus acting on such a poset. For a _2-poset P, we define its cross-index, denoted by (P), as the minimum t such that there exists a _2-map from P to Q_t, where Q_t is a _2-poset defined as follows. It has {± 1,± 2,…, ± (t+1)} as ground set, x≼ y for x,y∈ Q_t if |x|≤ |y|, and whose order-preserving involution is the map x↦ -x. We have (P) ≤(P). (Indeed, assign a sign + to one element and a sign - to the other element of every orbit; define a _2-map ϕ from P to Q_(P) by setting ϕ(x) to be the largest cardinality of a chain ending at x with the sign assigned to x.) §.§ Topological parameters The index of a topological _2-space X is the smallest d such that there exists a continuous _2-map from X to S^d (the central symmetry making this latter a _2-space). By convention, it is equal to +∞ when there is no such map for any d, and to -1 when X is empty. The coindex of a topological _2-space X is the largest d such that there exists a continuous _2-map from S^d to X. By convention, it is equal to -1 when X is empty and to +∞ when there is such a map for arbitrarily large d. (Note that when X is non-empty, such a map always exists for d=0.) 
Therefore, both the index and the coindex take their values in {-1,0,1,2,…}∪{+∞}. The Borsuk–Ulam theorem implies (and is actually equivalent to) the inequality (X) ≤(X) for every topological _2-space X. We already defined the notion of free simplicial _2-complexes. Similarly, a free _2-space is such that every orbit is of size 2. It is immediate to check that the index and the coindex of non-free topological _2-spaces are equal to +∞. So, even if the definition of index and coindex is valid for non-free topological _2-spaces, it is only relevant for free topological _2-spaces.

A topological parameter that is closely related to the connectivity is the _2-connectivity. The _2-connectivity of a topological space X is denoted by __2(X) and is the maximal integer d such that H_k(X,_2)=0 for all k ∈{-1,0,1,…,d}. The classical Hurewicz theorem states in particular that the inequality (X)≤__2(X) holds for every such topological space X.

The last topological parameter we consider is also maybe the most abstract. We define it for a (finite) free simplicial _2-complex . There always exists a _2-map from such a to the infinite-dimensional sphere S^∞. This can be easily achieved by induction on the dimension of , the sphere S^∞ being contractible; see <cit.> or <cit.> for a complete proof. Pick any such map f. This _2-map f induces a map f̅ / _2 → S^∞ / _2 = P^∞ and in cohomology we get a map f̅^* H^*( P^∞,_2) → H^*(/_2,_2). The graded algebra H^*( P^∞,_2) is of the form _2[α], with generator α taken in H^1( P^∞,_2) <cit.>. Denoting by ϖ_1() the image of α by f̅^*, the cohomological index of , also called its Stiefel–Whitney height, is the maximum n such that (ϖ_1())^n ∈ H^n(/_2,_2) is not 0. (Here, the power n is computed according to the cup product of the cohomology ring H^*(/_2,_2).) We denote it by (). The cohomological index is well-defined because the map f̅^* does not depend on the choice for f: this is because all _2-maps from to S^∞ induce the same map / _2 → S^∞ / _2 = P^∞ up to homotopy; see <cit.>.

§.§ General relations between various parameters

For a free (finite) simplicial _2-complex , we always have the following two chains of inequalities:

(a1) 1 + connectivity ≤ coindex, (b1) coindex ≤ cohomological index, (c) cohomological index ≤ index, (d) index ≤ cross-index of the face poset, (e) cross-index of the face poset ≤ dimension of the complex;

(a2) 1 + connectivity ≤ 1 + _2-connectivity, (b2) 1 + _2-connectivity ≤ cohomological index;

where () denotes the face poset of , i.e., the poset of its non-empty simplices ordered by inclusion.

We now discuss briefly each inequality. Inequality (a1), i.e., 1 + connectivity ≤ coindex, is proved for instance by Matoušek in his book <cit.>. (The objective in this reference is to prove a lower bound on the index, but the proof is actually showing the stronger result with coindex.) Inequality (a2) is a consequence of the Hurewicz theorem. Inequality (b1) is a now classical inequality, but we provide a complete and self-contained proof. We first establish that the cohomological index of a sphere is at least its dimension (it is actually equal; see the discussion of inequality (<ref>) below). The conclusion will then follow easily. The graded algebra H^*( P^d,_2) is of the form _2[β]/β^d+1 for some generator β <cit.>. The inclusion map S^d ↪ S^∞ induces an injective homomorphism H^*( P^∞,_2) → H^*( P^d,_2) (this is because, written as CW-complexes, P^d is exactly the d-skeleton of P^∞; see <cit.> for a proof of a similar property for homology). This injectivity implies in particular that ϖ_1(S^d) = β, and thus (ϖ_1(S^d))^d ≠ 0. Hence, (S^d) is at least d.
To conclude, assume the existence of a _2-map g S^d →. The composition g̅^* ∘f̅^* H^*( P^∞,_2) → H^*( P^d,_2), with f̅^* defined as in Section <ref>, implies that (ϖ_1(S^d))^n=0 if (ϖ_1())^n=0. Hence, (ϖ_1())^d≠ 0. This inequality requires some work, and we refer to the work by Conner and Floyd <cit.> (where we use the isomorphism between homology and cohomology for finite simplical complexes and coefficients in a field); see also <cit.>. The quantity (S^d) is at most d (actually, is equal to d because of the argument used for proving inequality (<ref>)), because H^n(S^d/_2)=0 when n>d. Again, the cohomological index being an increasing parameter, we get the desired inequality. Let t = (()). Any _2-map from () to Q_t lifts to a _2-map from () to Q_t. The simplicial complex Q_t is _2-homeomorphic to a t-dimensional sphere. This implies inequality (<ref>). The dimension of is the same as the dimension of its face poset; therefore, inequality (<ref>) is a specialization for P=() of the inequality between cross-index and dimension; see end of Section <ref>. § COMPUTABILITY AND COMPLEXITY OF PARAMETERS §.§ Decidability and hardness Clearly, all combinatorial parameters of Section <ref> are decidable (i.e., given an integer k, there is an algorithm deciding whether the parameter is at most k). The same thing holds for topological parameters based on homological or cohomological quantities. For the other parameters, we do not know the answer, except for ((G)) and (_0(G)). The former is undecidable since, via Csorba's result, it is equivalent to the decidability of topological connectivity in general, which is not possible <cit.>. (We reduce to _2-complexes just by taking the product with a sphere of sufficiently large dimension.) For the latter, which is a better bound, we have the following. The quantity (_0(G)) is decidable. Actually, the proof will make clear that there is an algorithm realizing the decision task in a time that is polynomial in the size of _0(G). If G has no vertex, then (_0(G)) is -2 because _0(G) has no vertices and no simplices. If G has at least one vertex and no edges, then (_0(G)) is -1 because _0(G) has at least two 0-simplices (obtained from a same vertex in the graph) but no 1-simplices. For the remaining cases, we assume that G has at least one edge, and thus that _0(G) is path-connected because (G) is non-empty and _0(G) is the suspension of (G) (Theorem <ref>). Moreover, we will use the (well-known) fact that when G is connected, (G) is path-connected if and only if G is not bipartite (see, e.g.,  <cit.> where it is stated for the neighborhood complex, which is homotopy equivalent according to Theorem <ref>). If G is bipartite or not connected, then (_0(G)) is 0 because in that case (G) is not path-connected, which implies that the reduced 1-dimensional homology of _0(G) in non-trivial (consequence of Theorem <ref> and equality (<ref>)). If G is non-bipartite and connected, then (_0(G)) is equal to its homological connectivity over because in that case (G) is path-connected, which implies that _0(G) is simply connected (by equality (<ref>)) and thus that the Hurewicz theorem applies. Since homology over can be computed via elementary linear algebra, this finishes the proof. The computation of ω(G) is among the first problems that have been proved -hard <cit.>. The computation of () and b(G) are -hard as well. 
For the first parameter, see <cit.>, and for the second, this can be shown by a direct reduction from the “maximum balanced biclique,” which is -hard <cit.>: the existence in G of a biclique whose parts are both of size at least k is equivalent to b(G') ≥ 2k, where G' is the disjoint union of G and all possible bicliques with part sizes ℓ,m, with ℓ+m=2k and ℓ≤ k-1. The complexity status of the other parameters is not known. Note that even if computing the homology or cohomology is polynomial in terms of the size of the simplicial complex, it does not imply any polynomiality result for graphs since the complexes we consider are exponential in the size of the graph. §.§ Borsuk–Ulam boundary Except for the clique number, the Borsuk–Ulam theorem—or a re-proof of it—is used to prove (G) ≤χ(G) for all lower bounds below the red line in Figure <ref>. Moreover, if we also discard the lower bound (), the inequality (G) ≤χ(G) can be used to establish the Borsuk–Ulam theorem, as we will discuss further below. Concerning the inequality () ≤χ(G) (with a Kneser representation of G), we are not aware of any way to derive the Borsuk–Ulam theorem from it. This will also be discussed a bit further at the end of the section. On the other hand, the proof of (G) ≤χ(G) is trivial when (G) ∈{((G))+2,(_0(G))+1,((G))+2,((K_2,G))+2,(G)} . For (G), it is immediate from the definition. For the other bounds, it is also immediate once we notice that any proper coloring of G with n colors translates directly into a _2-map from (G), _0(G), and (K_2,G) to respectively (K_n), _0(K_n), and (K_2,K_n). These inequalities are “functorial,” since they are just translations of the existence of a map in a setting into another setting. It does not seem that any of these inequalities can be used to prove the Borsuk–Ulam theorem. These are the reasons why we call the red line in Figure <ref> the Borsuk–Ulam boundary: the topological lower bounds above the line does not require the Borsuk–Ulam theorem; the topological lower bounds below require the Borsuk–Ulam theorem (apart possibly for ()). We are fully aware that this kind of statements is hard to support with mathematical arguments. Yet, we provide now an elementary proof of the Borsuk–Ulam theorem from the inequality ((G))+3 ≤χ(G), and a complexity result showing that the statement ((G))+2 ≤χ(G) is somehow equivalent to the Borsuk–Ulam theorem. We finish with a computational problem related to Schrijver's graph, which also illustrates the fact all topological lower bounds below the dashed line in Figure <ref> require the Borsuk–Ulam theorem (apart possibly for ()). §.§.§ Proving the Borsuk–Ulam theorem from ((G))+3 ≤χ(G) The Borsuk graph ^d(ε) has the points of the unit sphere S^d as vertices, and has an edge between two vertices if they are at distance at least 2-ε apart (Euclidean distance). It is well-known that the Borsuk–Ulam theorem is an immediate consequence of the chromatic number of ^d(ε) being at least d+2. We construct a graph G from the simplicial complex ^r(∂^d+1) (r-th barycentric subdivision of the boundary of the (d+1)-dimensional cross-polytope), as done in Csorba's proof of Theorem <ref> (Section <ref>), where r is a sufficiently large integer. Note that the graph G is isomorphic to a subgraph of ^d(ε) by construction. The simplicial complexes (G) and ^r(∂^d+1) are thus _2-homotopy equivalent, which implies that ((G)) = d-1. The inequality ((G))+3 ≤χ(G) implies then that χ(^d(ε)) ≥ d+2, as required. 
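A small numerical illustration of the Borsuk graph may help; the sketch below (our own, with arbitrary choices of sample size and ε) builds a finite subgraph of ^1(ε) on equally spaced points of the circle and checks that it contains an odd cycle, so that two colors already do not suffice. This is only the elementary case d=1 of the bound χ(^d(ε)) ≥ d+2.

import math
from collections import deque

def borsuk_graph_on_circle(n, eps):
    """Finite subgraph of the Borsuk graph B^1(eps): n equally spaced points on the
    unit circle, two of them adjacent when their Euclidean distance is >= 2 - eps."""
    points = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n)) for i in range(n)]
    adjacency = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) >= 2 - eps:
                adjacency[i].add(j)
                adjacency[j].add(i)
    return adjacency

def is_bipartite(adjacency):
    """BFS 2-coloring; returns False exactly when the graph contains an odd cycle."""
    color = {}
    for start in adjacency:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False
    return True

adjacency = borsuk_graph_on_circle(n=101, eps=0.01)
print("bipartite:", is_bipartite(adjacency))   # expected: False, so at least 3 colors are needed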
§.§.§ The coindex lower bound is polynomially equivalent to the Borsuk–Ulam theorem The problem ε-Borsuk–Ulam is defined as follows. An arithmetic circuit is a representation of a continuous function defined by a few elementary operations (addition, multiplication, maximum, etc.); see, e.g.,  <cit.>. ε-Borsuk–Ulam: Input. An arithmetic circuit f^d+1→^d; a constant ε∈. Task. Find a point x ∈ S^d such that f(x) - f(-x)_∞≤ε. The problem ε-Borsuk–Ulam is -complete <cit.>. The complexity class is one of the fundamental subclasses of , the -search problems that always have a solution. It has been introduced, with other classes, by Papadimitriou in 1994 <cit.> and is defined as those problems that can be reduced to the problem of finding another odd-degree vertex in a graph which is given with a first odd-degree vertex. ε-Borsuk–Ulam being -complete means, as for -complete problems, that it is unlikely that the problem be solved in polynomial time. (Note that ε-Borsuk–Ulam is not in since it is not a decision problem.) Now, we define a computational problem related to the inequality ((G))+2 ≤χ(G). A _2-circuit g is a circuit such that g(-x)=-g(x) for all x in the domain of g (this property can be verified in polynomial time.) Coind-lower-bound: Input. A graph G, along with a (non-necessarily proper) coloring using at most d+1 distinct colors; an arithmetic _2-circuit g^d+1→^V(G); a constant δ∈. Task. Find one of the following. * A point x ∈ S^d such that g(x)_∞≤δ. * A point x ∈ S^d such that {s · v s ∈{+,-}, v ∈ V(G), s g_v(x) > 1/2(|V(G)|+1)δ} is not a simplex of (G). * A monochromatic edge of G. The rationale of the problem is the following. It considers implicitly that (G) is embedded with the two images of the orbit of each vertex v located at e_v and -e_v (the e_v being the unit vectors of the standard basis of ^V(G)). If the coloring is proper, the inequality ((G))+2 ≤χ(G) implies that there is no continuous _2-map S^d→(G). Thus, any continuous _2-map ^d+1→^V(G) (as g) does not map S^d to (G) (expressed in two ways via type-(<ref>) solutions and type-(<ref>) solutions). Working with arithmetic circuit imposes some technicalities. Coind-lower-bound is polynomially equivalent to ε-Borsuk–Ulam. Since the proof is a bit long and technical, it is given in the appendix. §.§.§ Schrijver's graphs computationally Schrijver's graphs (n,k) have been introduced in Section <ref>. The problem Schrijver is defined as follows. A Boolean circuit is a representation of a Boolean function defined by a few elementary operations (AND, OR, NOT). Schrijver: Input. Two integers n and k such that n≥ 2k-1; a Boolean circuit c{2-stable k-subsets of [n]}→ [n-2k+1] representing a coloring of (n,k). Task. Find a monochromatic edge of (n,k), i.e., two disjoint 2-stable k-subsets S,T of [n] such that c(S)=c(T). Since χ((n,k))=n-2k+2, the coloring c cannot be proper and a monochromatic edge always exists. Haviv proved the following result, which means that ε-Borsuk–Ulam can be polynomially reduced from and to Schrijver. The proof makes these reductions explicit. Schrijver is -complete. The inequality χ((n,k))≥ n-2k+2 can be established without too much work from all topological lower bounds below the dashed line of Figure <ref>, except from (), which provides a lower bound of n-4k+4 when is the natural Kneser representation ([n] being its vertex set, and the 2-stable k-subsets being the edges). This again shows that these topological lower bounds somehow implies the Borsuk–Ulam theorem. 
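The objects involved in Schrijver are small enough to be experimented with directly; the following brute-force Python sketch (our own helper names, exponential, and only meant for tiny instances) enumerates the 2-stable k-subsets, builds the Schrijver graph, and confirms the value n-2k+2 of its chromatic number on a few small cases.

from itertools import combinations, product

def two_stable_subsets(n, k):
    """All 2-stable k-subsets of {1,...,n}: no two chosen elements are consecutive
    on the n-cycle (1 and n being considered as consecutive)."""
    return [frozenset(S) for S in combinations(range(1, n + 1), k)
            if all(2 <= (b - a) % n <= n - 2 for a, b in combinations(S, 2))]

def schrijver_graph(n, k):
    """Vertices: 2-stable k-subsets of [n]; edges: pairs of disjoint vertices."""
    V = two_stable_subsets(n, k)
    E = [(S, T) for S, T in combinations(V, 2) if not (S & T)]
    return V, E

def chromatic_number(V, E):
    """Brute-force chromatic number (tiny graphs only)."""
    for t in range(1, len(V) + 1):
        for assignment in product(range(t), repeat=len(V)):
            color = dict(zip(V, assignment))
            if all(color[S] != color[T] for S, T in E):
                return t
    return 0

for n, k in [(5, 2), (6, 2), (7, 3)]:
    V, E = schrijver_graph(n, k)
    print(f"SG({n},{k}): chi = {chromatic_number(V, E)}, n - 2k + 2 = {n - 2 * k + 2}")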
Regarding the colorability defect, we note that () is at most n-2k+1 for every Kneser representation of (n,k), at least when n is odd: indeed, Corollary 2 in <cit.> shows that when the colorability defect lower bound is tight for some Kneser representation, then the circular chromatic number is equal to the usual chromatic number, which is not the case for Schrijver graphs with odd n. This might indicate that the colorability defect lower bound is “weaker” than the Borsuk–Ulam theorem, and may not require this latter theorem to be established.

The computational problem Kneser is defined from Schrijver just by changing the domain of the Boolean circuit c: it becomes the full collection of k-subsets of [n]. The following problem is probably among the most important problems at the intersection of topological combinatorics and complexity theory. Is Kneser -complete? A negative answer would imply that more elementary proofs of the Lovász theorem and of the colorability defect lower bound than those known could be possible.

§ CATEGORICAL PRODUCT OF GRAPHS AND TOPOLOGICAL SPACES

Given two graphs G and H, their categorical product G× H is the graph whose vertex set is V(G) × V(H) and for which (u,v)(u',v') forms an edge whenever uu' is an edge of G and vv' is an edge of H. This product has attracted attention especially because it appears in the famous Hedetniemi conjecture <cit.> stating that χ(G× H) is equal to the minimum of χ(G) and χ(H). This conjecture has recently been disproved by Shitov <cit.>, but it is known to hold for some graphs for which topological bounds are tight, as we discuss now. Consider all graphs for which a given topological lower bound is tight. We may then ask whether they satisfy Hedetniemi's conjecture. The answer is known to be `yes' for some topological lower bounds <cit.>. The most general result in this line is the following theorem by Daneshpajouh, Karasev, and Volovikov <cit.> (an immediate consequence of Corollary 2.6 of their paper), which implies all previous results of this kind because the corresponding topological lower bounds lie below the cohomological index lower bound in Figure <ref>. Consider two graphs G and H for which the cohomological index lower bound is tight. Then χ(G × H) = min(χ(G),χ(H)). The proof of Theorem <ref> relies on the following result. For every pair of graphs G,H, the complexes (G × H) and (G) ×(H) are _2-homotopy equivalent. (The product (G) ×(H) can be interpreted as a CW-complex, whose cells are the products of the simplices of (G) and (H), or simply as the Cartesian product of (G) and (H) seen as topological spaces, since only homotopy equivalence is at stake.) A similar relation does not hold for _0(·), as elementary examples show easily. Consider the case where G is an edge and H a triangle. Then G × H is the 6-cycle C_6. The complexes (G) and (H) are _2-homotopy equivalent respectively to S^0 and S^1 (Example <ref>). Since G × H is C_6, the complex (G× H) is _2-homeomorphic to the disjoint union of two copies of S^1 × [0,1] (Example <ref>, the stronger _2-homeomorphism being immediate), which is indeed _2-homotopy equivalent to the product of S^0 and S^1. The proof of Theorem <ref> also relies on a Hedetniemi-like relation for the cohomological index of _2-spaces. Most previous results on lower bounds below the cohomological index lower bound in Figure <ref> also required similar relations.
Here are all known relations of this type: * For every pair X,Y of topological spaces, we have (X × Y) = min((X),(Y)) (well-known). * For every pair X,Y of topological spaces, we have __2(X × Y) = min(__2(X),__2(Y)) (Künneth's formula). * For every pair of free _2-spaces X,Y, we have (X × Y) = min((X),(Y)) (easy to establish). * For every pair X,Y of free _2-spaces that admit _2-triangulations, we have (X × Y) = min((X),(Y)) (proved by Daneshpajouh, Karasev, and Volovikov <cit.>). Whether similar relations hold for the index parameter or for the cross-index parameter is open (see  <cit.>). A closely related open question is the following. For which parameters of Figure <ref>, do we have (G× H) = min((G),(H)) ? From the previous discussion (except for the clique number, but it is then immediate), we have this Hedetniemi-like relation when (·)∈{ω(·),((·)),((·)),__2((·)),((·))} . But this is open for all other bounds in Figure <ref>. § JOINS Join is an important operation in topology, and it is especially useful for the study of topological bounds. §.§ Preliminaries on the join and suspension operations in topology The join of two simplicial complexes and Ł, denoted by * Ł, is the simplicial complex whose vertices are the elements of V() ⊎ V(Ł) and whose simplices are all A ⊎ B with A ∈ and B ∈Ł. Remark that the join operation is commutative and associative (up to renaming of the vertices). Since * S^0 is homeomorphic to the suspension of (see Section <ref> for the definition of the suspension of a topological space), the simplicial complex * S^0 is also called the suspension of and denoted by (). The join operation behaves well with respect to many parameters. Regarding topological connectivity, we have the following relation, originally proved by Milnor <cit.>: ( * Ł) ≥() + (Ł) +2 . In the case of _2-connectivity, the Künneth formula provides directly (we are working with coefficents in a field): __2( * Ł) = __2() + __2(Ł) +2 . The special cases when Ł is S^0—the suspension operation—is useful in many situations, as this can already be seen by the present paper. For instance, equality (<ref>) implies that (()) ≥() + 1 . In the case of homological connectivity for coefficients in a field, we have equality; this is a direct consequence of equality (<ref>) in the case of _2. We have even the following equality (see <cit.>): H_i+1((),) = H_i(,) . We also have equality in case of the cohomological index <cit.>: ( * Ł) = () + (Ł) +1 . For the index and coindex, the following is immediate: ( * Ł) ≥() + (Ł) +1 ( * Ł) ≤() + (Ł) +1 . We finish this section by a short discussion on inequality (<ref>) because it is sometimes misstated as an equality in the combinatorial literature (and even the more general inequality (<ref>) is sometimes stated erroneously as an equality). The gap can be arbitrarily large. We state it and provide a full proof. For every integer n≥ 3, there is an n-dimensional simplicial complex _n such that ((_n))≥ n and (_n) = 0. According to <cit.>, for every n≥ 3, there is a smooth homology sphere of dimension n, whose fundamental group is a nontrivial perfect group. Let be any triangulation of such homology sphere (which exists then by Whitehead's theorem <cit.>). It is thus a simplicial complex satisfying the following three properties: * is path-connected. * π_1() is a nontrivial perfect group. * H_i(,)=0 for i=0,…,n-1. The simplicial complex () is simply connected because of item <ref> and inequality (<ref>). We can therefore apply the Hurewicz theorem to (). 
Since H_i+1((),) = H_i(,) for all i (by (<ref>)), we conclude that () is n-connected. Further, considering the importance of the connectivity of the suspension for the study of box complexes, we provide here a characterization of equality in (<ref>). For any path-connected simplicial complex , we have (()) = () + 1 ⟺ π_1() is trivial or not a perfect group . (Actually, this proposition is true for any topological space with the structure of a CW-complex.) If is not path-connected, then we always have the equality in (<ref>) (as it can easily be seen by using (<ref>) and the Hurewicz theorem). By inequality (<ref>), () is simply connected because is path-connected. Therefore, the Hurewicz theorem combined with the fact that H_i+1((),) = H_i(,) for all i≥ -1 implies the following equality: (()) = min{n H_n(,) ≠ 0} . Suppose now that π_1() is trivial. Then by the Hurewicz theorem, the topological connectivity of is the smallest n, minus 1, such that H_n(,) is non-zero. With the help of equality (<ref>), we get the desired equality. Suppose then that π_1() is not perfect. Then the topological connectivity of is 0. By the Hurewicz theorem, H_1(,) is the abelianization of π_1(). Since this latter is not a perfect group, H_1(,) is non-zero. Since H_0(,) is zero ( is path-connected), equality (<ref>) implies the desired equality. Suppose now that π_1() is a nontrivial perfect group. Then again the topological connectivity of is 0. By the Hurewicz theorem, H_1(,) vanishes, which implies according to equality (<ref>) that (()) > 1. §.§ Box complexes of join Given two graphs G and H, their join G * H is the graph formed from disjoint copies of G and H and by connecting each vertex of G to each vertex of H. One of the main new results of this paper, mentioned in the introduction, is Theorem <ref> below. It is a result relating the box complex (G * H) of the join of two graphs G and H with that of the join of their box complexes (G) and (H). For _0(·), we have even a stronger result, with a simpler proof. For every pair of graphs G, H, the complexes _0(G * H) and _0(G) * _0(H) are _2-homeomorphic. We prove a more general fact: (_0(G * H)) and (_0(G)) *(_0(H)) are _2-isomorphic. (Given a simplicial complex , we denote by () its first barycentric subdivision.) Let A' and A” be two disjoint subsets of V(G * H) such that (G*H)[A',A”] is a complete bipartite subgraph of G*H. Write A' as A'_G ∪ A'_H where A'_G = A' ∩ V(G) and A'_H = A' ∩ V(H), and do similarly for A”. For A' and A” not both empty, set f(A' ⊎ A”) = (A'_G ⊎ A”_G) ⊎ (A'_H ⊎ A”_H). It is a simplicial _2-map from (_0(G * H)) to (_0(G)) *(_0(H)). Let B' and B” (resp. C' and C”) be two disjoint subsets of V(G) (resp. V(H)) such that G[B',B”] (resp. H[C',C”]) is a complete bipartite subgraph of G (resp. H). For B', B”, C', and C” not all empty, set g((B'⊎ B”) ⊎ (C'⊎ C”)) = (B' ⊎ C') ⊎ (B”⊎ C”). It is a simplicial _2-map from (_0(G)) *(_0(H)) to (_0(G * H)). It is immediate that f∘ g = g∘ f= id. A similar result is not true for the box complex (G). (With Theorem <ref>, this might show that _0(G) is a bit more handy than (G).) Yet, we have the following weaker version. For every pair of graphs G,H, the complexes (G * H) and ( (G) * (H)) are homotopy equivalent. Consider the case where G is the complete graph K_m and H the complete graph K_n. Then G*H is K_m+n. We have _0(G), _0(H), and _0(G*H) respectively _2-homeomorphic to S^m-1, S^n-1, and S^m+n-1 (Example <ref>). 
Thus _0(G*H) and _0(G)*_0(H)=S^m-1*S^n-1 are indeed _2-homeomorphic, as expected by Proposition <ref>, since they are equal. The complexes (G), (H), and (G*H) are homotopy equivalent respectively to S^m-2, S^n-2, and S^m+n-2 (Example <ref>). Thus, (G*H) and ((G)*(H)) are indeed homotopy equivalent. When H is a single vertex, Theorem <ref> is a result by Csorba <cit.>, who actually proved the stronger _2-homotopy equivalence for this special case. We conjecture that the _2-homotopy equivalence holds actually for the general case as well. Note that, by applying (·) on each of (G * H) and ( (G) * (H)) in Theorem <ref>, we get a weaker version of Proposition <ref>, namely that _0(G * H) and _0(G) * _0(H) are homotopy equivalent. (Here, we see the suspension as the join with S^0, and use the commutativity and associativity of the join.) The main tool of the proof of Theorem <ref> is the Fiber theorem of Quillen <cit.>, which states that an order preserving map f P→ Q of posets is a homotopy equivalence if f^-1(Q_≼ q) is contractible for every q∈ Q. The notation Q_≼ q stands for the set {x∈ Q x≼ q}. (An elementary proof of Quillen's Fiber theorem has been given by Barmak <cit.>.) We actually prove that (G * H) and ( (G) * (H)) are homotopy equivalent. Since the join operation preserves homotopy equivalence (see for instance <cit.>), the desired result will follow then from Theorem <ref>. Let P be the face poset of (G * H) and Q that of (G) * (H) (note that face posets do not contain empty faces). Define a new poset Q as Q ∪{r,s}, where r and s are two incomparable extra elements such that q ≺ r and q ≺ s for all q ∈ Q. Since the order complex of Q is homeomorphic to the suspension of the order complex of Q, the conclusion will follow from the proof that P and Q are homotopy equivalent. This will be done by applying Quillen's Fiber theorem to the following map. Let φ P →Q be defined for A ⊆ V(G*H) such that _G*H(A)≠∅ by φ(A) = A if _G(A∩ V(G))≠∅ and _H(A∩ V(H))≠∅, r if _G(A∩ V(G))=∅, s if _H(A∩ V(H))=∅. We check that φ is well defined, namely that it is impossible that both _G(A∩ V(G)) and _H(A∩ V(H)) are simultaneously empty for A ⊆ V(G*H) such that _G*H(A)≠∅. This is clear when A∩ V(G) or A∩ V(H) are empty. (Recall that the common neighbors of the empty set is the full vertex set.) So, suppose that both A∩ V(G) and A∩ V(H) are nonempty. Pick v in _G*H(A). Without loss of generality, v is in V(G), and thus _G(A∩ V(G)) is nonempty. The map φ is obviously order preserving. In order to use Quillen's Fiber theorem, it remains to check that D_q φ^-1(Q_≼ q) is contractible for every q ∈Q. First, consider the case when q ∈ Q. In this case, q is actually a subset A of V(G * H) such that _G(A∩ V(G))≠∅ and _H(A∩ V(H))≠∅. Since φ(A)=A, we have A ∈ D_A. Moreover, by definition of D_A, any other B∈ D_A is such that φ(B) ≼ A and thus such that φ(B) = B, which implies B ≼ A. The set A is the unique maximal element of D_A, which shows that D_A is a cone and therefore contractible. Consider now the case when q is r or s. Without loss of generality, we assume that q is r. We have already checked that it is impossible that both _G(A∩ V(G)) and _H(A∩ V(H)) are simultaneously empty for A ⊆ V(G*H) such that _G*H(A)≠∅. It means that if A is in D_r, then _H(A∩ V(H)) is nonempty, and thus φ(A ∪ V(G)) = A ∪ V(G), which implies in particular that A ∪ V(G) belongs to D_r. In other words, the map ψ A ↦ A ∪ V(G) is an order preserving map D_r → D_r. Note that ψ(A) ≽ A for every A is D_r. 
In other words, we have ψ≽id_D_r for the induced order on the self-maps of D_r. A standard result in combinatorial topology says then that the image of ψ is homotopy equivalent to D_r (see, e.g., <cit.>). Since V(G) belongs to D_r, it also belongs to the image of ψ, and it is its unique minimal element. Thus the image of ψ is contractible and so is D_r. We can therefore apply Quillen's Fiber theorem and we get that φ is a homotopy equivalence, which finishes the proof. § COMMENTING FIGURE <REF> §.§ Arrows In this section, we review every arrow of Figure <ref>: for each arrow, we provide for the corresponding inequality a proof or a reference to the literature, and discuss the possible gaps. We prove the inequality. Any clique K_n of G induces a homomorphism from K_n to G. This homomorphism becomes a simplicial _2-map from (K_n) to (G) when it is read at the level of box complexes. In Example <ref>, we have seen that (K_n) is homotopy equivalent to S^n-2; the same argument actually shows its _2-homotopy equivalence with S^n-2. Therefore ((G)) ≥ n-2. The gap can be arbitrarily large: the clique number of the Kneser graph (3k-1,k) is 2 while we have (((3k-1,k)))+3 = k+1, as stated at the beginning of Section <ref>; Theorem <ref> shows then that (((3k-1,k)))+3 = k+1 as well, which is a lower bound on (((3k-1,k)))+2, established by inequality (<ref>). (We have actually (((3k-1,k)))+2 = k+1 by using the upper bound given by the chromatic number.) The inequality is a special case of inequality (<ref>). The gap can be arbitrarily large: consider for instance the graph formed by two disjoint copies of K_n+2; its box complex (G) is _2-homotopy equivalent to the disjoint union of two n-dimensional spheres; it is thus not 0-connected, while it has a coindex equal to n. The inequality is a direct consequence of Theorem <ref> combined with inequality (<ref>). We prove that the gap can be arbitrarily large. This fact, which has not been emphasized in the literature yet, is another evidence on the better behavior of _0(G) for providing efficient lower bounds on the chromatic number. Fix an integer n > 5. Choose any simplicial complex _n as in Proposition <ref>. Consider now × S^n+1 and equip this space with the _2-action that acts on the first component trivially and on the second component as the antipodal action. By Theorem <ref>, there is a graph G such that (G) and × S^n+1 have the same _2-homotopy type. Now, by Theorem <ref>, _0(G) and (× S^n+1) are _2-homotopy equivalent. So, to finish the proof it is enough to show that the following two equalities hold: ( × S^n+1 )=0 ( (× S^n+1) ) ≥ n . The first equality of (<ref>) is a direct consequence of × S^n+1 being path-connected and π_1( × S^n+1 )=π_1()×π_1( S^n+1 ) = π_1() ≠ 0. To show the second equality, note first that H_i( × S^n+1, ) = H_i(, ) for all i < n: indeed, the ith homology of a CW-complex X depends only on its (i+1)-skeleton; considering S^n+1 as a CW-complex with just two cells (one zero-cell and one (n+1)-cell), the (i+1)-skeleton of × S^n+1 is just the (i+1)-skeleton of when i < n. We have H_i(, )=0 for all i < n because was chosen like this in the proof of Proposition <ref> (and anyway it is a consequence of the topological connectivity of () being exactly n). As H_i+1( (× S^n+1), )= H_i( × S^n+1, ) for all i≥ 0, the Hurewicz theorem implies the second equality of (<ref>) ((× S^n+1) is simply connected). Any _2-map between two spaces can be lifted to a _2-map between their suspensions. 
This implies the desired inequality because the suspension of a sphere is a sphere of one dimension higher and because the suspension of (G) is _2-homotopy equivalent to _0(G) (Theorem <ref>). Simonyi, Tardos, and Vrećica <cit.> have shown that the coindex of any 2-dimensional compact manifold with even genus is equal to 1, while the coindex of its suspension is equal to 3. Theorem <ref> implies then that there are graphs for which the gap between ((G)) and (_0(G)) can be equal to 2. In Section <ref> below, we show the existence of a topological _2-space with a gap of 3 between its coindex and the coindex of its suspension. Since this _2-space is actually smooth and compact, it admits a triangulation that is _2-equivariant <cit.>. There exists therefore a simplicial _2-complex with a gap of 3 between its coindex and the coindex of its suspension, and Theorem <ref> implies then that there are graphs for which the gap between ((G)) and (_0(G)) can be equal to 3. Whether this gap can be larger is unknown. The inequality is a special case of inequality (<ref>). We prove that the gap can be arbitrarily large. We take the same example as in the proof of inequality (<ref>), namely two disjoint copies of a complete graph, which we choose here to be K_n+1. Denote this graph by G. Its box complex _0(G) can be described as follows. Let ∂_1^n+1 and ∂_2^n+1 be two disjoint copies of the boundary of the (n+1)-dimensional cross-polytope. Pick in each of them two opposite facets: F_1 and F_1' for ∂_1^n+1 and F_2 and F_2' for ∂_2^n+1. Then _0(G) is isomorphic to ∂_1^n+1∪∂_2^n+1∪ (F_1 * F_2) ∪ (F'_1 * F_2'). The boundary of an (n+1)-dimensional cross-polytope being an n-dimensional sphere, the coindex of _0(G) is at least n. On the other hand, contracting each F_1*F_2 and F_1'*F'_2 to a point shows that _0(G) is homotopy equivalent to two n-spheres sharing their North and South poles. It is a standard exercise to check that such space is homotopy equivalent to the wedge of S^1 and two disjoint copies of S^n. The equality between the __2((G))+1 and __2(_0(G)) is a consequence of equation (<ref>) and Theorem <ref>. The inequality is a special case of inequality (<ref>). We prove that the gap can be arbitrarily large. Take any simplicial complex with H_1(,)=_3 and H_i(,)=0 for i≠ 1 (such a simplicial complex exists; see, e.g., <cit.>). Fix a natural number n≥ 3 and set X=× S^n+1. Equip this space with the _2-action that acts on the first component trivially and on the second component as the antipodal action. The homology groups of (X) over vanishes in every dimension less than n+1, except in dimension 2, in which it is _3 (see equation (<ref>)). So, by Hurewicz's theorem, its topological connectivity is at most one. On the other hand, H_i((X),_2)=0 for i≤ n by the universal coefficient theorem, which implies __2((X)) ≥ n. Now, using Theorem <ref>, we are done. The inequality was proved by Alishahi and Hajiabolhassan <cit.>. Actually, they proved a stronger result, namely that the altermatic number—a parameter they introduced, which dominates the colorability defect—is a lower bound on (_0(G))+1. The gap can be arbitrarily large as shown by considering the case where G=K_n is the complete graph and its Kneser representation is a graph formed by a perfect matching: in that case ()=0 and (_0(G))=n-1. Yet, choosing as the hypergraph with n vertices and n edges formed by all possible singletons leads to a tight bound. 
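The two Kneser representations of K_n just discussed can be checked by hand, or with the following brute-force Python sketch (our own, exponential, and for illustration only), which computes the 2-colorability defect directly from its definition.

from itertools import combinations, product

def colorability_defect(vertices, edges):
    """2-colorability defect of a hypergraph: the minimum number of vertices to remove
    so that the surviving vertices admit a 2-coloring with no monochromatic surviving edge."""
    vertices = list(vertices)
    for size in range(len(vertices) + 1):
        for removed in combinations(vertices, size):
            kept = [v for v in vertices if v not in removed]
            surviving = [e for e in edges if not (e & set(removed))]
            for coloring in product((0, 1), repeat=len(kept)):
                color = dict(zip(kept, coloring))
                if all(len({color[v] for v in e}) == 2 for e in surviving):
                    return size
    return len(vertices)

n = 4
# Vertex i of K_n represented by the matching edge {2i-1, 2i} ...
matching = [frozenset({2 * i - 1, 2 * i}) for i in range(1, n + 1)]
# ... or by the singleton {i}.
singletons = [frozenset({i}) for i in range(1, n + 1)]

print(colorability_defect(range(1, 2 * n + 1), matching))    # 0: the bound is useless here
print(colorability_defect(range(1, n + 1), singletons))      # n = 4: the bound is tight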
Nevertheless, there are graphs for which the gap is arbitrarily large for every Kneser representation. An example is obtained by taking the join of an arbitrary number of C_5. Consider the graph G=C_5^*n. On the one hand, the colorability defect of any Kneser representation of C_5 is at most one. Indeed, in such a Kneser representation, every pair of non-adjacent vertices share a common element (as edges of the Kneser representation) that is not shared by any other vertex; interpreting every such common element as an edge of a new graph between the non-adjacent vertices shows that every Kneser representation contains a C_5; removing any edge of this C_5 leads to a path, whose edges can be two-colored. Any Kneser representation of G is the disjoint union of n Kneser representations of C_5 and thus has colorability defect upper bounded by n. On the other hand, (C_5) is homotopy equivalent to S^1 (Example <ref>). Theorem <ref> shows then that (G) is homotopy equivalent to S^3n-2. Since the chromatic number of G is 3n, it makes the bound ((G))+2 tight, which in turn implies that (_0(G))+1=3n. The equality between ((G))+1 and (_0(G)) is a consequence of equation (<ref>) and Theorem <ref>. The inequality is a special case of inequality (<ref>). The authors believe that the gap can be positive (and even arbitrarily large), but they have not been able to come up with a concrete example. The inequality is a special case of inequality (<ref>). We prove that the gap can be arbitrarily large. Consider the space X formed by S^n and two copies of S^1: one S^1 is attached to the North pole, and one S^1 is attached to the South pole. We assume that the _2-action is the central symmetry on S^n and exchanges the two S^1's. We have (X) ≥ n because (X) = n and __2 (X) = 0. We finish with Theorem <ref>. The inequality is a special case of inequality (<ref>). The gap can be arbitrarily large. Indeed, the odd projective space P^2n+1 can be equipped with a free _2-action so that its index is at least n <cit.>, while its cohomological index is 1 <cit.>, and we finish with Theorem <ref>. We always have (X) ≤((X)) ≤(X) + 1 for any topological _2-space X. The left-hand inequality is immediate from the definition of the index and the right-hand inequality is a consequence of the right inequality in equation (<ref>) (which is by the way also immediate). This explains why inequality (<ref>) holds, and also shows that the gap can be at most one. There are examples achieving this gap <cit.>. We finish with Theorem <ref>. The inequality was proved by Csorba et al. <cit.>. The gap can be arbitrarily large as shown by arbitrary complete bipartite graphs; see Example <ref>. The inequality was proved by Simonyi et al. <cit.>. (Note that it is not a special case of inequality (<ref>), because there is an extra barycentric subdivision.) The question on how large the gap can be is completely open. The inequality was proved by Simonyi et al. <cit.>. The question on how large the gap can be is completely open. The inequality is obvious from the definition. The gap can be arbitrarily large as shown by graphs with no C_4 (=K_2,2) and arbitrarily large chromatic number, which exist by Erdős's result <cit.>. For all inequalities there are graphs realizing them as equalities. Indeed, for each of the lower bounds ω(G), (), and ((G))+3, there are many graphs for which they are tight (obtained from the collection of perfect graphs or Kneser graphs). 
Actually, the complete graph is an example for which the three bounds are tight simultaneously (with being the complete 1-uniform hypergraph). §.§ A topological _2-space with a gap of three between its coindex and the coindex of its suspension The Brieskorn space M_p,q,r is defined as the intersection of the complex algebraic surface z_1^p + z_2^q + z_3^r = 0 with the unit 5-dimensional sphere |z_1|^2 + |z_2|^2 + |z_3|^2 = 1. Here, p, q, and r should be integer numbers non-smaller than 2. These spaces are well-studied objects from algebraic geometry. When p, q, and r are odd, they are naturally equipped with the free _2-action inherited from the antipodal action on the unit sphere. Existence of spaces with the following property was implicitly asked in Section 5 of the already cited paper by Simonyi, Tardos, and Vrecica <cit.> to study the gap in inequality (<ref>). We have ((M_p,q,r)) = 4 and (M_p,q,r)=1, whenever p, q, and r are pairwise coprime odd integers larger than 2. We need a preliminary property about Brieskorn spaces M_p,q,r. This property is actually common knowledge in algebraic topology but we were not able to find any bibliographical reference. For sake of reader's understanding, we write here a complete proof. The second and third homotopy groups of M_p,q,r vanish when 1/p + 1/q + 1/r - 1 is negative. In this case, M_p,q,r is diffeomorphic to a coset space of the form 𝖦/Π where 𝖦 is the universal cover of SL(2,ℝ ) and Π is a discrete subgroup of 𝖦 <cit.>. As a topological space, SL(2,ℝ) is homeomorphic to S^1×ℝ^2. (This can be derived quite easily from the Iwasawa decomposition of SL(2,ℝ); see <cit.>.) Since the universal cover of S^1 is ℝ, the space 𝖦 is ℝ^3 and thus contractible. The subgroup Π being discrete, it acts in a properly discontinuous way on 𝖦, which implies that the quotient map 𝖦→𝖦/Π is a covering maps (see <cit.> where the author prefers to use “covering space action” instead of “properly discontinuous action”). This implies in turn that there is a isomorphism between the homotopy groups π_n(𝖦) and π_n(𝖦/Π) for n≥ 2 <cit.>. In particular, 𝖦 being contractible, the second and third homotopy groups of M_p,q,r vanish. To ease the notation, denote by X any Brieskorn space M_p,q,r with p, q, and r pairwise coprime integers larger than 2. Then X has the following properties: * it is an integral 3-dimensional homology sphere (see the “Historical Remarks” in the introduction of the paper by Milnor <cit.>). * the homotopy groups π_2(X) and π_3(X) are 0 (Lemma <ref>). We claim that (X)=1 and ((X))≥ 3, which implies that ((X) = 4 (by inequalities (<ref>) and (<ref>) from Section <ref>). We finish the proof by showing these two equalities. By property <ref>, the reduced 0-dimensional homology of X vanishes, which makes it path-connected. By inequality (<ref>), we have then (X) ≥ 1. Assume for a contradiction that (X) ≥ 2. This implies the existence of a continuous _2-map S^2 → X. Because of property <ref>, we can extend this map to a continuous _2-map S^4 → X (connectivity is used to get a B^3→ X map that is a _2-map on the boundary, which allows then to get a _2-map S^3→ X, and then we repeat again this construction for the next dimension), which is not possible because X is 3-dimensional (we use again inequality (<ref>)). The space (X) is simply connected because X is path-connected (by inequality (<ref>)). Hence, Hurewicz's theorem applies and equality (<ref>) together with property <ref> imply that ((X))≥ 3. 
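For concreteness, a quick check that is not spelled out above: the smallest pairwise coprime odd exponents larger than 2 are (p,q,r)=(3,5,7), and they satisfy the negativity hypothesis of the preceding lemma, since
\[
\frac1p+\frac1q+\frac1r-1=\frac13+\frac15+\frac17-1=\frac{35+21+15-105}{105}=-\frac{34}{105}<0 .
\]
In fact every admissible triple does, because pairwise coprime odd integers larger than 2 are distinct, so that 1/p+1/q+1/r ≤ 1/3+1/5+1/7 < 1. Hence M_{3,5,7} already realizes the situation of the proposition: its coindex equals 1, while the coindex of its suspension equals 4.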
Proposition <ref> ensures that there exists a topological _2-space with a gap of 3 between the coindex of its suspension and its coindex. To achieve an arbitrary gap of k, the same proof works provided there exists a triangulable topological _2-space with * vanishing reduced homology up to dimension k-1. * vanishing homotopy groups from dimension 2 to k. We believe that such a space should exist for arbitrarily large k. §.§ Absent arrows Let two graph parameters be comparable if for all graphs they are ordered in the same way. Some parameters are not compared in Figure <ref>. They come in two categories: some pairs of such parameters are really not comparable; for other pairs, the question whether they are comparable or not is still open. The next proposition shows that Figure <ref> is exhaustive for ω(G): if there is no path in the figure from ω(G) to some parameter, this latter is not comparable with ω(G). None of the parameters max_() (over all Kneser representations), ((G))+3, (_0(G))+2, and __2((G))+3) is comparable with ω(G). Moreover, each of them can be arbitrarily larger than ω(G) and also arbitrarily smaller (the difference can be arbitrarily large in one direction or the other). For each parameter, we give a first family that shows that it can be made arbitrarily larger, and then we give a second family that shows that it can be made arbitrarily smaller. max_()): for Kneser graphs (3k,k+1) (see Section <ref>), this bound is tight (equal to k-1) while their clique number is 2; in the proof of inequality (<ref>), it has been shown that C_5^*n has max_()) at most n for every Kneser representation, while its clique number is equal to 2n. ((G))+3, (_0(G))+2, and __2((G))+3: again, for Kneser graphs (3k,k+1), these bounds are tight (equal to k-1) while their clique number is 2; however, disjoint union of two complete graphs gives a bound of 3 while the clique number is arbitrarily high. One of the messages of Proposition <ref> is that none of ω(G) and ((G))+3 provides a better bound on the chromatic number than the other. This must however be mitigated: it is known that “generically” the clique number provides a better bound. This is for instance formalized by a result due to Kahle <cit.>: while the clique number of the Erdős–Rényi graph G(n,1/2) is almost surely asymptotically equivalent to 2log_2(n), the value of ((G(n,1/2)))/log_2(n) lies almost surely asymptotically between 1 and 4/3. (The result is stated for the neighborhood complex, which is equivalent because of Theorem <ref>.) It is worth noticing that the bound b(G) is used in that paper to get the probabilistic upper bound on the connectivity of the neighborhood complex. Regarding parameters b(G) and χ(G): complete bipartite graphs show that b(G) can be arbitrarily larger than χ(G); and C_4-free graphs with high chromatic number (see proof of inequality (<ref>) where such graphs have already been used) show that χ(G) can be arbitrarily larger than b(G). Unfortunately, for the other pairs that are not compared, we were only able to obtain partial answers, like the following ones. Neither (_0(G)), nor __2(_0(G)) is comparable with ((G)). Moreover, each of them can be arbitrarily smaller than ((G)). We are going to see that there exists a (triangulable) topological _2-space X such that (X)=1 and ((X)) = 3, and a (triangulable) topological _2-space Y such that (Y)= n and __2 ((Y)) = 1. As usual, this provides then the desired conclusion with Theorem <ref>. 
Such a space X was actually described in the proof of Proposition <ref> since we established there that, for some values of p,q,r, the coindex of the Brieskorn space M_p,q,r is 1, while the connectivity of its suspension is 3 (see proof of Proposition <ref>). The space Y is formed by S^n and two copies of S^1: one S^1 is attached to the North pole, and one S^1 is attached to the South pole. We assume that the _2-action is the central symmetry on S^n and exchanges the two S^1's. We have (Y)= n and __2 ((Y)) = 1. It is plausible that each of (_0(G)) and __2(_0(G)) can also be arbitrarily larger than ((G)); this would actually be a consequence of the existence of complexes as described in Remark <ref>. Proposition <ref> shows that __2(_0(G))+2 can be smaller than (_0(G))+1. Whether it can be larger is open. The bound max_() can be arbitrarily smaller than ((G))+3. For G=C_5^*n, we have seen in the proof of inequality (<ref>) that max_(C_5^*n) is equal to n and that (G) is homotopy equivalent to S^3n-2, which implies that ((G))+3 is equal to 3n. We finish with the somehow only remaining questions about non-arrows. Do there exist graphs for which () is larger than ((G))+3? § COMPLEMENTARY REMARKS §.§ Further topological bounds The Hom complex (K_2,G) can be alternatively defined as the poset of “multihomomorphisms” from K_2 to G, and replacing K_2 by another arbitrary graph T provides another Hom complex (T,G); see, e.g., <cit.> or <cit.> for details. Björner and Lovász conjectured that ((T,G))+χ(T)+1 is a lower bounds on χ(G) (remember that (G) is homotopy equivalent to the order complex of (K_2,G)—Theorem <ref>—and thus the conjecture generalizes the bound based on the connectivity of (G)). This conjecture generated a lot of activity. Hoory and Linial <cit.> showed that the conjecture cannot be true for all graphs T, and Babson and Kozlov <cit.> established it when T is an odd cycle or a complete graph. This latter result has been considered as a breakthrough in topological combinatorics. Simplifications of the proof and complementary results were found soon after by Živaljević <cit.>, Schultz <cit.>, and Kozlov <cit.>. Anyway, Schultz proved that we always have ((G))+2 ≥((C_2r+1,G))+3 for any r, which shows that box complexes are not yet outdated by more general Hom complexes. §.§ Open questions We collect here the main open questions met in the survey. §.§.§ Improving Csoba's construction Is it possible to improve Csorba's construction, which shows that every free _2-simplicial complex is _2-homotopy equivalent to a certain (G)? See Section <ref>, p.page:csorba. §.§.§ Decidability of topological parameters Which of the parameters ((G)), (_0(G)), ((G)), and (_0(G)) are decidable? Which are not? See Section <ref>, p.page:dec. §.§.§ Computational complexity of topological parameters What is the complexity status of the computation of the topological parameters of Figure <ref>? Apart for b(G), (), and χ(G), which are -hard, the complexity status is open. See Section <ref>, p.page:compl. §.§.§ Computational complexity and the Borsuk–Ulam theorem Regarding complexity questions, maybe more fundamental than those mentioned above is Question <ref> (Section <ref>, p.q:kneser), asking whether proving that Kneser graphs have chromatic number n-2k+2 is as hard as establishing the Borsuk–Ulam theorem. §.§.§ Hedetniemi's conjecture beyond the chromatic number For some parameters of Figure <ref>, a Hedetniemi-type relation is known to hold. But it is still open for several of them. 
This is Question <ref> of Section <ref>, p.page:hedet-paramG. Moreover, similar questions can be asked for topological parameters attached to topological spaces, without any reference to graphs. See Section <ref>, p.page:hedet-param-top. §.§.§ Gap between ((G)) and (_0(G)) and generalized Brieskorn spaces Brieskorn spaces offer examples of free _2-spaces with coindex equal to 1 and with the coindex of their suspension equal to 4 (Proposition <ref>). They are used to show that the gap in inequality (<ref>) can be at least 2. Generalizations of Brieskorn spaces, as suggested in Remark <ref>, p.rem:coind-susp, would show the existence of a space with coindex 1 and arbitrarily large connectivity of its suspension, and would show in turn that the gap in inequality (<ref>) can be arbitrarily large (a question that is still open). §.§.§ Coindex and connectivity mod 2 for suspensions It is not known whether __2(_0(G))+1 can be larger than (_0(G)); see p.page:coind-conn2. This question can be formulated in a pure topological way: is there any topological _2-space X for which __2((X))+1 is larger than ((X))? §.§.§ Colorability defect and coindex Whether the colorability defect lower bound can be sometimes better than ((G))+2 is open; see Question <ref>, p.page:cdB. Note that a positive answer would also contribute to the question about the gap between ((G)) and (_0(G)). amsplain § PROOF OF THEOREM <REF> We first establish two technical lemmas. We denote the d-dimensional cross-polytope by ^d (notation already met in Section <ref> when we proved the Borsuk–Ulam theorem from the inequality ((G))+3≤χ(G)). There exists a (polynomially computable) simplicial _2-map ((K_d+1)) →∂^d. Identify the vertices of K_d+1 with the integers in {1,2,…,d+1}. This way, the vertices of (K_d+1) are identified with the integers in {± 1,± 2,…,± (d+1)}. For a vertex v of (K_d+1), set μ(v) (-1)^vv. Consider now a simplex σ of (K_d+1) and order its vertices in the increasing order of their absolute values: v_0,v_1,…,v_σ. Since no simplex of (K_d+1) can have vertices of opposite values, there is no ambiguity in the definition of this ordering. Let k be the number of sign changes in the sequence μ(v_0),μ(v_1),…,μ(v_σ), plus one. Let s be the sign of μ(v_0). Then, set λ(σ) s e_k (where the e_i are the unit vector of the standard basis of ^d). By definition of (K_d+1), the integer k is in {1,…,d}. The map λ is obviously a _2-map, and it is simplicial ((K_d+1)) →∂^d because when σ⊆σ' are two simplices of (K_d+1), the definition of k and s makes that λ(σ) and λ(σ') cannot be mapped to opposite vectors. There exists a (polynomially computable) piecewise affine _2-map h^V(K_d+1)→^d such that we have z_∞≤ d h(z)_∞ for those z∈^V(K_d+1) whose components are neither all positive, nor all negative. The simplicial complex ((K_d+1)) is a subcomplex of ∂^d+1. Let be the simplicial complex {0} * ∂^d (which can be seen as the triangulation of ^d obtained by adding a vertex at its center). We extend λ defined by Lemma <ref> to a map λ' defined on (∂^d+1) by setting λ'(v) 0 on the two vertices v of ∂^d+1 that are not in ((K_d+1)) (namely, the vertices corresponding to the facets containing the all-positive and all-negative points). The map λ' is a simplicial _2-map (∂^d+1) →. Now, for z ∈^V(K_d+1)∖{0}, set h(z) z_∞λ'(z/z_∞) (where λ' is identified with its affine extension), and set h(0) 0. This map h is a piecewise affine _2-map ^V(K_d+1)→^d. Since y_∞≥1/d for all points y ∈∂^d, the conclusion follows. We establish the two reductions. 
(Style1*=∙, Style2*=-, Hide=2, Indent=0.5cm, Space1=0.4cm, Space1*=0.4cm, Space2=0.2cm, Space2*=0.2cm) # Reduction of Coind-lower-bound to ε-Borsuk–Ulam ## Transforming the instances. Consider an instance of Coind-lower-bound. Denote by c the coloring of G, interpreted as a map V(G) → V(K_d+1). Whether c is proper can be checked in polynomial time. If it is not, we are done, and to formally satisfy the prescriptions of a reduction, we build any instance for ε-Borsuk–Ulam. If it is a proper coloring, then proceed as follows. For y=∑_v∈ V(G)y_ve_v, set c̅(y)∑_v∈ V(G)y_ve_c(v). This gives rise to a continuous _2-map c̅^V(G)→^V(K_d+1). Set then f h ∘c̅∘ g and εδ/d. This way we get an instance of ε-Borsuk–Ulam. ## Transforming the solutions. If the coloring is not proper, then any monochromatic edge is returned, independently of the solution of ε-Borsuk–Ulam (type-(<ref>) solution). Otherwise, consider a solution x ∈ S^d of ε-Borsuk–Ulam. We have f(x)-f(-x)_∞≤ε, i.e., f(x)_∞≤1/2ε since f is a _2-map by construction. Write g(x) as ∑_v∈ V(G)y_ve_v. Set σ{s· v s∈{+,-}, v ∈ V(G), s y_v >δ/(2q)}, with q=|V(G)|+1. If σ is not a simplex of (G), then x is a type-(<ref>) solution. If σ is a simplex of (G), then proceed as follows. For each u ∈ V(K_d+1), set z_u ∑_v∈ c^-1(u)y_v. Because c is a proper coloring, we actually have |z_u| ≥∑_v∈ c^-1(u) 2q|y_v|>δ|y_v| - 1/2δ, which implies z_∞+1/2δ≥y_∞. Since f(x)=h(z), we get h(z)_∞≤1/2ε, which in turn leads to z_∞≤1/2δ, with Lemma <ref> (since σ is a simplex of (G), the components of z are neither all positive, nor all negative). We have thus y_∞≤δ and therefore g(x)_∞≤δ, which means that x is a type-(<ref>) solution. # Reduction of ε-Borsuk–Ulam to Coind-lower-bound ## Transforming the instances. Consider an instance of ε-Borsuk–Ulam. Set G K_d+1 (complete graph with d+1 vertices), and color its vertices properly with d+1 colors. Set g x∈^d+1↦(f(-x) - f(x)) ∈^V(K_d+1), where ^d ^V(K_d+1) (by identifying ^V(K_d+1) with ^d+1), and δε. This way we get an instance of Coind-lower-bound. ## Transforming the solutions. Consider a solution of Coind-lower-bound. It cannot be a type-(<ref>) solution. It cannot be a type-(<ref>) solution either by property of the range of . Thus it is a type-(<ref>) solution x∈ S^d, such that g(x)_∞≤ε. Since is non-expansive, we have f(-x)-f(x)_∞≤ε, as desired.
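The map λ of the first lemma in the proof above is explicitly computable. The following short sketch is for illustration only (it is not part of the argument); a simplex is given as a list of integers in {±1,…,±(d+1)} containing no two opposite values, and the output is the pair (s,k) encoding the cross-polytope vertex s·e_k.

def mu(v):
    # mu(v) = (-1)^|v| * v, as in the lemma
    return ((-1) ** abs(v)) * v

def lam(simplex, d):
    # Order the vertices by increasing absolute value (unambiguous, since the
    # simplex contains no two opposite values).
    vs = sorted(simplex, key=abs)
    signs = [1 if mu(v) > 0 else -1 for v in vs]
    # k = number of sign changes in mu(v_0), ..., mu(v_m), plus one.
    k = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    # s = sign of mu(v_0).
    s = signs[0]
    # The lemma asserts that k lies in {1, ..., d} for the simplices considered there.
    return s, k

# Example: lam([1, -2, 3], d=3) returns (-1, 1), since mu takes the values
# -1, -2, -3 (no sign change), so this simplex is sent to the vertex -e_1.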
http://arxiv.org/abs/2307.00605v1
20230702162253
Wave propagation in abstract dynamical system with boundary control
[ "M. I. Belishev" ]
math.DS
[ "math.DS", "math-ph", "math.MP" ]
[ [ August 1, 2023 ================== Let L_0 be a positive definite ope­ra­tor in a Hilbert space ℋ with the defect indexes n_±⩾ 1 and let { Ker L^*_0;Γ_1,Γ_2} be its canonical (by M.I.Vishik) boundary triple. The paper deals with an evolutionary dynamical system of the form u_tt+L_0^* u=0 in ℋ, t>0; u|_t=0=u_t|_t=0=0 in ℋ; Γ_1 u=f(t), t⩾ 0, where f is a boundary control (a Ker L^*_0-valued function of time), u=u^f(t) is a trajectory. Some of the general properties of such systems are considered. An abstract analog of the finiteness principle of wave propagation speed is revealed. Key words: symmetric semi-bounded operator, Vishik boundary triple, dynamic system with boundary control, finiteness of wave propagation speed. MSC: 35Lxx, 35L05, 35Q93, 47B25. Dedicated to the 85-th jubilee of A.S.Blagoveshchenskii § ABOUT THE PAPER ∙ A dynamical system with boundary control (DSBC) that we deal with, is determined by a symmetric semi-bounded operator with nonzero defect indexes. We are interested in most general properties of such systems. Motivation comes from a program of constructing a functional model of such operators (the so-called wave model: see <cit.>, <cit.>-<cit.>). The given paper develops the results <cit.> on the general properties of DSBC. Perhaps, most curious of new facts is that the finiteness principle of wave propagation speed (for short, FS principle), which is well known and holds in numerous applications, does have a relevant analog for abstract DSBC. ∙ The paper is dedicated to the jubilee of my teacher Aleksandr Sergeevich Blagoveshchenskii, one of the pioneers and creators of the dynamical inverse problems theory. At one time, he explained me the deepness and opportunities of the D'Alembert formula. It would not be an exaggeration to say that the given work is done in the manner and technique of Alexander Sergeevich. § OPERATOR L_0 §.§.§ Boundary triple ∙ As was noted above, DSBC is associated with a semi-bounded operator. The class of these operators that we deal with, is the following. We assume that L_0 is a closed densely defined symmetric positive definite operator in a Hilbert space ℋ with nonzero defect indexes; so that Dom L_0=ℋ; L_0⊂L_0^*; L_0⩾γ 𝕀, γ>0; 1⩽ n_+^L_0=n_-^L_0⩽∞ holds, where 𝕀 is the identity operator. Note that by virtue of n_±^L_0≠0 such an operator is necessarily unbounded. ∙ We denote 𝒦:= Ker L_0^* and use the (orthogonal) projection P in ℋ on 𝒦. Note that dim 𝒦 =n_±^L_0 holds. Let L be the extension of L_0 by Friedrichs: L_0⊂ L=L^*⊂ L^*_0, L⩾γ 𝕀, Ran L=ℋ. Its inverse L^-1 is a self-adjoint bounded operator in ℋ. The well-known decomposition by Vishik <cit.> is Dom L_0^*= Dom L_0 .+L^-1𝒦.+𝒦= Dom L.+𝒦; the latter equality is established in the framework of M.Krein's theory <cit.>. Thus, each y ∈ Dom L_0^* is uniquely represented in the form y=y_0+L^-1g+h =y'+h with some g, h∈𝒦 and y':=y_0+L^-1g∈ Dom L. The components are determined by y as follows: y'=L^-1L_0^* y, y_0=L^-1L_0^*(y-y'), h=y-y'-y_0. The operators Γ_1:=L^-1L_0^*-𝕀, Γ_2:=PL_0^*; Dom Γ_1,2= Dom L_0^* are called boundary operators. By definitions and (<ref>), Γ_1y=-h, Γ_2 y =g. Also, these definitions imply Ran Γ_1= Ran Γ_2=𝒦. Note that, in general, boundary operators may be unclosable; moreover, such a situation is typical in applications. However, if one endows Dom L_0^* with the graph-norm y^2_ graph=y^2+L_0^* y^2 then Γ_1,2 become continuous <cit.>. ∙ The relation (L_0^* u,v)-(u,L_0^* v)=(Γ_1u,Γ_2v)-(Γ_2u,Γ_1v), u,v∈ Dom L_0^* is valid (see, e.g., <cit.>). 
By operator theory terminology <cit.>, relations (<ref>) and (<ref>) mean that the collection {𝒦;Γ_1,Γ_2} constitutes the boundary triple of the operator L_0. The general boundary triple theory provides L_0=L_0^*↾[ Ker Γ_1∩ Ker Γ_2], L=L_0^*↾ Ker Γ_1 (see <cit.>, Chapter 7). ∙ A possible way to realize decomposition (<ref>) is to solve two "boundary value problems". Let y=y_0+L^-1g+h be the Vishik decomposition of y∈ Dom L_0^*. Then the elements h and g are uniquely determined by the relations L_0^* h=0; Γ_1 h =Γ_1 y. and L_0^*^2 w=0; Γ_1 w=0; Γ_2 w=Γ_2(y-h), respectively, where w:=L^-1g. Element h obeying the second relation in (<ref>), does exist due to (<ref>). It is unique. Indeed, if h' satisfies (<ref>) then ỹ:=h-h' obeys Γ_1 ỹ=0, i.e., ỹ∈ Dom L. The latter implies ỹ=0 by virtue of Dom L∩ Ker L_0^*={0} (see (<ref>)). Since L_0^*L_0^* L^-1g=L_0^* g=0, L^-1g∈ Dom L is valid (so that Γ_1L^-1g see (<ref>)=0 and Γ_2L^-1g=Γ_2(y-y_0-h)(<ref>)=Γ_2(y-h) hold), we see that w=L^-1g solves problem (<ref>). If w' also solves it, for w̃:=w-w' one has Γ_1w̃=Γ_2 w̃=0 that leads to w̃∈ Dom L_0 by virtue of (<ref>). Therefore, L_0^*w̃= L_0w̃∈ Ran L_0 and, hence, L_0^*w̃ Ker L_0^*. The latter makes L_0^*L_0^*w̃=0 possible only if L_0^*w̃=0. Since L_0^*w̃=L_0w̃=0, we arrive at w̃=0 by injectivity of L_0. Thus, to determine h and g in (<ref>), one can find h from (<ref>), solve (<ref>) and then get g=Lw. §.§.§ Example ∙ As an illustration, we consider the Laplace operator. Let (Ω,g) be a compact smooth[In the subsequent, smooth always means C^∞-smooth.] Riemannian manifold of dimension n⩾ 2 [the case Ω⊂ℝ^n is quite suitable for our goals.] with the smooth connected boundary Γ, let Δ be the Laplace-Beltrami differential operator in Ω. Let H^p(Ω), p=1,2, H^1_0(Ω)={y∈ H^1(Ω) | y|_Γ=0} and H^2_0(Ω)={y∈ H^2(Ω) | y|_Γ=∂_ν y|_Γ=0} be the Sobolev spaces (ν is the outward normal on Γ). We put ℋ:=L_2(Ω) and denote by Harm(Ω):={h∈ℋ | Δ h=0 in Ω∖Γ} the subspace of harmonic functions. The following is the well-known facts. The operator (minimal Laplacian) L_0:=-Δ↾ C^∞_0(Ω)=-Δ↾ H^2_0(Ω) is positive definite. Its adjoint (maximal Laplacian) is L_0^*=-Δ↾[H^2(Ω)+ Harm(Ω)], and 𝒦= Ker L_0^*= Harm(Ω) holds. The Friedrichs extension of L_0 is L=-Δ↾[H^2(Ω)∩ H^1_0(Ω)]. ∙ By (<ref>) and (<ref>), to describe how the boundary operators Γ_1,2 act, one needs to show how to find the harmonic functions h and g for a given y∈ Dom L_0^*. Since y_0 and L^-1g belong to H^1_0(Ω), we have y=h on Γ. Thus, the function h can be specified as the (unique) solution of the Dirichlet problem Δ h=0 in Ω∖Γ; h=y on Γ. To find the harmonic g, we recall that the summand y_0 in (<ref>) belongs to Dom L_0=H^2_0(Ω) and, hence, obeys ∂_ν y_0|_Γ=0. This implies ∂_ν L^-1g(<ref>)=∂_ν(y-y_0-h)=∂_ν(y-h). In the mean time, we have L^-1g∈ H^1_0(Ω) and L_0^*L_0^* L^-1g=L_0^* g=0. As a result, L^-1g obeys Δ^2 (L^-1g)=0 in Ω∖Γ; L^-1g=0, ∂_ν(L^-1g)=∂_ν(y-h) on Γ (h is already known). Solving this well-posed Cauchy problem for the biharmonic equation, we get L^-1g and then find g=Δ L^-1g. As is easy to recognize, (<ref>) and (<ref>) are some concrete versions of (<ref>) and (<ref>) respectively. We get rights to claim that L_0^* h=0 and L_0^*^2 w=0 are the abstract Laplace and biharmonic equations respectively. 
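A minimal one-dimensional illustration may help to fix ideas (a sketch only, not taken from the text; the formulas below are easily checked by hand): take ℋ=L_2(0,1) and L_0=-d^2/dx^2 with Dom L_0=H^2_0(0,1). Then
\[
\begin{aligned}
&\mathcal K=\operatorname{Ker}L_0^{*}=\{a+bx\},\qquad n_\pm^{L_0}=2,\qquad \operatorname{Dom}L=H^2(0,1)\cap H^1_0(0,1),\\
&\Gamma_1 y=-h,\qquad h(x)=y(0)(1-x)+y(1)\,x\quad\text{(the affine interpolant of the boundary values of }y\text{)},\\
&\Gamma_2 y=g=-w'',\qquad\text{where } w''''=0,\ \ w(0)=w(1)=0,\ \ w'(0)=c_0,\ \ w'(1)=c_1,\\
&c_0=y'(0)-\bigl(y(1)-y(0)\bigr),\qquad c_1=y'(1)-\bigl(y(1)-y(0)\bigr),\\
&\text{explicitly,}\quad g(x)=2\,(2c_0+c_1)-6\,(c_0+c_1)\,x .
\end{aligned}
\]
For instance, y=x^2 gives h(x)=x, Γ_1 y=-x, c_0=-1, c_1=1 and g=-2, in agreement with Γ_2 y=PL_0^* y=P(-y'')=-2; the corresponding Vishik decomposition is x^2=0+(x^2-x)+x.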
§ DYNAMICS DETERMINED BY OPERATOR L_0 §.§.§ System α ∙ The boundary triple, in turn, determines a dynamical system with boundary control (DSBC) of the form ü+L_0^*u = 0 in ℋ, t>0; u|_t=0=u̇|_t=0=0 in ℋ; Γ_1 u = f in 𝒦, t ⩾ 0, where (̇ ̇ ̇)̇:=d/dt; f=f(t) is a 𝒦-valued function of time (boundary control). By u=u^f(t) we denote the solution (wave). For short, we call (<ref>)–(<ref>) just system α; function u^f(·) is its trajectory. Introduce the class of smooth controls ℳ:={f∈ C^∞([0,∞);𝒦) | supp f⊂(0,∞)}, vanishing near t=0. As is shown in <cit.>, for each f∈ℳ there exists a unique classical solution u=u^f(t)∈ C^∞([0,∞);ℋ) (a smooth wave) obeying u^f(t)∈ Dom L_0^*, t⩾ 0 and represented in the form u^f(t)=-f(t)+L^-1 2∫_0^tsin [(t-s)L^1 2] f̈(s) ds, t⩾ 0. Integrating by parts, we get the equivalent representation u^f(t)=-f(t)+L^-1∫_0^t{1-cos [(t-s)L^1 2]} ...f(s) ds=-f(t)+u^f_L(t), t⩾ 0, where the summand u^f_L(t)∈ Dom L corresponds to the element y'∈ Dom L in decomposition (<ref>). The right hand side in (<ref>) is meaningful for any f∈ H^2_ loc([0,∞); 𝒦) vanishing near t=0. For such controls, it defines a (generalized) solution u^f to (<ref>)–(<ref>). The following is some of its properties. ∙ In what follows, we assume that all time functions are extended to t< 0 by zero. Let a map T_τ delay functions by the rule ( T_τ w)(t):=w(t-τ). Since the operator L_0^* which governs the evolution of system α, does not depend on time, the steady state relation u^ T_τ f(t)=( T_τ u^f)(t)=u^f(t-τ), t⩾ 0, τ>0 and its consequence u^ḟ(t)= u̇^f(t), u^f̈(t)= ü^f(t)(<ref>)=-L_0^*u^f(t), t⩾ 0 hold. ∙ The following is the control theory attributes of system α. The space of controls (inputs) ℱ:=L_2^ loc(ℝ_+;𝒦) is an external space. It contains a family of delayed controls ℱ_τ:=𝒯_τℱ, τ>0 and the smooth class ℳ, which satisfies d^p/dt^pℳ=ℳ, p=1,2,… and is locally dense in ℱ: {f|_[0,T] | f∈ℳ}=L_2([0,T];𝒦), T>0. The space ℋ is an internal space, the waves (states) u^f(t) are its elements. It contains an increasing family of reachable sets 𝒰^τ:={u^f(τ) | f∈ℳ}, τ⩾ 0 and the total reachable set 𝒰:= span {𝒰^τ | τ⩾ 0}. By (<ref>) and (<ref>) an invariance of reachable sets L_0^*𝒰^τ=𝒰^τ, τ⩾ 0; L_0^*𝒰=𝒰 holds. In system α, the input-state map is W^τ:ℱ→ℋ, Dom W^τ=ℳ, W^τ f:=u^f(τ), τ⩾ 0. For any τ>0, the map W^τ is closable. 1. Take an arbitrary ϕ∈ℋ. Applying L^-12 in (<ref>), we have (L^-12u^f(τ),ϕ)=-(L^-12f(τ),ϕ)+ (L^-1∫_0^τsin[(τ-s)L^12]f̈(s) ds,ϕ)=:-I+II. Since L=L^*, the second summand is of the form II= ∫_0^τ ds (f̈(s),ψ(s)) with ψ(s):=L^-1sin[(τ-s)L^12]ϕ obeying ψ̈(s)=-sin[(τ-s)L^12]ϕ∈ℋ. Then one can easily justify integration by parts: II=[(ḟ(s),L^-1sin[(τ-s)L^12]ϕ)+(f(s),L^-12cos[(τ-s)L^12]ϕ) ]|_s=0^s=τ+ +∫_0^τ (f(s),sin[(τ-s)L^12]ϕ) ds=(f(τ),L^-12ϕ)+ +∫_0^τ (f(s),sin[(τ-s)L^12]ϕ) ds=(L^-12f(τ),ϕ)+∫_0^τ (sin[(τ-s)L^12]f(s),ϕ) ds. Substituting II in the form of the latter sum to (<ref>) and taking into account the arbitrariness of ϕ, one represents L^-12u^f(τ)=∫_0^τsin[(τ-s)L^12]f(s) ds. 2. In view of (<ref>), the value of the wave u^f(τ) is determined by the values f|_0⩽ t⩽τ of control (does not depend on f|_t>τ). Let f|_[0,τ]→ 0 in L_2([0,τ];𝒦) and u^f(τ)→ y in ℋ. The limit passage in (<ref>) leads to L^-12y=0. The latter implies y=0. Thus, f→ 0 and W^τ f→ y yields y=0, i.e., W^τ is closable. Denote ℱ^τ:=L_2([0,τ];𝒦) and ℳ^τ:={f|_[0,τ] | f∈ℳ}. By (<ref>), the value u^f(τ) of the wave is determined by the values f|_0⩽ t⩽τ of control (does not depend on f|_t>τ). 
Therefore, the map W^τ:ℱ^τ→ℋ, Dom W^τ=ℳ, W^τ f:=u^f(τ) is well defined and closable for all τ>0. §.§.§ Example ∙ In the example chosen above as illustration, we have 𝒦= Harm (Ω), whereas the bijection Harm (Ω)∋ y ↔ y|_Γ does occur. Therefore, by (<ref>), system α can be written in the equivalent (traditional) form of an initial - boundary value problem u_tt-Δ u = 0 in (Ω∖Γ)×ℝ_+; u|_t=0=u_t|_t=0=0 in Ω; u = f on Γ×ℝ_+, which describes propagation of wave u=u^f(x,t) in Ω, the wave being initiated by the boundary source (control) f=f(γ,t). Let us recall some of its known properties. For smooth controls of the class ℳ:={f∈ C^∞(Γ×ℝ_+) | supp f⊂Γ×ℝ_+} vanishing near t=0, system (<ref>)–(<ref>) has a unique classical solution u^f∈ C^∞(Ω×ℝ_+). For all τ>0, the map W^τ:f|_[0,τ]↦ u^f(·,τ) is continuous from ℱ^τ:=L_2(Γ×[0,τ]) to ℋ <cit.>. Let Ω^r:={x∈Ω | dist(x,Γ)<r} be a metric neighborhood of the boundary of radius r>0; denote T_*:=inf {r>0 | Ω^r=Ω}. The relation supp u^f(·,t)⊂Ω^t, t>0 holds and shows that waves move from the boundary into the manifold with velocity ⩽ 1. At the moment t=T_* the waves fill up the whole Ω. In what follows we refer to these facts as a finiteness of wave propagation speed principle (FS principle). It corresponds to a hyperbolicity of the initial boundary value problem (<ref>)–(<ref>). ∙ Introduce the reachable sets 𝒰^t:={u^f(·,t) | f∈ℳ} and note the evident relation 𝒰^s⊂𝒰^t for s<t. As a consequence of (<ref>), we have the embedding 𝒰^τ⊂ℋ^τ:={y∈ℋ | supp y⊂Ω^τ}. A remarkable fact, which is interpreted as a local approximate boundary controllability of system α, is that this embedding is dense: the equality 𝒰^τ = ℋ^τ, τ>0 holds <cit.>, <cit.>. Since Ω is compact, for τ⩾ T_* relation (<ref>) implies 𝒰^τ=ℋ, so that system (<ref>)–(<ref>) is controllable for any moment T>T_*. Note in addition that, in the given Example, the family of the projectors P^τ in ℋ onto 𝒰^τ is continuous with respect to τ. However, in the system governed by the Maxwell equations, the relevant family may have infinite-dimensional breaks: see <cit.>. Thus, the continuity of 𝒰^τ is definitely not a general fact. §.§.§ Controllability of system α ∙ For the abstract system α of the form (<ref>)–(<ref>) ant its reachable sets 𝒰^τ={u^f(τ) | f∈ℳ}, to discuss property (<ref>) is meaningless because there is no analog of the subspaces ℋ^τ. Nevertheless, the question can be posed for the total reachable set 𝒰 as follows. We say that system α is controllable if the equality 𝒰 = ℋ holds. If (<ref>) is not valid, we say 𝒟:=ℋ⊖𝒰 to be a defect (unreachable) subspace. The question is on the conditions which provide (<ref>). The answer is the following. Let A be an operator in a Hilbert space ℋ and 𝒢⊂ℋ be a subspace. We say that 𝒢 is an invariant subspace of A if Dom A ∩𝒢=𝒢 and A( Dom A ∩𝒢)⊂𝒢 hold [see some comments to this definition in <cit.>], whereas operator A_𝒢:𝒢→𝒢, Dom A_𝒢= Dom A∩𝒢, A_𝒢 y:=Ay is called a part of A in 𝒢. A symmetric densely defined operator A is said to be completely non-self-adjoint if it has no (substantially) self-adjoint parts, i.e, there are no parts A_𝒢, which satisfy A_𝒢^*=A_𝒢. System α is determined by operator L_0. As is shown in <cit.>, it is controllable, i.e., (<ref>) holds, if and only if L_0 is completely non-self-adjoint. ∙ There exist dynamical systems (<ref>)–(<ref>), in which (<ref>) takes the form 𝒰^τ=ℋ for any τ>0. As example, one can mention the system on Ω⊂ℝ^n governed by the Euler-Bernoully equation u_tt+Δ^2u=0 and relevant boundary controls <cit.>. 
The following agreement excludes such cases from consideration as trivial ones. For system α, we assume that there are τ>0 such that 𝒰^τ+ϵ⊖𝒰^τ≠{0} holds for any ϵ>0. In other words, it is assumed that the family of the reachable subspaces 𝒰^τ does have the positive growth points. In the concrete systems realizing the trivial case, the FS principle is not in force: (<ref>) is broken and the waves propagate with infinite velocity. If the condition in Convention <ref> is violated, our further results remain true but become trivial. For the rest of the paper, Convention <ref> is accepted. §.§.§ System β ∙ Consider a dynamical system β of the form v̈+Lv = ψ in ℋ, t>0; v|_t=0=v̇|_t=0=0 in ℋ, controlled by a source ψ, which is an ℋ-valued function of time. By v=v^ψ(t) we denote the solution. For smooth sources of the class 𝒩:={ψ∈ C^∞([0,∞);ℋ) | supp ψ⊂(0,∞)} vanishing near t=0, the solution is unique, classical, smooth, is represented in the form v^ψ(t)=L^-1 2∫_0^tsin[(t-s)L^1 2] ψ(s) ds, t⩾ 0 and belongs to Dom L for any t. By (<ref>), the latter implies Γ_1 v^ψ(t)=0, t⩾0. The right hand side of (<ref>) makes sense for ψ∈ L^ loc_2([0,∞);ℋ) and is referred to as a generalized solution. If ψ∈ H^1_ loc([0,∞);ℋ) then integration by parts in (<ref>) provides v^ψ(t)=L^-1∫_0^t{𝕀-cos[(t-s)L^1 2]} ψ̇(s) ds, t⩾0. In this case, v^ψ(t)∈ Dom L holds and relation (<ref>) remains valid. ∙ As a partial case, we deal with the instantaneous sources ψ=δ(t)y, where y∈ℋ and δ(·) is the Dirac delta-function, and the corresponding solutions v=v^δ y(t)=:v^y(t) of the system v̈+Lv = 0 in ℋ, t>0; v|_t=0=0, v̇|_t=0=y in ℋ. The solution is represented in the form v^y(t)=L^-1 2[sin (t L^1 2)] y, t⩾0. A generalized solution v^y is well defined for any y∈ℋ. If y∈ Dom L^1 2 then v^y(t)∈ Dom L and property (<ref>) holds for v^y: Γ_1 v^y(t)=0, t⩾0. ∙ Let us consider the relations between systems α and β. For any T>0, the relation (u^f(T),y)=-∫_0^T(f(t),Γ_2 v^y(T-t)) dt, f∈ℳ, y∈ Dom L is valid. By the choice of controls and proper smoothness of the corresponding trajectories u^f and v^y, one can easily justify the following calculations: 0(<ref>)=∫_0^T(ü^f(t)+L_0^* u^f(t),v^y(T-t)) dt=∫_0^T(ü^f(t),v^y(T-t)) dt+ + ∫_0^T(L_0^* u^f(t),v^y(T-t)) dt=[(u̇^f(t),v^y(T-t))+(u^f(t),v̇^y(T-t))]|_t=0^t=T+ +∫_0^T(u^f(t),v̈^y(T-t)) dt+∫_0^T(L_0^* u^f(t),v^y(T-t)) dt= (<ref>), (<ref>)=(u^f(T),y)+∫_0^T(u^f(t),v̈^y(T-t)) dt+∫_0^T(L_0^* u^f(t),v^y(T-t)) dt= (<ref>)=(u^f(T),y)+∫_0^T(u^f(t),v̈^y(T-t)+L_0^* v^y(T-t)) dt+ +∫_0^T[(Γ_1 u^f(t),Γ_2 v^y(T-t))-(Γ_2 u^f(t),Γ_1 v^y(T-t))] dt= (<ref>), (<ref>), (<ref>)=(u^f(T),y)+∫_0^T(f(t),Γ_2 v^y(T-t)) dt (we also use L_0^*v^y=Lv^y by L⊂ L_0^*). Comparing the beginning with the end, we arrive at (<ref>). One more relation between trajectories of systems α and β is the following. Quite analogous calculations starting from the equality 0=∫_0^T(ü^f(t)+L_0^* u^f(t), v^ψ(T-t)) dt lead to the relation ∫_0^T(u^f(t),ψ(T-t)) dt=-∫_0^T(f(t),Γ_2v^ψ(T-t)) dt, f∈ℳ, ψ∈𝒩. ∙ Let us derive a consequence of (<ref>). We say that a source ψ acts from a subspace 𝒢⊂ℋ if ψ(t)∈𝒢 holds for all t. Fix a positive σ<T; let ψ act from 𝒰⊖𝒰^σ [Recall Convention <ref> !]. Also, let f∈ℱ_T-σ∩ℳ be a delayed control. For such a choice, by (<ref>) we have u^f(t)∈𝒰^σ for all 0⩽ t⩽ T that obeys (u^f(t),ψ(T-t))=0, 0⩽ t⩽ T. Hence, the left integral in (<ref>) vanishes and we obtain ∫_T-σ^T(f(t),Γ_2v^ψ(T-t)) dt=∫_0^σ(f(T-t),Γ_2v^ψ(t)) dt=0. 
As a result, by arbitrariness of f we conclude that for any smooth ψ acting from 𝒰⊖𝒰^σ the relation Γ_2 v^ψ(t)=0, 0⩽ t⩽σ, holds. It will be used later. Quite analogously, by the use of (<ref>) for system (<ref>)–(<ref>), the relation y∈[𝒰⊖𝒰^σ]∩ Dom L implies Γ_2 v^y(t)=0, 0⩽ t⩽σ is derived. § ABSTRACT FS PRINCIPLE As was already mentioned, to discuss property (<ref>) for abstract systems α and β is meaningless: there are no manifolds, domains, boundaries in them. However, a remarkable fact is that a relevant analog of FS principle for them does exist. ∙ At first, let us turn to the Example and clarify, which fact related to FS principle, we are going to reveal in the abstract case. The corresponding systems β is v_tt-Δ v = ψ in (Ω∖Γ)×ℝ_+; v|_t=0=v_t|_t=0=0 in Ω; v = 0 on Γ×ℝ_+. Fix 0<σ<τ<T_*. Assume that the source ψ acts from the subspace ℋ^τ⊖ℋ^σ, i.e., supp ψ(·,t)=Ω^τ∖Ω^σ, t>0 holds. Thus, the source is located in a `layer' Ω^τ∖Ω^σ, which is separated from the boundary Γ by distance σ, whereas Ω∖Ω^τ is a nonempty open set. In such a case, the wave v^ψ propagates in both directions from the layer with velocity 1. By the latter, at the moment t>0 it is located in the bigger layer Ω^τ+t∖Ω^σ-t. I terms of subspaces, this can be written as v^ψ(t)∈ℋ^τ+t⊖ℋ^σ-t. It is the property that has an abstract analog for system β. [t] ∙ The relevant analog is the following. For convenience in formulation, we put 𝒰^t|_t<0:={0} and Ψ:=L_2^ loc(ℝ_+;ℋ). Let 0<σ<τ and let a source ψ∈Ψ act from the subspace 𝒰^τ⊖𝒰^σ. Then the relation v^ψ(t)∈𝒰^τ+t⊖𝒰^σ-t is valid for all t>0. The proof consists of a few steps. Step 1. Here we derive an auxiliary relation. By C_s,t:={(ξ,η)∈ℝ^2_+ | 0⩽η⩽ t, s-t+η⩽ξ⩽ s+t-η} we denote a characteristic cone of the string equation u_tt-u_ss=0. Let f∈ℳ and ψ∈𝒩 be a control and a source in systems α and β. The relation (v^ψ(s),u^f(t))=-1 2∫_C_s,t[(Γ_2 v^ψ(ξ),f(η))+(ψ(ξ),u^f(η))] dξ dη, 0⩽ t⩽ s is valid. For the Blagoveshchenskii function b(s,t):=(v^ψ(s),u^f(t)), one has b_tt(s,t)-b_ss(s,t)=(v^ψ(s),ü^f(t))-(v̈^ψ(s),u^f(t))(<ref>), (<ref>)= =-(v^ψ(s),L_0^* u^f(t))+(L v^ψ(s)-ψ(s),u^f(t))by L⊂L_0^*= =-(v^ψ(s),L_0^* u^f(t))+(L_0^* v^ψ(s),u^f(t)) -(ψ(s),u^f(t))(<ref>)= =(Γ_1 v^ψ(s),Γ_2 u^f(t))-(Γ_2 v^ψ(s),Γ_1 u^f(t))-(ψ(s),u^f(t))(<ref>), (<ref>))= =-(Γ_2 v^ψ(s),f(t))-(ψ(s),u^f(t))=:F(s,t) in ℝ_+×ℝ_+. In the mean time, (<ref>) provides the zero Cauchy data: b|_t=0=b_t|_t=0=0 on the bottom [s-t,s+t]⊂ℝ_+ of the cone C_s,t. Integrating by D'Alembert formula, we get b(s,t)=-1 2∫_C_s,tF(ξ,η) dξ dη which is (<ref>). Now, fix (s,t) provided C_s,t⊂ C_σ 2,σ 2. By this choice, in the cone C_s,t we have Γ_2 v^ψ(ξ)|_ξ⩽σ(<ref>)=0 and (ψ(ξ),u^f(η))=0, the latter being valid in view of u^f(η)∈𝒰^η⊂𝒰^σ 2⊂𝒰^σ, whereas ψ(ξ) is orthogonal to 𝒰^σ. Thus, both summands under integral in (<ref>) vanish in the cone and we get (v^ψ(s),u^f(t))=0. Since f∈ℳ is arbitrary, the last equality means that v^ψ(s)𝒰^t holds. Keeping s fixed and varying t∈[0,σ-s] (until (s,t)∈ C_σ 2,σ 2 holds), we get v^ψ(s)𝒰^σ-s. Varying s in the admissible segment [0,σ 2], we arrive at v^ψ(s)∈𝒰⊖𝒰^σ-s, 0⩽ s⩽σ 2 . Step 2. In the above considerations, to extend the segment to [0,σ] is not possible since for s<t the bottom [s-t,s+t] of the cone C_t,s does not fit in ℝ_+. Therefore, we change the cone for C'_s,t:={(ξ,η)∈ℝ^2_+ | 0⩽ξ⩽ s, ξ-s+t⩽η⩽ -ξ+s+t} . Fix (s,t) provided C'_s,t⊂ C'_σ 2,σ 2. 
Repeating the same calculations as on Step 1, we arrive at the Cauchy problem for the string equation b_tt-b_ss=F(s,t), 0<s<t; b|_s=0=b_s|_s=0(<ref>)=0, t⩾ 0 for the same b and F as before. Integrating by D'Alembert, we get (v^ψ(s),u^f(t))=-1 2∫_C'_s,t[(Γ_2 v^ψ(ξ),f(η))+(ψ(ξ),u^f(η))] dξ dη, 0⩽ s⩽ t. By (<ref>) and orthogonality (ψ(ξ),u^f(η))=0 for all ξ<σ, the summands in the integral vanish and we obtain (u^f(t),v^ψ(s))=0 for (s,t)∈ C'_σ 2,σ 2. Keeping σ 2<t<σ fixed and extending s from 0 to σ-t, we conclude that (v^ψ(σ-t),u^f(t))=0 holds for 0⩽ t⩽σ. Since f∈ℳ is arbitrary, the latter obeys v^ψ(t)𝒰^σ-t, i.e., v^ψ(t)∈𝒰⊖𝒰^σ-t, σ 2⩽ t⩽σ . Putting (<ref>) and (<ref>) together, we obtain v^ψ(t)∈𝒰⊖𝒰^σ-t, 0⩽ t⩽σ and establish the first part of the Theorem for smooth ψ. Approximating ψ∈Ψ with smooth sources and using the continuity of the map ψ|_[0,t]→ v^ψ(t), we complete the proof of the first part. It remains to prove the relation v^ψ(t)∈𝒰^τ+t. Step 3. Let I^t:ℋ→ℋ, I^ty:=v^y(t) be a map that resolves problem (<ref>)–(<ref>). By (<ref>) we have I^t=L^-1 2sin [tL^1 2], so that I^t is a bounded self-adjoint operator in ℋ. Let us establish the following. The relation I^t𝒰^τ⊂𝒰^τ+t, τ>0, t>0 holds. Take an f∈ℳ and θ>0. By virtue (<ref>), for a source ψ∈Ψ acting from 𝒰⊖𝒰^τ+θ one has (v^ψ(τ+θ-t), u^f(t))=0, 0⩽ t⩽θ+τ. Putting t=τ, we have 0=(v^ψ(θ), u^f(τ))(<ref>)=(∫_0^θ L^-1 2sin[(θ-s)L^1 2]ψ(s) ds, u^f(θ) )= = ∫_0^θ(ψ(s),L^-1 2sin[(θ-s)L^1 2]u^f(τ) ) ds. By arbitrariness of ψ, the latter leads to L^-1 2sin[(θ-s)L^1 2]u^f(τ) 𝒰⊖𝒰^τ+θ, 0⩽ s⩽θ, that is equivalent to L^-1 2sin[(θ-s)L^1 2]u^f(τ)∈𝒰^τ+θ. Putting s=0 and θ=t, we get I^t u^f(τ)⊂𝒰^τ+t. Since the waves u^f(τ) are dense in 𝒰^τ, we arrive at (<ref>). Let a source ψ satisfy ψ(t)∈𝒰^τ for all t⩾ 0. Representing v^ψ(t)(<ref>)=∫_0^t [I^t-s ψ(s)] ds, t>0, and taking into account I^t-sψ(s)(<ref>)∈𝒰^τ+t for all s⩽ t, we conclude that v^ψ(t)∈𝒰^τ+t is valid and, thus, prove Theorem <ref>. The crucial observation that interior products of waves satisfy the string equation □ b=F is due to A.S.Blagoveshchenskii. It was it that made possible to develop a version of the BC-method for solving dynamical (time-domain) inverse problems <cit.>. Simple but productive trick, which consists in changing the roles of the spatial variable s and time t (with the replacement of the cone C_s,t by the cone C'_s,t) was also invented by Aleksandr Sergeevich <cit.>. § WAVE PARTS OF SYSTEMS AND OPERATORS §.§.§ Systems β_𝒟 and β_𝒰 ∙ Recall that system β is of the form v̈+Lv = ψ in ℋ, t>0; v|_t=0=v̇|_t=0=0 in ℋ, and its trajectory is v^ψ(t)=L^-1 2∫_0^tsin[(t-s)L^1 2] ψ(s) ds, t>0. As a partial case, we deal with the sources ψ=δ(t)y and the corresponding trajectory v=v^δ y=:v^y of the system v̈+Lv = 0 in ℋ, t>0; v|_t=0=0, v̇|_t=0=y in ℋ, which is represented by v^y(t)=L^-1 2sin[(t-s)L^1 2] y, t>0. ∙ Recall the decomposition ℋ=𝒰⊕𝒟, where 𝒰:= span {𝒰^t | t>0}. If ψ(t)∈𝒟 (ψ(t)∈𝒰) holds for t>0 then v^ψ(t)∈𝒟 (v^ψ(t)∈𝒰) is valid for all t>0. If y∈𝒟 (y∈𝒰) holds then v^y(t)∈𝒟 (v^y(t)∈𝒰) is valid for all t>0. 1. Let ψ(t)∈𝒟∩𝒩, t>0. Since ψ(t)𝒰 for t>0, relation (<ref>) easily imply Γ_2 v^ψ|_t>0=0. Hence, by (<ref>) we have (u^f(t),v^ψ(s))=0 for all s,t. Therefore, v^ψ(s)𝒰 holds for all s>0. By approximating, if necessary, the source ψ∈𝒟 with the sources of the class 𝒩, one cancels the restriction ψ(t)∈𝒟∩𝒩. As one can easily verify, the same is true for the sources ψ=δ(t) y with y∈𝒟: we have v^y(t)∈𝒟 for all t>0. 2. Let ψ(t)∈𝒰, t>0. Fix an arbitrary y∈𝒟. 
By (<ref>) we have (v^ψ(t),y)= ∫_0^t(L^-1 2sin[(t-s)L^1 2] ψ(s),y) ds= =∫_0^t(ψ(s),L^-1 2sin[(t-s)L^1 2] y) ds=∫_0^t(ψ(s),v^y(t-s)) ds= =0, t>0, the latter equality being valid due to relation v^y(t)∈𝒟 proved above. Thus, we arrive at v^ψ(t) y, i.e., v^ψ(t)∈𝒰, t>0. ∙ As a result, we conclude that system β evolves either in the subspace 𝒟 or in the subspace 𝒰, depending on the source ψ acting from 𝒟 or 𝒰 respectively. It means that β splits in two independent (noninteracting) systems β_𝒟 and β_𝒰, the second system sharing the common evolution space with DSBC α. If α is controllable, i.e., 𝒰=ℋ holds, then 𝒟={0} and system β_𝒟 is absent [It is the case in the Example.]. Recall that the latter occurs if and only if operator L_0, which determines all systems under consideration, is completely non-selfadjoint <cit.>. One can claim that system β_𝒟 is a part of system β uncontrollable (unobservable) from boundary. This picture is in full agreement with the general systems theory: see <cit.>, Chapter 10. §.§.§ Space and wave parts of L_0^* ∙ Fix T>0 and assume that operator L_0^* has a part in 𝒰^T. Recall that this means 𝒰^T∩ Dom L_0^*=𝒰^T and L_0^* [𝒰^T∩ Dom L_0^*]⊂𝒰^T. Simplifying the notation, we denote the part L_0^*_ 𝒰^T by L_0^*^T. This part is a closable operator in 𝒰^T and we preserve the same notation L_0^*^T for its closure. We say L_0^*^T to be a space part of L_0^* in 𝒰^T. In the mean time, the lineal set 𝒰^T of smooth waves is dense in 𝒰^T and invariant: L_0^*𝒰^T=𝒰^T holds (see (<ref>)). Therefore the operator L_0^*^T_u:𝒰^T→𝒰^T, Dom L_0^*^T_u=𝒰^T, L_0^*^T_u y:=L_0^* y is well defined, densely defined and closable in 𝒰^T. We preserve the same notation L_0^*^T_u for its closure and call it a wave part of L_0^* in 𝒰^T. As is evident, L_0^*^T_u⊂L_0^*^T holds but something more can be said about relations between these parts. In the following Lemma, by isomorphism we mean an injective, surjective, bounded, and boundedly invertible operator. Recall that 𝒦:= Ker L_0^*. Denote ℱ^T:=L_2([0,T];𝒦) and ℳ^T:={f|_[0,T] | f∈ℳ}. Assume that W^T is an isomorphism from ℱ^T to 𝒰^T and 𝒦∩𝒰^T={0} holds. Then the space and wave parts coincide: L_0^*^T=L_0^*^T_u holds. Choose a pair (y,L_0^*^T y)=(y,L_0^* y)∈ graph L_0^*^T. In view of (<ref>) and (<ref>), one can find a sequence of controls g_n ∈ℳ^T provided u^g_n(T)→L_0^* y in 𝒰^T. By isomorphism of W^T, this sequence has to converge: g_n→ g in ℱ^T. Representing uniquely g_n=-f̈_n, g=-f̈ with f_n∈ℳ^T, we have the convergence f_n→ f in ℱ^T, which implies u^f_n(T)→ u^f(T) in 𝒰^T. Along with the latter convergence, one has L_0^* u^f_n(T)(<ref>)=-ü^f_n(T)=u^-f̈_n(T)=u^g_n(T)→L_0^* y. As a result, we conclude that (u^f(T),L_0^* y)∈ graph L_0^*^T_u holds. In the mean time, L_0^*^T_u⊂L_0^*^T obeys graph L_0^*^T_u⊂ graph L_0^*^T. Hence, both pairs (y,L_0^* y) and (u^f(T),L_0^* y) belong to graph L_0^*^T. The latter follows to (y-u^f(T),0)∈ graph L_0^*^T, i.e., y-u^f(T)∈𝒦. In view of 𝒦∩𝒰^T={0} we arrive at y=u^f(T) and conclude that the graphs of the space and wave parts of L_0^* in 𝒰^T coincide, i.e., L_0^*^T=L_0^*^T_u does hold. ∙ The assumption on W^T to be an isomorphism is rather restrictive: for instance, it is invalid in the Example. However, analyzing the proof of Lemma <ref>, it is easy to remark that such an assumption can be relaxed as follows. It suffices to require the convergence of L_0^* u^f_n(T) to imply the convergence of u^f_n(T) in 𝒰^T, whereas the convergence of f_n in ℱ^T is not necessary. 
As can be shown, the latter holds in the Example for times T<T_*. In this regard, it is worth noting that the convergence of L_0^* u^f_n(T) implies the convergence of the summands u^f_n_L(T)∈ Dom L in representation (<ref>). This reflects a general fact: in (<ref>), if L_0^* y_n converges then y'_n=L^-1L_0^* y_n also converges. The counterexamples of L_0^*^T≠L_0^*^T_u are not known and a hope for the equality with no assumptions is still alive. §.§.§ Completeness of waves ∙ In system β one can introduce a 'source–state' map I^tψ:=v^ψ(t), t>0 for ψ∈ L^ loc_2(ℝ_+;ℋ) [It is used in <cit.> and called an isotony]. Fix a subspace 𝒜⊂ℋ and denote by Ψ_𝒜:= L^ loc_2(ℝ_+;𝒜) the space of sources acting from 𝒜. In this notation, the statement of Theorem <ref> takes the form I^tΨ_𝒰^τ⊖𝒰^σ⊂𝒰^τ+t⊖𝒰^σ-t, 0<σ<τ, t>0. In the mean time, in the Example, as well as in many other applications, a stronger relation occurs: not embedding but equality holds. It is interpreted as a completeness of waves in domains, which they fill up. In the abstract case, by analogy with applications, one may speak about completeness of waves in the filled subspaces. Below we show a result of this kind under some additional assumption. ∙ Let P^ϵ be the projection in 𝒰 onto 𝒰^ϵ; denote P^ϵ_:=𝕀-P^ϵ. Assume that there is a continuous (in norm) family of the bounded operators N^ϵ, 0⩽ϵ⩽ϵ_* such that N^0=𝕀; N^ϵ y∈ Dom L, P^ϵ_ N^ϵ y=P^ϵ_ y, y∈ Dom L^*_0 holds. Note that, by (<ref>), one has y-N^ϵ y∈𝒰^ϵ. In the Example, in capacity of N^ϵ one can take the multiplication by a smooth function χ^ϵ provided 0⩽χ^ϵ(·)⩽ 1, χ^ϵ|_Ω∖Ω^ϵ=1, χ^ϵ|_Γ=0. Parameter ϵ_* is chosen to provide the interior boundary of the subdomain Ω^ϵ_* to be smooth. By analogy to χ^ϵ we call N^ϵ a neutralizer. Let operator L_0 be such that the neutralizers N^ϵ, 0⩽ϵ⩽ϵ_* do exist and let operator L_0^* have a space part L_0^*^ϵ in each subspace 𝒰^ϵ. Then the relation I^tΨ_𝒰^τ=𝒰^τ+t, τ>0, t>0 is valid. (sketch) Fix 0<τ<T and take f∈ℳ^T. The corresponding waves u^f(T) constitute a dense set in 𝒰^T by definition of the latter. As is evident, the projections P^ϵ_ u^f(T) are dense in 𝒰^T⊖𝒰^ϵ. Loosely speaking, the idea of the proof is to represent u^f(T) as a wave produced by a relevant source F, which acts from the subspace 𝒰^τ. The wave u^f that satisfies (<ref>)–(<ref>), is also determined by the system ü+L_0^* u= δu̇^f(τ)+δ̇u^f(τ), τ<t<T; u|_t=τ=u̇|_t=τ=0; Γ_1 u= f, τ⩽ t⩽ T, where δ=δ(t) is the Dirac function. Taking ϵ<τ and representing u^f=N^ϵ u^f+ [u^f-N^ϵ u^f], we get the system β (with a shifted time) of the form N̈^̈ϵ̈ ̈ü^̈f̈+L N^ϵ u^f=F^ϵ, τ<t<T; N^ϵ u^f|_t=τ=Ṅ^̇ϵ̇ ̇u̇^̇ḟ|_t=τ=0 (we use L_0^* N^ϵ u^f=LN^ϵ u^f) with a source F^ϵ(t):=-[d^2/dt^2+L_0^*](u^f(t)-N^ϵ u^f(t))+ δu̇^f(τ)+δ̇u^f(τ), where u^f(t)-N^ϵ u^f(t)∈ Dom L_0^*∩𝒰^ϵ holds. By the latter, we have L_0^*[u^f(t)-N^ϵ u^f(t)]=L_0^*^ϵ[u^f(t)-N^ϵ u^f(t)]∈𝒰^ϵ⊂𝒰^τ, where L_0^* ϵ is the space part of L_0^* in 𝒰^ϵ. In the mean time, u^f(τ) and u̇^f(τ) belong to 𝒰^τ. So, the source F^ϵ does act from the subspace 𝒰^τ, its time of acting being equal to T-τ. By the latter, shifting time t→ t-τ in (<ref>)–(<ref>) and applying Theorem <ref>, we see that the source F^ϵ produces the wave v^F^ϵ(T-τ)=N^ϵ u^f(T). When f varies in ℳ^T, the projections P^ϵ_ N^ϵ u^f(T)=P^ϵ_ u^f(T) of such waves constitute a complete system in 𝒰^T⊖𝒰^ϵ. Tending ϵ→ 0, by (<ref>) we conclude that there is a sequence {F^ϵ} of the sources, which act from 𝒰^τ and provide v^F^ϵ(T-τ)→ u^f(T). Therefore, these sources produce a system of waves complete in 𝒰^T. 
Since τ and T are arbitrary, it is easy to see that what has been proved is equivalent to the equality (<ref>). To justify the formal operations with δ and δ̇, one needs to approximate them by a proper smooth regularizations: see, e.g, <cit.>. The idea to use a neutralizer comes from the Example, where its existence is guaranteed and do not require additional assumptions. §.§.§ One more abstract property Here is one more fact that takes place in the Example, which can be generalized. At first glance, it looks very specific but, as will be shown, does have an abstract analog. Recall that the Friedrichs extension L=-Δ of the minimal Laplacian is defined on Dom L=H^2(Ω)∩ H^1_0(Ω). Let y∈ Dom L and supp y⊂Ω∖Ω^τ for a positive τ<T_*, so that supp y is separated from the boundary Γ by the distance τ. In such a case, we have y|_Γ=∂_ν y|_Γ=0 and hence y∈ H^2_0(Ω) i.e., y∈ Dom L_0 holds. Let τ>0 satisfy 𝒰⊖𝒰^τ≠{0} and y∈ [𝒰⊖𝒰^τ]∩ Dom L hold. Then the relation y∈ Dom L_0 is valid. Recall that y(<ref>)∈ Ker Γ_1. Let T>τ. By y∈ Dom L and (<ref>), we have Γ_2 v^y|_0< t⩽τ=0. Since y∈ Dom L, one has v̇^y(t)(<ref>)=cos[tL^1 2] y=L^-1cos[tL^1 2] Ly, t⩾ 0. This implies v̇^y∈ C([0,T]; Dom L), where Dom L is endowed with the L-graph norm <cit.>. By corresponding continuity of Γ_1,2, we get Γ_2 v̇^y|_t=+0=Γ_2y=0. So, y∈ Ker Γ_1∩ Ker Γ_2, i.e., y(<ref>)∈ Dom L_0 does hold. §.§.§ A bit of philosophy A character and goal of this paper may be commented on as follows. In our opinion, working in specific branches of mathematical physics (like inverse problems), it is however reasonable to pay attention to abstractions. Let us refer to the authority of classicists. According to Van der Waerden, a maxima, which Emmy Noether adhered to in her work, claims that any interconnection between numbers, functions and operations becomes transparent, available for further generalization and productive only after that, as it is separated from any specific objects and is reduced to general terms. Systems α and β are the general terms. We try to follow the maxima. 99 Bel DAN'87 M.I.Belishev. An approach to multidimensional inverse problems for the wave equation. Dokl. Akad. Nauk SSSR, 297 (1987), 524–527. (Russian). English transl. in Soviet Math. Dokl., 36, 481484. B DSBC IP 2001 M.I.Belishev. Dynamical systems with boundary control: models and characterization of inverse data. Inverse Problems, 17: 659–682, 2001. B JOT M.I.Belishev. A unitary invariant of a semi-bounded operator in reconstruction of manifolds. Journal of Operator Theory, Volume 69 (2013), Issue 2, 299-326. B EACM M.I.Belishev. Boundary Control Method. Encyclopedia of Applied and Computational Mathematics, Volume no: 1, Pages: 142–146. DOI: 10.1007/978-3-540-70529-1. ISBN 978-3-540-70528-4 BD_DSBC M.I.Belishev, M.N.Demchenko. Dynamical system with boundary control associated with a symmetric semibounded operator. Journal of Mathematical Sciences, October 2013, Volume 194, Issue 1, pp 8-20. DOI: 10.1007/s10958-013-1501-8. BSim_1 M.I.Belishev, S.A.Simonov. Wave model of the Sturm-Liouville operator on the half-line. St. Petersburg Math. J., 29 (2018), no. 2, 227–248. BSim_2 M.I.Belishev, S.A.Simonov. A Wave Model of Metric Spaces. Functional Analysis and Its Applications, April 2019, Volume 53, Issue 2, pp 7985. doi.org/10.1134/S0016266319020011. BSim_3 Mat Sbor M. I. Belishev, S.A.Simonov. A Wave Model of Metric Space with Measure. Mat. Sbornik, 2019, to appear. BSim_4 M.I.Belishev, S.A.Simonov On evolutionary first-order dynamical system with boundary control. 
Zapiski Nauchnykh Seminarov POMI, 2019, 483, 41-54. (in Russian)
BelBlag M.I.Belishev, A.S.Blagoveschenskii. Dynamical Inverse Problems of Wave Theory. SPb State University, St-Petersburg, 1999 (in Russian).
BirSol M.Sh.Birman, M.Z.Solomak. Spectral Theory of Self-Adjoint Operators in Hilbert Space. Reidel Publishing Comp., 1987.
Blag A.S.Blagovestchenskii. On a local approach to solving the dynamical inverse problem for an inhomogeneous string. Trudy MIAN V.A. Steklova 115 (1971), 28-38 (in Russian).
Blag2 A.S.Blagovestchenskii. Inverse Problems of Wave Processes. VSP, Netherlands, 2001.
Demch M.N. Demchenko. On a partially isometric transform of divergence-free vector fields. J. Math. Sci., 166 (1) (2010) 11–22.
MMM V.F.Derkach, M.M.Malamud. Theory of symmetric operator extensions and boundary value problems. (in Russian) Kiïv, 2017. ISBN 966-02-2571, ISBN 978-966-02-8267-4 (v.104)
KFA R.Kalman, P.Falb, M.Arbib. Topics in Mathematical System Theory. New York: McGraw-Hill, 1969.
LTrDynamReported I.Lasiecka, R.Triggiani. Recent advances in regularity of second-order hyperbolic mixed problems, and applications. In Christopher K. R. T. Jones (ed.) et al., Dynamics reported. Expositions in dynamical systems, volume 3, pages 104–162. Berlin: Springer-Verlag, 1994.
LT I.Lasiecka and R.Triggiani. Exact Controllability of the Euler-Bernoulli Equation with Boundary Controls for Displacement and Moment. Journal of Mathematical Analysis and Applications, v. 146, no 1, 1990.
Tat D.Tataru. Unique continuation for solutions to PDE's: between Hormander's and Holmgren's theorem. Comm. PDE, 20 (1995), 855–884.
Vishik M.I.Vishik. On general boundary value problems for elliptic differential equations. Proceedings of Moscow Math. Society, 1 (1952), 187–246 (in Russian). English translation: Amer. Math. Soc. Transl. Ser. 2, 24 (1963), 107–172.
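The classical prototype behind the finiteness principle discussed above may be recalled for illustration (a standard textbook example, not part of the text): for the string on the half-line x>0 with zero Cauchy data and boundary control u(0,t)=f(t), f vanishing near t=0, the solution is the outgoing traveling wave
\[
u^f(x,t)=f(t-x)\,\theta(t-x)\qquad(\theta\ \text{the Heaviside function}),\qquad\text{so that}\qquad \operatorname{supp}u^f(\cdot\,,t)\subset[0,t],
\]
i.e., the waves fill exactly the metric neighborhood of the boundary of radius t, which is the concrete content of the inclusion supp u^f(·,t)⊂Ω^t with Ω^t=[0,t].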
http://arxiv.org/abs/2307.00274v1
20230701090712
Common Knowledge Learning for Generating Transferable Adversarial Examples
[ "Ruijie Yang", "Yuanfang Guo", "Junfu Wang", "Jiantao Zhou", "Yunhong Wang" ]
cs.LG
[ "cs.LG", "cs.CV" ]
School of Computer Science and Engineering, Beihang University Beijing China School of Computer Science and Engineering, Beihang University Beijing China School of Computer Science and Engineering, Beihang University Beijing China State Key Laboratory of Internet of Things for Smart City Department of Computer and Information Science, University of Macau Macau China School of Computer Science and Engineering, Beihang University Beijing China This paper focuses on an important type of black-box attacks, i.e., transfer-based adversarial attacks, where the adversary generates adversarial examples by a substitute (source) model and utilizes them to attack an unseen target model, without knowing any information about it. Existing methods tend to give unsatisfactory adversarial transferability when the source and target models are from different types of DNN architectures (e.g., ResNet-18 and Swin Transformer). In this paper, we observe that the above phenomenon is induced by the output inconsistency problem. To alleviate this problem while effectively utilizing the existing DNN models, we propose a common knowledge learning (CKL) framework to learn better network weights to generate adversarial examples with better transferability, under fixed network architectures. Specifically, to reduce the model-specific features and obtain better output distributions, we construct a multi-teacher framework, where the knowledge is distilled from different teacher architectures into one student network. By considering that the gradient of the input is usually utilized to generate adversarial examples, we impose constraints on the gradients between the student and teacher models, to further alleviate the output inconsistency problem and enhance the adversarial transferability. Extensive experiments demonstrate that our proposed work can significantly improve the adversarial transferability. Common Knowledge Learning for Generating Transferable Adversarial Examples Yunhong Wang August 1, 2023 ========================================================================== § INTRODUCTION Recent studies have shown that deep neural networks (DNNs), such as convolutional neural networks (CNNs), transformers, etc., are vulnerable to adversarial attacks, which add special yet imperceptible perturbations to benign data to deceive the models into making wrong predictions. This vulnerability poses a considerable threat to the safety of DNN-based systems, especially those applied in security-sensitive domains, such as autonomous driving and face-scan payment. Since different DNN architectures usually function differently, their corresponding vulnerabilities are also different. Therefore, existing adversarial attack techniques are usually specifically designed for different DNN architectures, to discover their respective safety threats. Due to privacy or copyright protection considerations, black-box attacks tend to be more applicable in real scenarios. In this paper, we focus on an extensively studied scenario of black-box attacks, i.e., the transfer-based adversarial attack, which assumes that the adversary does not have access to the target model. Instead, the attacker can train substitute models to generate adversarial examples to attack the target model. For convenience, we only consider image classification DNN models in this paper.
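To make the transfer-based setting described above concrete, the following is a minimal sketch of the standard pipeline: craft adversarial examples with a one-step gradient-sign attack on a substitute model and measure how often they fool an unseen target model. The sketch is illustrative only and is not the method proposed in this paper; the particular architectures, the use of a recent PyTorch/torchvision, and the omission of input normalization are simplifying assumptions.

import torch
import torch.nn.functional as F
import torchvision.models as models

# Substitute (source) model: the attacker can backpropagate through it.
# Target model: unseen by the attack, used only for evaluation.
# Both architecture choices are illustrative; inputs are assumed to lie in [0, 1].
substitute = models.resnet18(weights="IMAGENET1K_V1").eval()
target = models.swin_t(weights="IMAGENET1K_V1").eval()

def craft_fgsm(x, y, eps=8 / 255):
    """One-step L_inf attack crafted purely on the substitute model."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(substitute(x_adv), y)
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()   # ascend the substitute's loss
    return x_adv.clamp(0.0, 1.0).detach()

def transfer_success_rate(loader):
    """Fraction of crafted examples that the unseen target model misclassifies."""
    fooled, total = 0, 0
    for x, y in loader:
        x_adv = craft_fgsm(x, y)              # gradients of the substitute only
        with torch.no_grad():
            preds = target(x_adv).argmax(dim=1)
        fooled += (preds != y).sum().item()
        total += y.numel()
    return fooled / total

In this setup, a substitute and target from the same architectural family (e.g., two CNNs) typically yield a noticeably higher transfer success rate than a CNN-to-transformer pair, which is the output-inconsistency phenomenon analyzed in the following.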
For common CNN models, to enhance the transferability of adversarial examples, various techniques have been proposed, and they can be briefly classified into two categories according to their mechanisms, i.e., gradient modifications <cit.> and input transformations <cit.>. The former type of methods usually improves the gradient ascent process for adversarial attacks to prevent the adversarial examples from over-fitting to the source model. The latter type of methods usually manipulates input images with various transformations, which enables the generated adversarial perturbations to adapt to different input transformations. Consequently, these adversarial examples possess a higher probability of transferring to successfully attack the target unknown model. The recent success of vision transformers (ViT) has also prompted several studies on devising successful attacks on the ViT type of architectures.  <cit.> construct substitute model based attack methods for ViTs, according to their unique architectural components, such as the attention module and the classification token, to generate transferable adversarial examples. Currently, existing methods usually employ pre-trained classification models as the source (substitute) model (as well as the target model in the experiments) directly, for achieving transfer-based adversarial attacks. One of the core factors affecting the transferability of these adversarial attacks is the similarity between the source (substitute) model and the target model. If the source model and the target model are identical, the attack becomes a white-box attack. Then, the transferability is expected to be high, and the attack success rate is equivalent to that in the corresponding white-box setting. Intuitively, two models with similar architectures and similar weights tend to possess high transferability <cit.>. On the contrary, models with significantly different architectures and weights usually exhibit low transferability. For example, when we generate adversarial examples with ResNet-18 and test them on ViT-S, the attack success rate is 45.99%, which is lower than the transferability from ResNet-18 to Inception-v3 (62.87%). Since different network architectures and weights will induce different outputs, we believe that the low adversarial transferability is due to the output inconsistency problem, as depicted in Figure <ref>. As can be observed, even when each of the models gives the correct classification result, the output probabilities are still inconsistent. Besides, the inconsistency between two different CNN models is usually smaller than that between two models from different architectural categories, e.g., a CNN model and a transformer-based model. Apparently, this inconsistency is harmful to adversarial transferability, because typical adversarial attacks are usually designed to manipulate the target model's output probability and this inconsistency will increase the uncertainty of the outputs of the target model. To better describe this output inconsistency, KL divergence is employed to numerically represent it. As shown in Figure <ref>, higher output inconsistencies tend to induce lower transferability and vice versa. To alleviate the above problem, a straightforward solution is to construct a universal network architecture which possesses output distributions relatively similar to those of different types of DNN models. Unfortunately, this universal network architecture and its training strategy are both difficult to design and implement.
Besides, this solution is highly unlikely to effectively utilize the existing pre-defined DNN architectures, which are much more convenient to apply in real scenarios. To alleviate this output inconsistency problem and effectively utilize the existing pre-defined DNN models, in this paper, we propose a common knowledge learning (CKL) method for the substitute (source) model to learn better network weights to generate adversarial examples with better transferability, under fixed network architectures. Specifically, to reduce the model-specific features and obtain better output distributions, we adopt a multi-teacher approach, where the knowledge is distilled from different teacher architectures into one student network. By considering that the gradient of the input is usually utilized to generate adversarial examples, we impose constraints on the gradients between the student and teacher models. Since multiple teacher models may generate conflicting gradients, which will interfere with the optimization process, we adopt PCGrad <cit.> into our work to diminish the gradient conflicts of the teacher models. Our contributions are summarized as follows. * We analyze the relationship between adversarial transferability and the properties of the substitute (source) model, and observe that a substitute model with less output inconsistency with respect to the target model tends to possess better adversarial transferability. * To reduce the model-specific features and obtain better output distributions, we propose a common knowledge learning framework to distill multi-teacher knowledge into one single student network. * For generating adversarial examples with better transferability, we propose to learn the input gradients of the teacher models and utilize gradient projection to reduce the conflicts in the gradients of multiple teachers. * Extensive experiments on CIFAR10 and CIFAR100 demonstrate that our method is effective and can be easily integrated into transfer-based adversarial attack methods to significantly improve their attack performances. § RELATED WORK §.§ Adversarial Attacks Adversarial attacks were first proposed by  <cit.>. Subsequently, a large number of adversarial attack methods have been proposed, which are usually classified into two categories according to the adversary's knowledge <cit.>, i.e., white-box and black-box attacks. The black-box attacks can be further classified into query-based and transfer-based attacks. White-box attacks usually assume that the adversary can access all the necessary information, including the architecture, parameters and training strategy, of the target model. Query-based attacks usually assume that the adversary can obtain the outputs by querying the target model <cit.>. Transfer-based attacks generate adversarial examples without access to the target model, whose assumption is the closest to practice. Under such circumstances, the adversary usually exploits a substitute model to generate adversarial examples and utilizes the examples to deceive the target model <cit.>. Since our work focuses on the transfer-based scenario, we will introduce this attack scenario in detail in the next subsection. §.§ Transfer-based Attacks Since different DNN architectures usually function differently, existing transfer-based attack techniques are usually specifically designed for different DNN architectures. For CNN architectures, the attack approaches in transfer-based scenarios can mainly be classified into two categories, i.e., gradient modifications and input transformations.
For gradient modification based methods, <cit.> first proposes MI-FGSM to stabilize the update directions with a momentum term to improve the transferability of adversarial examples. <cit.> adopts the Nesterov accelerated gradients into the iterative attacks. <cit.> proposes an Adam iterative fast gradient tanh method (AI-FGSM) to generate adversarial examples with high transferability. Besides, <cit.> adopts the AdaBelief optimizer into the update of the gradients and constructs ABI-FGM to further boost the attack success rates of adversarial examples. Recently, <cit.> introduces variance tuning to further enhance the adversarial transferability of iterative gradient-based attack methods. In contrast, input transformation based methods usually apply various transformations to the input image in each iteration to prevent the attack method from overfitting to the substitute model. <cit.> presents a translation-invariant attack method, named TIM, by optimizing a perturbation over an ensemble of translated images. Inspired by data augmentations, <cit.> optimizes adversarial examples by adding an image cropping operation to each iteration of the perturbation generation process. Recently, Admix <cit.> calculates the gradient on the input image admixed with a small portion of each add-in image while using the original label, to craft adversarial examples with better transferability.  <cit.> improves adversarial transferability via an adversarial transformation network, which studies efficient image transformations to boost the transferability.  <cit.> proposes AutoMa to seek a strong model augmentation policy based on reinforcement learning. Since the above approaches are designed for CNNs, their performances degrade when their generated adversarial examples are directly input to other types of DNN architectures, such as vision transformers <cit.>, MLPMixer <cit.>, etc. Since the transformer-based architectures have also been widely applied in the image classification task, several works have also presented transfer-based adversarial attack methods when transformer-based architectures are employed as the source (substitute) model. <cit.> proposes a self-attention gradient attack (SAGA) to enhance the adversarial transferability. <cit.> introduces two novel strategies, i.e., self-ensemble and token refinement, to improve the adversarial transferability of vision transformers. Motivated by the observation that the gradients of attention in each head impair the generation of highly transferable adversarial examples,  <cit.> presents a pay no attention (PNA) attack, which ignores the backpropagated gradient from the attention branch. §.§ Knowledge Distillation A common technique for transferring knowledge from one model to another is knowledge distillation. The mainstream knowledge distillation algorithms can be classified into three categories, i.e., response-based, feature-based and relation-based methods <cit.>. The feature-based methods <cit.> exploit the outputs of intermediate layers in the teacher model to supervise the training of the student model. The relation-based methods <cit.> explore the relationships between different layers or data samples. These two types of methods require synchronized layers in both the teacher and student models. However, when the architectures of the teacher and student models are inconsistent, the selection of proper synchronized layers becomes difficult to achieve.
In contrast,  <cit.>, which is a response-based method, constrains the logit layers of the teacher and student models, and can be easily implemented for different tasks without the above-mentioned synchronization problem. Therefore,  <cit.> is adopted in our proposed work. § METHODOLOGY §.§ Notations Here, we define the notations which will be utilized in the rest of this paper. Let x∈𝒳⊆ R^C× W × H denote a clean image and let y denote its corresponding label, where 𝒳 is the image space. Let z=(o_1, o_2, ..., o_i, ..., o_K)∈ R^K be the output logits of a DNN model, where K is the number of classes. T_1(·), T_2(·), ...,T_n(·) are employed to denote the teacher networks. S(·) stands for the student network. Correspondingly, L_S(·), L_T_i(·) are utilized to denote the losses of the student and teacher models, respectively. The goal of a generated adversarial example x^* is to deceive the target DNN model, such that the prediction result of the target model F_tar(·) is not y, i.e., argmax_i F_tar(x^*)≠ y. Meanwhile, the adversarial example is usually desired to be similar to the original input, which is usually achieved by constraining the adversarial perturbation by the l_p norm, i.e., ‖x^*-x‖_p ≤ϵ, where ϵ is a predefined small constant. §.§ Overview According to Figures <ref> and <ref>, we can observe that the output inconsistency problem significantly affects the transferability of adversarial examples, i.e., high output inconsistency usually indicates low transferability, and vice versa. Since the output inconsistency within each type of DNN architectures is usually less than that of the cross-architecture models, the adversarial examples generated based on the substitute model from one type of DNN architectures (e.g. CNNs) usually give relatively poor attack performance on the target models from other types of DNN architectures (e.g. ViT, MLPMixer). The straightforward solution to alleviate the output inconsistency problem, i.e., designing a new universal network architecture and its training strategy, is quite difficult, inefficient and inconvenient for real scenarios. To alleviate the aforementioned problems, in this paper, we propose a common knowledge learning (CKL) framework, which distills the knowledge of multiple teacher models with different architectures into a single student model, to obtain better substitute models. The overall framework is shown in Figure <ref>. Firstly, we select teacher models from different types of DNN architectures. The student model will learn from their outputs to reduce the model-specific features and obtain common (model-agnostic) features, to alleviate the output inconsistency problem. Since the input gradient is always utilized in the typical adversarial attack process, the student model will also learn from the input gradients of the teacher models to further promote the transferability of the generated adversarial examples. After training the student model, in the test stage, this student model will be utilized as the source (substitute) model to generate adversarial examples. §.§ Common Knowledge Distillation As can be observed from Figure <ref>, when the same input is fed into different models, the output probabilities are quite different, which actually reveals that there exist feature preferences in deep models. Apparently, the output inconsistency problem is induced by these model-specific features, because these features, which are considered to be distinctive to one model, may not be distinctive enough to others.
Under such circumstances, when an adversarial example is misclassified by the source model to the other class ŷ(ŷ≠ y), it contains certain manipulated features which are distinctive to the source model. However, if these manipulated features are not distinctive enough to the target model, the adversarial example may not be able to deceive the target model. According to the analysis above, it is crucial to identify and emphasize the common (model-agnostic) features among different DNN models in the substitute model, such that when these model-agnostic features are manipulated to generate the adversarial examples, the target model will possess a higher probability of being deceived, i.e., the adversarial transferability will be improved. Therefore, we construct a multi-teacher knowledge distillation method to force the student model to learn and emphasize the common features from various DNN models. Since different DNN models usually possess different architectures, we constrain the model outputs between the student and teacher models, by adopting  <cit.>. Specifically, the KL divergence is exploited to measure the output discrepancy between each teacher model T_i(·) and the student model S(·), which is formulated as KLdiv(T_i(x), S(x)) = ∑_k=1^K S(x)_k · log( S(x)_k / T_i(x)_k ), where K represents the number of classes. To jointly utilize all the teacher models, the knowledge distillation (KD) loss L_KD is defined as L_KD = Σ_i=1^N KLdiv(T_i(x), S(x)). §.§ Gradient Distillation Since the input gradients are commonly utilized in the mainstream adversarial attack methods, such as FGSM, MIM, DIM, TIM, VNI-FGSM, etc., if two networks F(x) and G(x) satisfy ∇_x L_F(x)=∇_x L_G(x), adversarial examples generated by these methods will be identical when either of the two networks is employed as the source model. Additionally, if ∇_x L_F(x)=∇_x L_G(x), the losses of F(x) and G(x) differ by at most a constant, i.e., there exists a constant C such that L_F(x) = L_G(x) + C. Since the outputs of two models are more likely to be inconsistent when their losses are different, if the input gradients of two models are similar, these two models tend to possess less output inconsistency. Although this assumption cannot be exactly satisfied in real scenarios, it can still be useful in generating transferable adversarial examples. Therefore, we constrain the student model to learn the input gradients from the teacher networks, to further improve the adversarial transferability. Since our framework will utilize multiple teacher networks to teach one student model, it is vital to design suitable approaches to learn multiple input gradients. Under such circumstances, a simple solution is to design a multi-objective optimization problem which minimizes the distances between the input gradients of the student model and each teacher model. This optimization problem can be simplified by minimizing the distance between the input gradient of the student model and the averaged input gradients of the teacher models (which can be regarded as a representative value among all the input gradients of the teacher models), as min ‖∇_x L_S(x) - g̅(x)‖_2^2, where g̅(x)= Σ_i=1^N g_i(x) and g_i(x) = ∇_x L_T_i(x). For convenience, we let g_i denote g_i(x) and let g̅ denote g̅(x) in the rest of this section, when the input x does not need to be emphasized. Unfortunately, there exist conflicting gradients among them, which is similar to the gradient conflict problem in multi-task learning <cit.>.
This gradient conflict problem means that there exists a j satisfying g_j ·g̅ < 0. If ∇_x L_S(x) gradually moves closer to g̅, this gradient conflict will actually move ∇_x L_S(x) further away from g_j, which is the input gradient of the j-th teacher model. To address this issue, inspired by the PCGrad <cit.> method, we adjust our optimization objective function by projecting one of the two conflicting gradients onto the normal plane of the other gradient. Specifically, when we have two conflicting gradients, g_i and g_j, we will replace g_i with g_i = g_i - ( (g_i· g_j)/(g_j· g_j) ) · g_j. This replacement step is performed for all the gradients. After the replacement step, we calculate d(x)=Σ_i=1^N g_i(x) as the ground truth for the student model to learn. Then, the gradient objective function becomes L_Grad = ‖∇_x L_S(x) - d(x)‖_2^2. By combining Eq. <ref> and Eq. <ref>, the final loss of our common knowledge learning (CKL) for training the student network can be obtained as L = L_KD + λ· L_Grad, where λ is the hyperparameter employed to balance the two loss terms. §.§ Generating Adversarial Examples with CKL After the training process, we utilize the trained student model (S) as the source (substitute) model to generate adversarial examples. Our framework can be easily combined with the existing transfer-based adversarial attack methods. For demonstration, here we leverage DI-FGSM <cit.> as an example to explain the adversarial example generation process. Let CE(·) denote the commonly utilized Cross Entropy loss. Let φ (·) denote the input transformations, i.e., resizing and padding <cit.>. We set x_0 = x. Then, the adversarial example generation process can be formulated as L(x_t) = CE(S(φ (x_t)), y), g_t = ∇_{x_t} L(x_t), m_{t+1} = μ· m_t + g_t/‖g_t‖_1, x_{t+1} = Clip_{x, ϵ}(x_t + α· sign(m_{t+1})), where ϵ is a predefined small constant to constrain the maximum magnitude of the generated adversarial perturbation. Clip_{x, ϵ}(·) forces the modified value to stay inside the L_∞ ball ({x_t | ‖x_t-x‖_∞≤ϵ}). α is the step size and μ is the momentum factor. This process terminates when t reaches the maximum number of iterations M, and x_M is the finally generated adversarial example. § EXPERIMENTS In this section, we first introduce the necessary information for our experiments. Then, we present the non-targeted results in Sec. <ref> and Sec. <ref>. Next, in Sec. <ref>, we conduct the experiments in the targeted attack scenario. Finally, we conduct an ablation study on the effects of our proposed modules in Sec. <ref>. §.§ Experimental Settings Datasets. Two widely used classification datasets, i.e., CIFAR10 and CIFAR100, are employed in our experiments. Both datasets consist of images of size 32× 32 × 3. For each dataset, 50,000 images are selected as the training set for training the student model, and 10,000 images are selected as the testing set for generating adversarial examples. Networks. Nine networks with different types of DNN architectures are employed as either source model or target model, including ResNets <cit.>, VGG-16 <cit.>, DenseNet-121 <cit.>, Inception-v3 <cit.>, MobileNet-v2 <cit.>, ViT-S <cit.>, Swin Transformers <cit.>, MLPMixer <cit.>, and ConvMixer <cit.>. To learn common knowledge from different types of DNN architectures, ResNet-50, Inception-v3, Swin-T, and MLPMixer are employed as the teacher models. Baselines. Our method is compared with several attacks, including MI-FGSM <cit.>, DI-FGSM <cit.>, VNI-FGSM <cit.>, and ILA-DA <cit.> (a sketch of how the trained student model is plugged into such an attack is given below).
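To make the attack-side integration concrete, the following is a minimal sketch of how the trained CKL student model can drive a momentum, diverse-input attack of the kind formulated above. It assumes PyTorch; the helper names (diverse_input, di_fgsm) and default values are illustrative assumptions rather than the authors' released code.

```python
import torch
import torch.nn.functional as F

def diverse_input(x, low=28, high=32, out=32, p=0.5):
    # Random resize-and-pad transformation phi(.) used by DI-FGSM (illustrative parameters).
    if torch.rand(1).item() > p:
        return x
    rnd = int(torch.randint(low, high, (1,)).item())
    x_r = F.interpolate(x, size=(rnd, rnd), mode="nearest")
    pad = out - rnd
    left = int(torch.randint(0, pad + 1, (1,)).item())
    top = int(torch.randint(0, pad + 1, (1,)).item())
    return F.pad(x_r, (left, pad - left, top, pad - top), value=0.0)

def di_fgsm(student, x, y, eps=8/255, alpha=1/255, steps=30, mu=1.0):
    # Momentum + diverse-input update driven by the CKL student model S(.):
    #   g_t = grad_x L(x_t),  m_{t+1} = mu*m_t + g_t/||g_t||_1,
    #   x_{t+1} = Clip_{x,eps}(x_t + alpha*sign(m_{t+1}))
    x_adv, m = x.clone().detach(), torch.zeros_like(x)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(student(diverse_input(x_adv)), y)
        g = torch.autograd.grad(loss, x_adv)[0]
        m = mu * m + g / (g.abs().sum(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_adv = x_adv.detach() + alpha * m.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```

Replacing the gradient step above with the corresponding update of MI-FGSM or VNI-FGSM leaves the rest of the loop unchanged, which is why CKL can be combined with these baselines without modification.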
The number of attack iterations M is set to 30 and the step size is set to 1/255 in all the experiments. Implementation Details. We employ the training set to train the student model (S) and the testing set for generating the adversarial examples. In the training process, the momentum SGD optimizer is employed, with an initial learning rate lr=0.1 (annealed down to zero following a cosine schedule), momentum 0.9, and weight decay 0.0003. The maximum epoch number is 600. In the attack stage, we constrain the adversarial example and the original input by the l_∞ ball with ϵ = 8/255, i.e., ‖x^*-x‖_∞≤ 8/255. For DI-FGSM <cit.>, each input benign image is randomly resized to rnd × rnd × 3, with rnd ∈ [28, 32), and then padded to the size 32 × 32 × 3 in a random manner. Evaluation Metric. The attack success rate (ASR) is employed to evaluate the attack performance. It is defined as the probability that the target model is fooled by the generated adversarial examples, i.e., ASR = 1 - #{correct samples}/#{total samples}, where # denotes the number of elements in the set. §.§ Non-targeted Attack Results on CIFAR10 The attack success rates (ASR) of the non-targeted attack on CIFAR10 are reported in Table <ref>. Note that the elements in the first row and column represent the target and source models, respectively. We compare our method with MI-FGSM, DI-FGSM, and VNI-FGSM, which are abbreviated as MI, DI, and VNI, respectively. The comparison with ILA-DA <cit.> is provided in the supplementary material due to the space limit. The original methods generate adversarial examples by directly employing the pre-trained models. Meanwhile, our adversarial examples are generated by the student model, which is trained by our CKL framework. As can be observed, our CKL can give significant improvements compared to the corresponding baseline methods, which proves the effectiveness of our proposed work for adversarial transferability. Transferability to the unseen models. Note that ResNet-50, Inception-v3, Swin Transformer, and MLPMixer are employed as teacher models and utilized to train the student models. As can be observed, when these four models are selected as the target models, the results with our CKL framework are significantly improved, compared to their corresponding baselines. To better verify the effectiveness of our CKL, we also employ the unseen models, e.g., VGG-16, DenseNet-121, and ViT-S, for evaluations, and our CKL can also achieve significant improvements. For example, when the source model is ResNet-18 and the target model is ViT-S, our method can obtain up to 25% gains. This phenomenon indicates that our CKL framework can learn effective common knowledge from the teacher models and generalize it to the unseen models. Transferability to the cross-architecture models. The cross-architecture transferability is usually a challenging problem for the baseline attack methods, as can be observed from the results. For example, when the source model is selected as ViT-S, the transferability of the correspondingly generated adversarial examples to DenseNet-121 is relatively low, i.e., only 23.26% for DI-FGSM. In contrast, by integrating our CKL framework, the attack transferability can be largely improved, due to the common knowledge learning at the training stage. §.§ Non-targeted Attack Results on CIFAR100 For a better assessment of our proposed work, we further validate our CKL method on CIFAR100 and the results are shown in Table <ref>. The experimental setups are identical to those in Section <ref>.
DI-FGSM and VNI-FGSM are employed as the baseline methods. As can be observed, our method achieves consistent improvements on the CIFAR100 dataset, regardless of the attack method and the source model. Besides, as shown in the last column of Table <ref>, which reports the averaged ASRs over the test models, our CKL method can improve the averaged value by at least 7 percentage points, compared to the corresponding baseline methods. In addition, for the cross-architecture transferability, our method usually gives an improvement of more than 10 percentage points. For example, when the source model is ResNet-50 and the target model is MLPMixer, `DI+CKL' outperforms `DI' by up to 18.19% and `VNI+CKL' outperforms `VNI' by up to 18.37%. These results further verify the effectiveness of our CKL method. §.§ Targeted Attack Results on CIFAR10 Here, we evaluate the effectiveness of our proposed method on targeted adversarial attacks. The targeted attack requires the target model to classify the adversarial examples into a pre-specified class t (t≠ y), while the non-targeted attack only requires the model to make a wrong prediction. Thus, the targeted attack is indeed a more challenging problem. Note that the targeted attack results on CIFAR100 are provided in the supplementary material due to limited space. To evaluate the performance of our method, we employ two baseline methods, i.e., VNI-FGSM <cit.> and the Logit attack <cit.>. Following <cit.>, we set the maximum number of iterations to 300, the step size to 2/255 and ϵ to 8/255. We randomly generate a target label t (t≠ y) for each data pair (x, y). We utilize the testing set to generate adversarial examples and employ ResNet-18 and Swin Transformer as the source models. The generated adversarial examples are tested on VGG-16, DenseNet-121, ConvMixer, and ViT-S, which do not overlap with the teacher models. Note that for the targeted attack, the attack success rate tASR is computed as tASR = #{x∈𝒳' | argmax_i F_tar(x)=t }/#{x∈𝒳'}, where F_tar(x) denotes the output class of the target model and 𝒳' represents the adversarial example set. As can be observed, the tASR scores, as shown in Table <ref>, are usually significantly lower than the corresponding ASR scores, as shown in Table <ref>, under the same settings. Besides, we can observe that the cross-architecture transfer attack usually leads to lower tASR values, compared to attacking a target model from the same type of DNN architecture. For example, when VNI is employed as the attack method and ResNet-18 is utilized as its source model, the tASR score on ConvMixer is only 27.01%, because the distinctive features of ResNet-18 and ConvMixer tend to be different. In contrast, when our CKL framework is integrated, the corresponding results obtain large gains, e.g., up to a 22.45% improvement for the above example. This phenomenon also indicates that our CKL framework can enable the student (substitute) model to learn common knowledge from multiple teacher models, which significantly improves the adversarial transferability. §.§ Ablation Study Input Gradient Distillation Scheme. Here, the effectiveness of our input gradient distillation scheme is validated. For better comparisons, we first introduce several variants of the objective function for learning the input gradients. (i) The student model is trained without gradient learning, i.e., it only employs Eq. <ref> as the objective function, which is denoted as `w/o teacher gradients'.
(ii) The objective function is replaced by the averaged value of the multiple teacher models' gradients, i.e., ‖∇_x L_S(x) - g̅(x)‖_2^2, where g̅(x)= Σ_i=1^N g_i(x), which is denoted as `average of teacher gradients'. (iii) The gradient objective function is replaced by the maximum input gradient value of the teacher models. Considering that the positive and negative signs of the gradients do not affect the final results, we select the maximum absolute value of the gradients. This objective function is set to ‖∇_x L_S(x) - g_max(x)‖_2^2, where g_max(x)=g_i(x) with |g_i(x)|=max_j |g_j(x)|. This variant is denoted as `max of teacher gradients'. Here, ResNet-18 is employed as the source model. The teacher models are identical to those in Section <ref>. As can be observed in Table <ref>, learning the input gradients from the teacher models is effective. Moreover, our method is more effective than these variants. Effects of λ. Here, we study the effects of the hyperparameter λ in Eq. <ref>. To assess its impact, we set λ= 1, 5, 10, 50, 100, 500, 1000, 2000. ResNet-18 is employed as the source model and the target models include VGG-16, ResNet-18, ResNet-50, DenseNet-121, Inception-v3, ConvMixer, MLPMixer, Swin-T, and ViT-S. The results are shown in Figure <ref>. As can be observed, when the value of λ is small, the objective function in Eq. <ref> is dominated by the first term, i.e., the objective function of knowledge distillation, and the result remains essentially unchanged. As λ increases, the second term, i.e., the objective function of input gradient distillation, gradually takes effect, and thus the performance gradually increases. However, when λ is relatively large, e.g., λ=1000, the attack success rate will decline. Thus, to achieve a good balance between the knowledge distillation and input gradient distillation objectives on all the models, we select λ=500 in our experiments. Selection of the Teacher Models. To learn common knowledge from different types of DNN architectures, ResNet-50, Inception-v3, Swin-T, and MLPMixer are employed as the teacher models in our experiments. Here, we study the effects of different selections of the teacher models, i.e., (a) 4 CNN models, including ResNet-18, ResNet-50, Inception-v3, and VGG-16; (b) 2 CNN models and 2 non-CNN models, including ResNet-50, Inception-v3, Swin-T, and MLPMixer; (c) 4 non-CNN models, including Swin-T, MLPMixer, ConvMixer, and ViT-S. For convenience, ResNet-18 is employed as the student (source) model. The results are shown in Table <ref>, where RN-18, RN-50 and DN-121 are the abbreviations of ResNet-18, ResNet-50 and DenseNet-121, respectively. As can be observed, when the teacher models are all selected from the non-CNN models, the adversarial transferability to the non-CNN models is relatively high while that to the CNN models is relatively low, because the student model learns more bias from the non-CNN models. Besides, we observe an interesting phenomenon that when all the teacher models are selected from CNNs, the attack success rates on the target CNN models are actually not the best. The best results are obtained by selecting 2 CNN and 2 non-CNN models as the teacher models, which implies that learning from both the CNN and non-CNN models is more effective, even when attacking the CNN models. § CONCLUSION In this paper, we observe that the output inconsistency problem significantly affects the transferability of adversarial examples.
To alleviate this problem while effectively utilizing the existing DNN models, we propose a common knowledge learning (CKL) framework, which distills the knowledge of multiple teacher models with different architectures into a single student model, to obtain better substitute models. Specifically, to emphasize the model-agnostic features, the student model is required to learn the outputs from multiple teacher models. To further reduce the output inconsistencies of models and enhance the adversarial transferability, we propose an input gradient distillation scheme for the student model. Extensive experiments on CIFAR10 and CIFAR100 have demonstrated the superiority of our proposed work. In the appendix, we first verify that conflicting gradients do exist when multiple teachers are employed, in Section <ref>. Then, we compare our method with ILA-DA in Section <ref>, to show that our method can also function well with intermediate level based attack methods. In Section <ref>, we conduct experiments with the targeted attack settings on CIFAR100. At last, in Section <ref>, we compare our method with the ensemble attack, which is a commonly used technique that combines multiple substitute models as the source model in adversarial attacks. § CONFLICTING GRADIENT PHENOMENON AMONG DEEP MODELS In our CKL method, we adopt PCGrad <cit.> to alleviate the conflicting gradient problem. In this section, we experimentally verify that the conflicting gradient problem does exist. Conflicting gradients are defined by <cit.> as presented in Definition 1. Definition 1. Two gradients g_i and g_j are said to be conflicting when g_i · g_j < 0. To verify that the input gradients from different deep models do conflict, we employ the CIFAR10 testing set to compute ∇_x L_M_i(x) ·∇_x L_M_j(x), where x denotes the input image, L_M_i(x) denotes the loss of the i-th model and ∇_x L_M_i(x) is the corresponding input gradient. We define R_ij as the ratio of conflicting gradients between M_i and M_j, which is formulated as R_ij = #{x∈𝒳 | ∇_x L_M_i(x) ·∇_x L_M_j(x)<0 }/#{x∈𝒳}. Here, # denotes the number of elements in the set and 𝒳 is the dataset, e.g., the CIFAR10 testing set in our experiments. We employ VGG-16, ResNet-18, ResNet-50, DenseNet-121, MobileNet-v2, Inception-v3, ResNet-34, ConvMixer, MLPMixer and Swin Transformer to conduct the experiment. The results of R_ij are shown in Figure <ref>. We observe that the conflicting gradient problem is a common phenomenon between different deep models from either different or the same types of DNN architectures. Typically, a higher value indicates that there are more conflicting gradients between the two models. Besides, the ratio of conflicting gradients tends to be higher for two models from two different DNN architectures. Taking VGG-16 as an example, the ratios of conflicting gradients between itself and the non-CNN models, i.e., ConvMixer, MLPMixer and Swin Transformer, are 0.45, 0.52 and 0.50, respectively, which are usually higher than those between VGG-16 and other CNN models. § COMPARISON WITH ILA-DA To further demonstrate the effectiveness of our CKL method, we compare it with the SOTA intermediate level based attack method, i.e., ILA-DA <cit.>. Since ILA-DA requires a pre-specified intermediate layer to obtain the feature map, it cannot directly employ the Transformer-based models as its source model. Therefore, we respectively utilize VGG-16, ResNet-18 and ResNet-50 as the source model to generate adversarial examples.
Target models include both CNN and non-CNN models. The results are shown in Table <ref>. As can be observed, our CKL method can consistently improve ILA-DA's performance, regardless of the target model. The average improvement of the ASR results is more than 18%, which further proves the effectiveness of our CKL method when integrated into intermediate level based attacks. § TARGETED ATTACK ON CIFAR100 We conduct targeted adversarial attack experiments on the CIFAR100 dataset. VNI-FGSM and the Logit attack are employed as our baselines. We set the maximum perturbation to ϵ=8/255 and the maximum number of iterations to 300. The step size is set to 2/255. We report the targeted attack success rate (tASR) in Table <ref>. The first column introduces the source models and the first row presents the target models. `Average' represents the average tASR values over all the target models. As can be observed, the tASR scores on CIFAR100 are usually lower than the corresponding results on CIFAR10, as shown in Table 3 of the manuscript, which implies that the targeted attack on CIFAR100 is a more complicated problem. Apparently, our CKL method still significantly outperforms the baseline methods. § COMPARISON WITH ENSEMBLE ATTACK The ensemble attack is a commonly used technique to generate adversarial examples with better transferability by utilizing multiple source models. In contrast, our CKL framework distills the knowledge of multiple teacher models into one single student model. Here, we compare our CKL method with the ensemble strategy. MI-FGSM is employed as the attack method. ResNet-50, Inception-v3, Swin-T, and MLPMixer are utilized as the teacher models, which are also the substitute models for the ensemble attack. We conduct this experiment on the CIFAR10 testing set. The ensemble attack is achieved by utilizing the averaged value of the four outputs to generate adversarial examples. The results are shown in Table <ref>. As can be observed, if the target model is one of the teacher (known) models, the ensemble attack gives better performance. Meanwhile, when the target model is unseen, our CKL method obviously outperforms the ensemble strategy, which validates the effectiveness of our common knowledge learning. Besides, the ensemble strategy possesses two obvious defects. Firstly, when there exist non-CNN models in the ensemble, it cannot employ any intermediate level based attacks, because such attacks require a pre-specified intermediate layer to obtain the intermediate level feature map. Unfortunately, the non-CNN models can hardly provide such feature maps. Secondly, the ensemble strategy tends to induce higher computational complexity during the attack process, especially when the model sizes of the substitute models are large. Meanwhile, once our student model is obtained, the time cost of our attack process is much lower than that of the ensemble strategy, because the student model is usually less complex than the teacher models. We compare the running time of generating adversarial examples in the last column of Table <ref>, which is obtained on a single RTX 3080Ti GPU. It is obvious that our method is faster than the ensemble attack. When ResNet-18 is employed as the student model, our method (41.10s) is more than 25× faster than the ensemble strategy (1174.65s).
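As a closing illustration of the training side, the following is a compact, hedged sketch of the CKL objective (the KD term L_KD plus the PCGrad-projected gradient distillation term L_Grad, with λ=500 as selected in the ablation study). It assumes PyTorch; the function names (kd_loss, pcgrad_combine, ckl_loss) are illustrative, the teachers are assumed to be frozen and in eval mode, and details such as the projection order or loss reduction may differ from the authors' implementation.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits_list):
    # L_KD: sum over teachers of KLdiv(T_i(x), S(x)) = sum_k S_k * log(S_k / T_i_k).
    s_prob = F.softmax(student_logits, dim=1)
    loss = 0.0
    for t_logits in teacher_logits_list:
        loss = loss + F.kl_div(F.log_softmax(t_logits, dim=1), s_prob, reduction="batchmean")
    return loss

def pcgrad_combine(grads):
    # Project g_i onto the normal plane of every g_j it conflicts with (g_i . g_j < 0),
    # then sum the projected gradients to obtain d(x).  Per-sample, image-shaped gradients assumed.
    projected = []
    for i, g in enumerate(grads):
        g_i = g.clone()
        for j, g_j in enumerate(grads):
            if i == j:
                continue
            dot = (g_i * g_j).flatten(1).sum(dim=1).view(-1, 1, 1, 1)
            denom = (g_j * g_j).flatten(1).sum(dim=1).view(-1, 1, 1, 1).clamp_min(1e-12)
            g_i = g_i - (dot < 0).float() * (dot / denom) * g_j
        projected.append(g_i)
    return torch.stack(projected).sum(dim=0)

def ckl_loss(student, teachers, x, y, lam=500.0):
    x = x.clone().requires_grad_(True)
    t_logits, t_grads = [], []
    for T in teachers:                                  # frozen teachers, eval mode assumed
        out = T(x)
        g = torch.autograd.grad(F.cross_entropy(out, y), x)[0]
        t_logits.append(out.detach())
        t_grads.append(g.detach())
    d = pcgrad_combine(t_grads)
    s_logits = student(x)
    s_grad = torch.autograd.grad(F.cross_entropy(s_logits, y), x, create_graph=True)[0]
    l_grad = ((s_grad - d) ** 2).flatten(1).sum(dim=1).mean()
    return kd_loss(s_logits, t_logits) + lam * l_grad
```

In a training loop this loss would be backpropagated into the student parameters only; the create_graph=True call is what makes the input-gradient constraint differentiable (a double-backward pass through the student).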
http://arxiv.org/abs/2307.02144v1
20230705094036
Kolam Simulation using Angles at Lattice Points
[ "Tulasi Bharathi", "Shailaja D Sharma", "Nithin Nagaraj" ]
cs.IT
[ "cs.IT", "math.CO", "math.IT" ]
Kolam Simulation using Angles at Lattice Points ========================================================================================= Kolam is a ritual art form practised by people in South India and consists of rule-bound geometric patterns of dots and lines. Single loop Kolams are mathematical closed loop patterns drawn over a grid of dots and conforming to certain heuristics. In this work, we propose a novel encoding scheme where we map the angular movements of a Kolam at lattice points into sequences containing 4 distinct symbols. This is then used to simulate the single loop Kolam drawing procedure via turtle moves in accordance with the desired angular direction at specific points. We thus obtain sequential codes for Kolams, unique up to cyclic permutations. We specify the requirements for the algorithm and indicate the general methodology. We demonstrate a sample of Kolams using our algorithm with a software implementation in Python. § INTRODUCTION Kolam is a collection of classes of predominantly geometric designs widely prevalent in South India, drawn on the freshly washed threshold of homes at dawn, using dry rice powder or soapstone powder or using a paste of rice or clay in water. Women practitioners draw Kolams from memory and practice, without reference to a sample image. It is well known that Kolam drawing requires focus and practice, as the possibilities of error are overwhelming. The Kolams we discuss here are drawn with reference to a grid of dots. In drawing single-loop Kolams, the artist has to make the correct decision about the direction of the loop in the vicinity of each point in the grid, failing which the Kolam cannot be completed and must be fully erased and re-drawn. Competent Kolam artists never make a mistake in executing a Kolam pattern, as the floor is cleaned before the Kolam is drawn upon it and erasure of the Kolam at any point damages its appearance. Kolams are thus `finished designs' in the sense that they are not improvised, although improvisation is not ruled out. Artists must be familiar with the Kolam they propose to execute. Thus, with each Kolam can be identified a unique sequence of moves, assuming a given starting point. Kolam artists may start the Kolam at any point they desire, but they have the entire image in their minds and execute the moves unerringly so as to obtain the desired final image. In this paper, we propose a deterministic algorithm which simulates this procedure. For this purpose, we establish a starting point for the Kolam to be drawn and translate the Kolam into a sequence of angular movements connected by smooth lines. The resulting sequence uniquely, up to cyclic permutations, describes the given Kolam. § LITERATURE SURVEY Kolam construction has received the attention of scholars from computer science and computer graphics, and an early review of extant methods was provided by <cit.>. An updated review of computational methods of Kolam generation was undertaken by <cit.>. A significant number of the methods for Kolam construction deconstruct the final image and propose algorithms to generate the Kolam by concatenating the constituent units together correctly <cit.>. Finite, repetitive and recursive Kolam structures, based on known traditional Kolams, are discussed by <cit.>, based on array rewriting methods discussed in <cit.>. It is relevant to note that of the large number of Kolam-like patterns that can be drawn, very few would, however, qualify as Kolams.
There is a notion of “correctness” or satisfaction of a set of heuristics, pertaining to symmetry and unicursality, which restricts the possibilities for a Kolam. <cit.> computes the number of isomorphic Kolams for particular dot grid dimensions. Identification of the typical repetitive patterns in the Kolam is a key part of the algorithmic procedures proposed by <cit.>. Recursive algorithms are used to generate sequences of Kolams of the same pattern. The variety and scope of such approaches can also be gauged from the discussion in <cit.>. Lindenmayer language and turtle moves have been used by various authors who recognized the nested patterns in Kolam <cit.>. Lindenmayer language is suitable for generative structures and was originally proposed for modeling the development of plants. The addition of turtle moves and turtle graphics enabled the depiction of both plant-like and closed loop structures. The angle of rotation for such Kolams is a fixed parameter, which may be taken in the positive or negative direction. It is noteworthy that while such algorithms do generate the desired final Kolam images, they do not necessarily reproduce the human act of Kolam drawing. In other words, the sequence of lines constructed may differ although the final image is the same. The methods discussed by <cit.> refer to the procedural nature of Kolam production and propose algorithms for Kolam generation by coding the unit tile enclosing each Kolam dot. The method is based on classifying the finite number of patterns that are generated around a Kolam dot by the looping Kolam line. It may be said that the sequential patterns so created are based entirely on local considerations, i.e. the desired pattern around a given Kolam dot. The digital sequence so generated is unique up to cyclic permutations and the starting point is selected arbitrarily or by setting up a convention, in a fixed orientation. The present algorithm shares these features. <cit.> also discusses N-lines, which reflect the overall (global) structure of the nodes and tree branches in Kolams. We observe that the human act of Kolam drawing is based on decisions about the direction of movement, taken at various points (lattice points) as the looping curve traverses the Kolam dot grid. The present paper therefore proposes an algorithm that mimics the human decision-making behaviour in drawing a Kolam. The decision-making takes into account the global requirements of the Kolam as well as local requirements, simultaneously. By `global' we mean here the overall shape of the Kolam and by `local' just the shape of the Kolam in the vicinity of a given Kolam dot. Our algorithm anticipates the overall structure of the curve in making a local determination. We first discuss the methodology involved in Kolam simulation using angles at lattice points. Second, the data pre-processing required for simulation is described. Third, the results of the Kolam simulation are presented. § TERMINOLOGY In order to discuss the algorithmic procedure, we introduce some terminology. We may borrow terminology from graph theory, lattices or knot theory in mathematics, as well as others. By Kolam, we refer to a stylized geometrical pattern of rigid or flexible lines, which is a distinctive cultural artefact of Southern India (although it may be practised elsewhere as well). A Kolam dot grid is the mathematical arrangement of dots, which is mandatory for drawing certain classes of Kolams, which we shall refer to as dot-Kolams. We shall refer to the dots of this underlying grid as Kolam dots.
The Kolams discussed in this paper are restricted to looping Kolams, or Line-Around-Dots (LAD) Kolams <cit.> in which the solid lines of the Kolam curve around the Kolam dots but never go through a Kolam dot. Traditionally these are called chuzhi Kolam, the Tamil technical word referring to a loop. We have a reference plane on which the dot grid is laid out and the perpendicular axes of this plane may be identified with the cardinal directions. The square dot grid, aligned with the cardinal directions, will be referred to as an n× n dot grid. In the case that the square dot grid is inclined at 45 degrees to the cardinal directions, we treat it as a rotated square Kolam. A rhombic Kolam <cit.> has a dot grid with a different arrangement of dots, with successive rows containing successive odd numbers of dots. Such a dot grid will be referred to using the notation 1-n-1, where n refers to the maximal width of the Kolam and is usually an odd number. Both square and rhombic Kolams are treated in this paper (see Fig. <ref>). A single-loop Kolam is a Kolam that is in the form of a closed loop that is entwined with the Kolam dot grid in LAD mode. Reference to the importance of this type of Kolam is also found in the earliest Western compilation of Kolams written by John Layard <cit.>. The traditional technical term for such closed loop Kolams is Brahmamudi, which is suggestive of the importance attached to single closed loops for which the starting/end point is ambiguous. The term mudi refers to a top-knot, and the connection with the mathematical concept of knots can also be observed. Here, a knot is understood to be a 2-dimensional tie whose ends cannot be identified. For a set of rules on the allowed and disallowed patterns in a looping Kolam, the reader is referred to <cit.>. Given a Kolam dot grid, we impose upon it a lattice structure such that every Kolam dot is enclosed within a unit cell, with the lattice points lying in the cardinal directions around it (see Figs. <ref>, <ref>, <ref>). We refer to the lattice point where the Kolam begins to be drawn as the starting point. We characterize the Kolam by the sequence of angles made at the lattice points with respect to the axes of the plane, assuming a certain starting point. The Kolam here goes through the lattice points (but not through the Kolam dots). The solid line forms smooth turns or goes straight through at each lattice point (see Figs. <ref>, <ref>). Thus, the Kolam is represented as a sequence of angles, once information about the dot grid (n× n or 1-n-1) and the starting point is given. A convention for the starting point is adopted; for square Kolams it shall be the 270° lattice point (lowest point) of the unit cell of the top-most and left-most Kolam dot; for rhombic Kolams it shall be the 270° lattice point of the unit cell of the left-most Kolam dot (see Fig. <ref>). These roughly correspond to natural starting points for right-handed Kolam artists, but the choice is otherwise arbitrary. The lattice point at which the Kolam is currently being drawn is referred to as the current point. The three succeeding lattice points through which the Kolam must pass after the current point are referred to as the first, second and third points respectively, and the numbering is with respect to the current point. § METHODOLOGY Kolam selection is made before commencing the act of drawing. Kolam drawing is initiated by specifying the corresponding dot grid. (Note that several different Kolams may have the same dot grid.)
For the purposes of the algorithm, an abstract lattice structure is constructed over the Kolam dot grid by placing a rotated square unit cell around each Kolam dot. The Kolam is drawn starting at a lattice point. Thereafter, a decision as to the direction in which to extend the line has to be taken. Mathematically speaking, there are infinitely many possibilities, but Kolam construction entails discretizing these decisions into just 4 possible outcomes, viz. 45°, 135°, -45°, -135°. Note that the angular values are established with reference to the orthogonal axis system to which the Kolam dot grid is aligned. Figures <ref> and <ref> depict the idea of decision making while drawing a Kolam. The shape of the Kolam line between two lattice points may be convex or concave, as can be seen from the diagram. This choice of shape requires anticipating two steps at each decision-making point; thus the algorithm will look ahead to a maximum of three subsequent lattice points at each stage of its development. Suppose that the Kolam dot grid has points separated by a distance L√(2). Then the unit cell in the lattice has sides of length L. Starting at a particular lattice point, for every distance of length L along the edges of the lattice cells we select one of the four angular directions given above. The angle information at the lattice points, which determines the sequential path of a Kolam, is not arbitrary and is pre-determined by the shape of the Kolam to be depicted. In other words, the Kolam is fully described by the sequence of angles, up to isomorphisms. The choice of angle at lattice points and how to draw a Kolam using it are shown in Figures <ref> - <ref>. § DATA PRE-PROCESSING §.§ Kolam encoding Every lattice point is like an origin in an XY-plane, in the sense that it has no pre-defined direction. At each lattice point, we can assume four inter-cardinal directions as possibilities, as shown in Figure <ref>. We represent the 4 angles, 45°, 135°, -45° and -135° (i.e. the inter-cardinal directions), by the symbols a, b, c and d respectively, for ease of use as well as generalization purposes (see Table <ref>). Now, we claim that every dot Kolam of the Brahmamudi (single closed loop) variety can be represented using a finite sequence of elements from this alphabet (although the converse is not true). Further, each such sequence uniquely describes the corresponding Kolam, up to isomorphisms. A 1-3-1 Kolam with its angle information in degrees is shown in Figure <ref>. The angles of the 1-3-1 Kolam are replaced with their corresponding symbols, as shown in Figure <ref>. The Kolams and their corresponding sequences are given in Tables <ref> and <ref>. Analyzing the dot Kolams under study, we assert that there are only 4 distinct shapes to be modeled. These shapes are shown in Figure <ref>. These four basic Kolam shapes also cover distinct numbers of sides of the lattice cells: shape-I extends over only one side (distance of L), shape-II extends over two sides (distance equal to 2L), shape-III extends over three sides (distance of 3L) and shape-IV extends over four sides of the unit cell (distance of 4L). Most of the single-loop Kolams involve these 4 shapes only. Kolam drawing proceeds by shape determination after every distance of length L. Given the four distinct shapes possible, the shape determination for each segment requires looking ahead to up to three successive steps (i.e.
4 successive elements of the sequence) starting from the current element, with a minimum of one step, which is adequate if there is no change in the direction (i.e. shape-I), to a maximum of 3 steps, if there is a direction change at every lattice point around the Kolam dot (i.e. shape-IV). The order of the changes in the angles determines how to draw a particular shape. Thus, these sequential patterns are nothing but the shape of a Kolam as it is drawn sequentially. The pattern and its corresponding shape, with an arrow mark showing its direction of drawing, are given in Figures <ref>-<ref>. The encoded sequence pertaining to a Kolam is always processed from left to right. Therefore, the same Kolam can be represented by shifted versions or cyclic permutations of the encoding sequence. The shape determination conditions are modeled as a nested loop in our algorithm. We denote the elements of the sequence as follows:
1. Current element (e_c) - the outer loop element while iterating the inner loop
2. First element (e_1) - the first element succeeding the current element in the sequence
3. Second element (e_2) - the second element succeeding the current element
4. Third element (e_3) - the third element succeeding the current element
The four patterns correspond to the number of changes in the succeeding angles, i.e. the number of changes in the succeeding elements of a sequence compared to the previous element of the sequence while iterating through it. They are:
Shape-I: 0 consecutive changes ≡ current element is the same as the first element. Example: aa (look ahead to 1 step)
Shape-II: 1 consecutive change ≡ current and first element are different AND first and second element are the same. Example: abb (look ahead to 2 steps)
Shape-III: 2 consecutive changes ≡ current and first element are different AND first and second element are different AND second and third element are the same. Example: cabb (look ahead to 3 steps)
Shape-IV: 3 consecutive changes ≡ current and first element are different AND first and second element are different AND second and third element are different. Example: dbac (look ahead to 3 steps)
§.§ Handling initialization issues The Kolam sequence always begins as per the convention discussed in section <ref>. But some Kolams begin with the shape-IV (dbac) given in Figure <ref>, which starts drawing at the right lattice point of the beginning dot. This location does not conform to our convention for the beginning of a sequence. This situation is covered by the following rule incorporated into the present algorithm: If the third element from the last of the sequence is `d', then `d' will be added at the beginning of the sequence before iterating the sequence, and the shape-IV (dbac) will be drawn first when we start iterating the sequence. If the third element from the last of the sequence is `b', then `b' will be added at the beginning of the sequence and also at the third position from the last of the sequence before iterating the sequence. This indicates that the Kolam does not begin with the shape-IV (dbac) and helps in identifying the final shape. §.§ Terminating condition Our algorithmic conditions look ahead to at most 3 elements from the current element to determine the shape to be drawn. Two z's are added at the end of the sequence; they act as nulls (symbols with no effect), and without them the algorithm may raise an error or select a wrong shape when the final shape is shape-I or shape-II. For example, if the sequence ends with `dzz', there is a change from d→z and no change from z→z.
Though it looks like a shape-II pattern, `z' is a null and this is not any of the possible shape-II patterns specified in Figures <ref> and <ref>. As it does not meet any of our algorithmic conditions, the loop will be terminated after iterating through these z's. The Kolam drawing will be finished. For shape-III and shape-IV, even if there are no z's, no problem will arise. For example, if the sequence ends with `dcab', the shape-IV (dcab) will be drawn and the loop will be terminated without any errors. The Kolam drawing will be finished. But if z's are not added, an error is raised or a wrong shape is selected when the final shape is either shape-I or shape-II. For example, if the sequence ends with `aab', there is no change from a→a and this is a shape-I (a) pattern, but there is a change from a→b. It is the first condition with no shape specified in section <ref>. Now `a' becomes the current element. There is a change from a→b and the algorithm needs to take a second step to determine the shape. As there is no element after `b', an error will be raised. To avoid these errors and discrepancies, and to encode all Kolams in a standard way, we add z's at the end of a Kolam sequence. The loop will always be terminated after iterating through these z's. § KOLAM SIMULATION ALGORITHM The algorithm for Kolam simulation depends on the number of changes in the successive symbols, which gives us the pattern for drawing one of the 4 possible shapes discussed in section <ref>. The sequence is iterated over and, while it is being iterated, the algorithm checks for the following conditions:
1. n consecutive no-changes (the same symbols, or 0 changes) – draw shape-I n-1 times and its corresponding orientation (from memory while iterating). If n=1, then no shape will be drawn and it is a no-shape condition.
2. 1 consecutive change, or 2 consecutive changes with the second element equal to `z' – draw shape-II and its corresponding orientation (from memory while iterating).
3. 2 consecutive changes and the second element is not equal to `z' – draw shape-III and its corresponding orientation (from memory while iterating).
4. 3 consecutive changes and the third element is not equal to `z' – draw shape-IV and its corresponding orientation (from memory while iterating).
Formally, a complete description of the algorithm is given in Algorithm <ref>. We can also use the sequence in a cyclic iteration, without z's and with `b' added at the third position from the last of the sequence. In this case the algorithm draws the Kolam infinitely many times without stopping. The output of this Kolam is given in Figure <ref>. § RESULTS AND DISCUSSION We simulated nine Navagraha (3×3 square) Kolams, one 1-3-1 Kolam, one 1-5-1 Kolam and one 1-7-1 Kolam using this methodology in Python using the Turtle library. The Navagraha Kolam digital sequences are given in Table <ref> and the rhombic Kolam sequences are given in Table <ref>. The Kolam drawing outputs are presented in Figures <ref>-<ref>. Each figure shows two images taken while simulating the Kolam and an image of the Kolam design after its final completion. Real-life Kolams can be physically drawn by starting at any point and in any direction. But in our proposed methodology, we have used a single starting position and a single starting direction (135°) for Kolam encoding. The proposed methodology is successful in the simulation of drawing the Kolams by using angles at the lattice points of the lattice structure imposed on a Kolam dot pattern. More known Kolams can be encoded and added to the library.
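To give a flavour of how such an encoded sequence can drive a turtle-based simulation, the following is a minimal Python sketch using the standard turtle module. It only traces the straight-segment skeleton of the path implied by the angle code (symbol → heading, one cell side per step); the smooth arcs around the dots, the initialization rules and the full shape determination described above are omitted, and the example call uses a hypothetical sequence rather than one of the tabulated codes.

```python
import turtle

# Symbol -> heading (degrees), following the encoding above: a=45, b=135, c=-45, d=-135.
ANGLE = {'a': 45, 'b': 135, 'c': -45, 'd': -135}

def draw_skeleton(sequence, L=40, start=(0, 0)):
    # At every lattice point, take the heading encoded by the current symbol
    # and advance by one cell side L; stop at the terminating z's.
    t = turtle.Turtle()
    t.penup()
    t.goto(start)
    t.pendown()
    for symbol in sequence:
        if symbol == 'z':
            break
        t.setheading(ANGLE[symbol])
        t.forward(L)
    turtle.done()

# Hypothetical example call (not an actual tabulated code):
# draw_skeleton("abbadccd" + "zz")
```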
§ CONCLUSION Kolams are fascinating geometrical designs with a very rich cultural and traditional relevance. In this work, we have focused on single-loop dot Kolams which start and end at the same point (also known as Brahmamudi). We have mimicked the human act of Kolam drawing, which entails decision-making at each lattice point of the lattice structure imposed on a Kolam dot grid. The decision-rule for proceeding is based on anticipating the next three moves required. We propose a novel encoding scheme by mapping the angular movements of the Kolam at the lattice points into a sequence containing exactly four distinct symbols. The decision-rules are mapped onto the distinct permutations of lengths 2, 3 and 4 containing 4 distinct symbols. This methodology is intuitive and helps us to easily place the known Brahmamudi Kolams in the digital space and to simulate Kolam drawing for educational and practical purposes. We propose to investigate modern digital applications of Kolam in the area of data encoding and cryptography in our future work. § ACKNOWLEDGMENTS The work presented in this paper was supported by a grant of the Ministry of Education, Govt. of India, under the Indian Knowledge Systems (IKS) Competitive Research Program, 2021.
http://arxiv.org/abs/2307.02775v1
20230706050534
Divergent geodesics in the Universal Teichmüller space
[ "Xinlong Dong", "Hrant Hakobyan" ]
math.CV
[ "math.CV", "30F60, 30C62" ]
Thurston boundary of the universal Teichmüller space T(𝔻) is the space PML_bdd(𝔻) of projective bounded measured laminations of 𝔻. A geodesic ray in T(𝔻) is of generalized Teichmüller type if it shrinks the vertical foliation of a holomorphic quadratic differential. We provide the first examples of generalized Teichmüller rays which diverge near Thurston boundary PML_bdd(𝔻). Moreover, for every k≥ 1 we construct examples of rays with limit sets homeomorphic to k-dimensional cubes. For the latter result we utilize the classical Kronecker approximation theorem from number theory, which states that if θ_1,…,θ_k are rationally independent reals then the sequence ({θ_1 n},…,{θ_k n}) is dense in the k-torus 𝕋^k. § INTRODUCTION In this paper we continue the study of the asymptotic behavior of geodesics in the universal Teichmüller space T(𝔻) started in <cit.>. In these works the main goal was to exhibit large families of geodesics which converge at Thurston's boundary ∂_∞ T(𝔻). The latter can be identified with the space of projective classes of bounded geodesic measured laminations of the hyperbolic plane PML_bdd(𝔻), cf. <cit.>. In this paper we construct the first examples of divergent geodesics in T(𝔻). Moreover, we show that the limit set of a divergent geodesic in PML_bdd(𝔻) can be homeomorphic to a k-dimensional cube [0,1]^k, for every k≥ 1. For finite dimensional Teichmüller spaces the first examples of divergent Teichmüller geodesics were given by Lenzhen in <cit.>. Limit sets of Teichmüller and Weil-Petersson geodesics in the context of finite dimensional Teichmüller spaces were actively investigated recently, see for instance <cit.>. Thus the present paper can be thought of as a first step in obtaining analogous results for T(𝔻). The techniques used in this paper are very different from the finite dimensional ones and are more analytical in nature. Similar to <cit.> we rely heavily on careful estimates of the classical modulus for degenerating families of curves in domains with chimneys. In <cit.> it was essentially shown that if D is a domain with chimneys such that the blowups of D near boundary points converge to a half plane, a complement of a quadrant, or a complement of a half line, then the geodesic in T(𝔻) corresponding to D is convergent. The idea behind constructing a divergent geodesic then is to consider domains so that the blowups near a boundary point do not converge (in the Hausdorff topology). Specifically, we consider domains D with infinitely many chimneys accumulating to {0}×(0,∞) so that on some scales D “looks like a half-plane” and on other scales it “looks like a quadrant”. Interestingly, such a construction can yield both convergent as well as divergent geodesics. Theorems <ref> and <ref> characterize which of the two possibilities occurs depending on the widths and the relative positions of the chimneys. The rest of this introduction describes our results in more detail. §.§ Universal Teichmüller space and Thurston boundary Let 𝔻 and 𝕊^1 denote the unit disk and the unit circle in the complex plane ℂ, respectively. The universal Teichmüller space, denoted by T(𝔻), is defined as the collection of all quasisymmetric (or qs) self-maps of 𝕊^1 which fix 1, i and -1. The universal Teichmüller space may be equipped with the Teichmüller metric d_T(·,·) as follows.
Given f,g∈ T(𝔻), the Teichmüller distanse between f and g is defined by d_T(f,g) = 1/2inf_h log K_h, where, the infimum is over all the quasiconformal (or qc) extensions h:→ of g∘ f^-1:𝕊^1→𝕊^1, and K_h denotes the maximal dilatation of the qc map h, see Section <ref> for the definitions of these terms. Thurston boundary of T(𝔻), denoted by ∂_∞T(𝔻) was defined and studied in <cit.>. In particular, it was shown in <cit.> that ∂_∞T(𝔻) can be identified with the space of projective bounded measured laminations of the unit disk , denoted by PML_bdd(). In <cit.> Šarić and the second named author studied the asymptotic behavior of geodesic rays in T(). Specifically, suppose φ is a holomorphic quadratic differential φ on . If for every t∈[0,1) the Beltrami differential μ_φ(t)=t|φ|/φ is extremal (see Section <ref> for the definition of extremality) then the corresponding path t↦ T_φ(t):=[μ_φ(t)] in T() is a geodesic ray, which will be called a generalized Teichmüller geodesic ray corresponding to φ. If φ is an integrable holomorphic quadratic differential then μ_φ(t) is uniquely extremal by <cit.> and hence T_φ is a geodesic ray. We call such a ray simply a Teichmüller geodesic ray corresponding to φ. In <cit.> it was proved that every Teichmüller geodesic ray in T() converges (in weak ^* topology) to a unique point in ∂_∞T(𝔻)=PML_bdd(), and that distinct Teichmüller geodesic rays converge to distinct points in PML_bdd(). In particular, there is an open and dense set of geodesics in T(𝔻) which have distinct limits at Thurston boundary. In <cit.> if was shown that for a large class of generalized Teichmüller geodesic rays the convergence to unique points of PML_bdd() still holds, however, distinct geodesics can converge to the same point at infinity. In view of the above it is natural to ask if there are divergent geodesics in T(𝔻), and if so what are the possible sets in PML_bdd(𝔻) which can occur as limit sets of such divergent geodesics. §.§ Main results To describe our results more precisely we recall the construction of a natural class of generalized Teichmüller rays. Suppose we are given a simply connected domain D⊊ℂ with a chimney. This means that there is a non-trivial finite interval (a,b)⊂ℝ such that (a,b)×(0,∞)⊂ D and ({a}∪{b})× (t,∞)⊂∂ D for some t>0. Let ϕ_D:𝔻→ D be a conformal map and consider the holomorphic quadratic differential φ_D=dw^2, where w(z)=ϕ_D(z), z∈𝔻. It turns out, see e.g. <cit.>, that in this case Beltrami differential t|φ_D|/φ_D is extremal, and the path t↦ T_φ_D(t)=[t|φ_D|/φ_D] is a generalized Teichmüller ray. Geometrically t|φ_D|/φ_D can be thought of as the Beltrami coefficient corresponding to the vertical compression map (x,y)↦(x,1-t/1+t y) of D, which degenerates as t→1, see Section <ref>. Šarić and the second author considered rays {T_φ_D(t)} corresponding to domains with finitely many chimneys or infinitely many chimneys (a_i,b_j) × (0,∞) so that the sequences {a_n} and {b_n} do not have accumulation points in ℝ. They showed that for such domains the corresponding generalized Teichmüller rays have limits in Thurston's boundary ∂_∞T(𝔻), see <cit.>. In this paper we consider domains with accumulating chimneys. Specifically, let {x_n}_n=0^∞ be a strictly decreasing sequences of positive numbers so that x_n→0 as n→∞. Letting Q_4={ z: z >0, z <0 } denote the forth quadrant in ℂ, we define the domain W=W({x_n})⊂ℂ as follows: W= Q_4∪⋃_n=0^∞ ((x_2n+1,x_2n) ×ℝ). 
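As a concrete illustration (ours, not part of the construction), the chimney bases (x_{2n+1},x_{2n}) can be listed for a sample sequence, say x_0=1 and x_n=a^{p^{n-1}} for n≥1 as in the corollary below, with the arbitrary values a=0.5 and p=2; one checks that they are pairwise disjoint intervals accumulating only at 0.

# Illustrative only: x_0 = 1 and x_n = a**(p**(n-1)) for n >= 1, with sample values a = 0.5, p = 2.
a, p, N = 0.5, 2.0, 6
x = [1.0] + [a ** (p ** n) for n in range(N)]                  # x_0, x_1, ..., x_N
chimneys = [(x[2 * n + 1], x[2 * n]) for n in range(N // 2)]   # bases (x_{2n+1}, x_{2n})
assert all(lo < hi for lo, hi in chimneys)                     # nondegenerate intervals
assert all(chimneys[k + 1][1] < chimneys[k][0] for k in range(len(chimneys) - 1))  # disjoint
print(chimneys)   # [(0.5, 1.0), (0.0625, 0.25), (1.52587890625e-05, 0.00390625)]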
In Theorem <ref> we give a complete description of the asymptotic behavior of the generalized Teichmüller ray corresponding to W, provided the sequence x_n converges to 0 fast enough. In particular, we show that the ray t↦ T_φ_W(t) either converges or it diverges and its limit set Λ⊂ PML_bdd(𝔻) is homeomorphic to [0,1]. To state our results more precisely we introduce some notation. Given a geodesic in the unit disk we denote by δ_ the Dirac mass at . For every chimney C_n=(x_2n+1,x_2n)×ℝ we consider the point z_n∈∂𝔻, which corresponds to points in W which are “in C_n and are near ∞”, i.e., if z→ z_n then φ_W(z)∈ C_n and (φ_W(z))→∞. Let _n^+ and _n^- be the geodesics in 𝔻 connecting the preimages of x_2n and x_2n+1 to z_n, respectively. Let λ_W be the geodesic measured lamination supported on these geodesics and giving each a unit mass, i.e., λ_W=∑_n=0^∞ (δ__n^+ + δ__n^-). We also denote by g the geodesic connecting -1 to -i. The following result shows that the limit set of t↦ T_φ_D(t) is either a point or an interval. Let W be a domain defined as in (<ref>) s.t. x_n+1/x_nn→∞⟶ 0. Then the limit set Λ⊂ PLM_bdd(𝔻) of the generalized Teichmüller ray t↦ T_φ_W(t) can be described as follows: Λ = { s δ_g + (2/3)λ_W : s∈[m,M]}, where m =1+lim inf_n→∞∑_i=0^2n+1(-1)^ilog x_i/log x_2n+1, M=1+lim sup_n→∞∑_i=0^2n+1(-1)^ilog x_i/log x_2n. In particular, t ↦ T_φ_W(t) diverges at ∂_∞T(𝔻) if and only if m<M. Theorem <ref> is an easy consequence of Theorem <ref> and is proved after the statement of the latter. Applying Theorem <ref> to x_n≍1/n! yields an example of a convergent geodesic ray with the limit point 3/2δ_g+2/3λ_W∈ PML_bdd(𝔻), see Examples <ref> and <ref>. To obtain divergent geodesics we need to consider sequences x_n which converge to 0 faster than 1/n!. For instance, we have the following corollary of Theorem <ref>, see Example <ref>. Fix p>1. Let x_0=1, x_1=a and x_n=x_n-1^p for n>1 (equivalently x_n=a^p^n-1). Then t↦ T_φ_W(t) is a generalized Teichmüller geodesic ray which diverges at Thurston boundary with a limit set given by (<ref>), where m=1+1/p+1, and M=1+p/p+1. §.§.§ Moduli of famililes of curves To prove Theorems <ref> and <ref> we restate them in terms of the limiting behavior of moduli of families of curves. Let w=ϕ(z) be the conformal map of the unit disk 𝔻 onto W such that ϕ^-1 takes w=0 to z=-1, w=1 to z=1, and as z approaches -1 we have that ϕ(z) tends to ∞ while staying in the fourth quadrant of ℂ , see Figure <ref>. Given open disjoint arcs I,J on the unit circle ∂𝔻, let _I,J = (I,J;𝔻) be the curve family connecting I to J in 𝔻. For every >0 we denote by V_ the vertical compession map (x,y)↦(x, y). Observe that if =(t)=1-t/1+t, then →0 as t→1 , that is, as the corresponding point [t|ϕ_W|/ϕ_W]∈ T(𝔻) leaves every compact subset of T(𝔻). Let ^_I,J =V_(ϕ(_I,J)). Using the connection between the Liouville measure and moduli of curve families, see Lemma <ref> or <cit.>, understanding the behavior of the geodesic t↦ T_ϕ_W(t) is equivalent to understanding the behavior of ^_I,J for different choices of pairs of arcs I,J∈𝕊^1, as →0. Namely, to show that a point λ∈ PML_bdd(𝔻) is a limit point of the generalized Teichmüller geodesic t↦ T_ϕ_W(t) it is sufficient to show that there is a sequence _n→0 so that an appropriate rescaling of the sequence ^_n_I,J converges to λ(I× J). It follows essentially from <cit.> that if the intersection of the box I× J and geodesic lamination 𝔏= {g}∪⋃_n=0^∞{_n^+,_n^-} is empty then lim_→0^_I,J/1/πlog1/=0. 
On the other hand, if (I× J)∩𝔏 contains a single geodesic from 𝔏 that is not g then lim_→0^_I,J/log1/=2/3. If (I× J)∩𝔏 = g we prove the following. Suppose W satisfies the conditions of Theorems <ref> (or Theorem <ref>). If (I× J) ∩𝔏 =g then lim inf_→0^_I,J/1/πlog1/ = m, lim sup_→0^_I,J/1/πlog1/ = M. Lemma <ref> is a key result used in the proof of Theorems <ref> and <ref>. It follows from Proposition <ref> and Lemma <ref>. §.§.§ Divergent geodesics with higher dimensional limit sets Using Theorem <ref> we construct for every k∈ℕ a domain 𝒲_k such that the limit set corresponding to the ray t→ T_φ_𝒲_k(t) is homeomorphic to [0,1]^k. For instance, for k=2 we have the following result, which follows from Theorem <ref>. Fix a∈(0,1), and p,q>1 so that log p/ log q is irrational. Let x_n=a^p^n-1, x_n' = 3-a^q^n-1, and C_n, C_n' be chimneys over the intervals (x_2n+1,x_2n) and (x'_2n,x_2n+1'), respectively. Then the generalized Teichmüller ray t↦ T_φ_𝒲_p,q(t) corresponding to the domain W_p,q={z : 0< (z)<3, (z)>0}∪⋃_n=1^∞ C_n∪ C'_n diverges at ∂_∞T(𝔻) and its limit set Λ⊂ PML_bdd(𝔻) in Thurston boundary is homeomorphic to [0,1]^2. A key result used in the proof of Theorem <ref> is the dynamical fact that every orbit {T_θ^∘ n(x)}_n=1^∞ of the irrational rotation T_θ : x↦{θ +x} is dense in the circle ℝ/ℤ, where θ=log p / log q and { y }=y-⌊ y ⌋ denotes the fractional part of a real number y. To prove a higher dimensional version of Theorem <ref>, see Theorem <ref>, we construct domains with k families of chimneys accumulating to distinct half-lines corresponding to sequences of the form {a^p_i^n}_n=1^∞, i∈{1,…,k}, such that the numbers log p_1, …, log p_k are rationally independent. We use a well known approximation theorem of Kronecker, which generalizes the above mentioned fact about the density of the orbits of an irrational rotation of 𝕊^1 to higher dimensional tori 𝕋^k=𝕊^1 ×…×𝕊^1. In Theorems <ref> and <ref> if λ_1,λ_2 ∈Λ then the supports of λ_1 and λ_2 are the same. This raises the following question. Is there a limit set Λ⊂ PML_bdd(𝔻) and geodesic measured laminations λ_1 and λ_2, so that λ_1,λ_2 ∈Λ and the supports of λ_1 and λ_2 are distinct. It would be interesting to undestand the possible topology of limit sets of geodesic rays in T(𝔻). For instance, even the following basic question seems to be open. Is there a Teichmüller geodesic in T(𝔻) so that the limit set Λ⊂ PML_bdd(𝔻) has nontrivial topology (e.g. π_1(Λ) ≠{0})? The rest of this paper is organized as follow. In Section <ref> we provide the necessary definitions, notation, and some auxiliary results. In Section <ref> we state Theorem <ref>, prove Theorem <ref> and provide several explicit examples of divergent and convergent geodesics in T(𝔻). Sections <ref> and <ref> are the technical core of the paper, which are devoted to the proof of Theorem <ref>. In Section <ref> we prove the existence of generalized Teichmüller geodesics with higher dimensional limit sets. § BACKGROUND §.§ Visual boundary of the universal Teichmüller space If E is a subset of 𝕊^1 we will denote by |E| the arclength (or Hausdorff 1-measure) of E. A homeomorphism h:𝕊^1→𝕊^1 is said to be quasisymmetric if there is a constant 1≤ M <∞ such that 1/M≤|h(I)|/|h(J)|≤ M, for all circular arcs I, J in 𝕊^1 with a common boundary point and disjoint interiors such that |I|=|J|. A homeomorphism is quasisymmetric if and only if it extends to a quasiconformal map of the unit disk. 
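As a toy numerical illustration of the quasisymmetry condition above (ours, not taken from the paper), one can estimate the constant M for an explicit circle diffeomorphism by sampling pairs of adjacent arcs of equal length in angle coordinates.

import math

def h(theta):
    # A sample circle diffeomorphism in angle coordinates: h'(θ) = 1 + 0.5 cos θ > 0
    # and h(θ + 2π) = h(θ) + 2π, so it induces a homeomorphism of the unit circle.
    return theta + 0.5 * math.sin(theta)

def qs_ratio(theta, t):
    # |h(I)| / |h(J)| for the adjacent equal-length arcs I = (θ, θ + t), J = (θ - t, θ).
    return (h(theta + t) - h(theta)) / (h(theta) - h(theta - t))

M_est = 1.0
for i in range(360):
    theta = 2 * math.pi * i / 360
    for t in (0.01, 0.1, 0.5, 1.0, 2.0):
        r = qs_ratio(theta, t)
        M_est = max(M_est, r, 1.0 / r)
print(M_est)   # a finite sampled bound, consistent with h being quasisymmetric

Such sampling only suggests quasisymmetry, of course; in the arguments below the condition enters through the quasiconformal extensions recalled next.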
The universal Teichmüller space, denoted by T(𝔻), consists of all quasisymmetric h: 𝕊^1 →𝕊^1, which fix 1,i and -1. The Teichmüller distanse between elements f and g in T(𝔻) is defined as follows.: d_T(f,g) = 1/2inflog K_g∘ f^-1, where, given a quasisymmetric mapping h of 𝕊^1, we denote K_h = inf{ K_h̃ | h̃:𝔻→𝔻 h}, with K_h̃ denoting the maximal dilatation of the quasiconformal map h̃ extending h to 𝔻. We refer to <cit.> for background on quasisymmetric and quasiconformal maps and, in particular, for the definitions of the maximal dilatation of a quasiconformal map and quasiconformal extensions. The universal Teichmüller space T(𝔻) may also be defined as the set of equivalence classes of Beltrami coefficients [μ]∈ B_1/∼, where B_1 is the unit ball in L_∞(𝔻), and μ∼ν whenever the corresponding quasiconformal mappings f^μ,f^ν:𝔻→𝔻 coincide on the boundary circle, i.e. f^μ|_𝕊^1 = f^ν|_𝕊^1, see <cit.>. Given two Beltrami coefficients μ_0 and ν_0 the Teichmüller distance between [μ_0] and [ν_0] in T(𝔻) is defined as follows d_T([μ_0],[ν_0]) = 1/2inf_μ∈[μ_0] ν∈[ν_0]log1+μ-ν/1-μ̅ν_∞/1-μ-ν/1-μ̅ν_∞. Given a quasisymmetric mapping h:𝕊^1→𝕊^1, a quasiconformal mapping f:𝔻→𝔻 continuously extending h is called an extremal quasiconformal mapping (for its boundary values) if it has the smallest maximal dilatation K_f among all such extensions of h to 𝔻. A Beltrami coefficient μ is said to be extremal or uniquely extremal if μ_∞≤ν_∞μ_∞ < ν_∞, respectively, whenever ν∼μ and ν≠μ. If μ is an extremal Beltrami coefficient then μ_s=(1+|μ|)^s-(1-|μ|)^s/(1+|μ|)^s+(1-|μ|)^sμ/|μ|, is also extremal for every s∈[0,∞), see <cit.>. Moreover, with the notation as above we have _T([0],[μ_s]) = s·_T([0],[μ]). Therefore, the path s↦ [μ_s] is a geodesic ray in the Teichmüller metric in T(𝔻), i.e. it is a geodesic path starting at [0], passing through [μ] and leaving every compact set in T(𝔻) as s→∞. The collection of all geodesic rays in T(𝔻) is the visual boundary of the universal Teichmüller space, see <cit.>. §.§ Thurston boundary of T(𝔻) In <cit.> Thurston's boundary of the Teichmüller space T(S) of a closed surface S was defined by embedding T(S) into the space of geodesic currents on S. In <cit.> the definition for a Thurston type boundary of Techmüller spaces was given for all Riemann surfaces, and in particular for T(𝔻). We briefly recall some of the relevant notation, see <cit.>. The space G(𝔻) of oriented geodesics on 𝔻 can be identified with 𝕊^1×𝕊^1∖ diag. Observe that a quasisymmetric mapping f: 𝕊^1 →𝕊^1 induces a self-map of the space of geodesics G(𝔻) by mapping a hyperbolic geodesic _(x,y) to _(f(x),f(y)), for every pair of distinct point x,y∈𝕊^1. We denote by f̃ the mapping induced by f on the space of geodesics. A geodesic current is a Radon measure on G(𝔻). The Liouville current ℒ is a geodesic current such that for every Borel set A⊂𝕊^1×𝕊^1 ∖ diag we have ℒ(A)=∫_A |dx| |dy|/|x-y|^2. An easy calculation shows that for a box of geodesics A=[a,b]× [c,d] we have ℒ([a,b]× [c,d])=log(a-c)(b-d)/(a-d)(b-c). The universal Teichmüller space T(𝔻) can be embedded into the space 𝒞(𝔻) of geodesic currents via the Liouville embedding by setting ℒ(f)=(f̃)^*(ℒ)∈𝒞(𝔻), where the right hand side denotes the pullback of the Liouville measure by f̃. Moreover, whenever f is a quasisymmetric then ℒ(f) is a bounded geodesic current in the sense that sup_A ℒ(f)(A)<∞, where the supremum is over all geodesic boxes A such that ℒ(A)=log 2 (thus A is a Mobius image of the box [1,i]×[-1,-i]). 
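As a quick sanity check of this normalization (ours, not part of the text), the displayed cross-ratio formula indeed gives ℒ([1,i]×[-1,-i])=log 2:

import cmath

def liouville_box(a, b, c, d):
    # log((a - c)(b - d) / ((a - d)(b - c))) for a box of geodesics [a,b] x [c,d]
    return cmath.log((a - c) * (b - d) / ((a - d) * (b - c)))

print(liouville_box(1, 1j, -1, -1j))   # (0.6931471805599453+0j), i.e. log 2
print(cmath.log(2))                    # matches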
Liouville embedding ℒ is in fact a homeomorphism of T(𝔻) onto its image in 𝒞(𝔻) equipped with the uniform weak* topology, see <cit.>. As a set Thurston boundary of T(𝔻), which is denoted by ∂_∞T(𝔻), is defined as the collection of asymptotic rays to ℒ(T(𝔻)) in 𝒞(𝔻). Equivalently, ∂_∞T(𝔻) can be identified with the space of projective classes of bounded measured laminations on 𝔻, or PML_bdd(𝔻), see <cit.>. §.§ Generalized Teichmüller rays in T(𝔻). In this paper we continue the study of behavior of geodesic rays in T(𝔻) as they approach infinity, i.e. leave every compact subset of T(𝔻) started in <cit.>. Specifically, let φ be a holomorphic quadratic differential on 𝔻. Suppose that μ_φ(t):=t|φ|/φ is an extremal Beltrami differential for some (or equivalently all) t∈(0,1). By (<ref>) we have d_T([0],[μ_φ(t)]) = 1/2log1+μ_φ(t)_∞/1-μ_φ(t)_∞ = 1/2log1+t/1-t. Therefore, if 0<s<t<1 then by (<ref>) again we have d_T([μ_φ(s)],[μ_φ(t)])=d_T([0],[μ_φ(t)])-d_T([0],[μ_φ(s)]). Hence the path t↦ [t|φ |/φ] is a geodesic ray in the Teichmüller metric in T(𝔻). We say that the path (<ref>) is a generalized Teichmüller ray if the Beltrami coefficient t|φ |/φ is extremal for some (or equivalently all) t∈(0,1). §.§ Generalized Teichmüller rays and vertical compression A natural class of generalized Teichmüller rays can be obtained as follows. Given a simply connected domain D⊊ℂ and a conformal map ϕ_D:𝔻→ D consider the holomorphic quadratic differential φ_D=dw^2, where w(z)=ϕ_D(z), z∈𝔻. If the Beltrami differential t|φ_D|/φ_D happens to be extremal for some t∈(0,1) then the generalized Teichmüller ray T_φ_D(t)=[μ_φ_D(t)] for φ=φ_D can alternatively be described as follows, see <cit.> For every 0<≤ 1, let V_ be the vertical compression map of the plane, i.e. V_(x,y)=(x, y), and let ϕ_D, be a conformal map of the disc 𝔻 onto V_(D) and let ϕ_D:=ϕ_D,1:𝔻→ D. The mapping ψ_D,:=ϕ_D,^-1∘ V_∘ϕ_D is a quasiconformal mapping of 𝔻 onto itself with the Beltrami differential given by (1-/1+) |φ_D|/φ_D and the maximal dilatation 1/ε. If V_ (respectively, ϕ_D,^-1∘ V_∘ϕ_D) is an extremal quasiconformal map for its boundary values on ∂D (respectively, on 𝕊^1), then the path ↦τ_ε,D= [(1-/1+) |φ_D|/φ_D] with ε decreasing from 1 to 0 is a generalized Teichmüller ray in T(𝔻), and we have ([0],τ_,D) =1/2log^-1→0⟶∞. §.§ From Teichmüller rays to conformal modulus Suppose D is a domain in the complex plane as above, and ↦τ_,D is the corresponding Teichmüller ray. Abusing the notation slightly, we will denote by ψ_D, (or ψ_), if D is clear from the context) the corresponding quasisymmetric mapping of 𝕊^1. Therefore, the Teichmüller ray corresponding to D (or φ_D) can also be represented as follows: ↦ψ_=ψ_D,. To study the asymptotic behavior of the geodesic ↦ψ_ in T(𝔻) near ∂_∞T(𝔻), i.e., as →0, in <cit.> the authors used Liouville embedding of the Teichmüller space and a connection between Liouville measure and the classical modulus of families and curves. §.§.§ Limit sets of geodesic rays in T(𝔻) Let be a bounded measured lamination of the Hyperbolic plane 𝔻. We will denote by the projective class of . We say that a bounded projective measured lamination λ∈ PML_bdd(𝔻) is a limit point of a geodesic ray ↦ψ_D, if there is a sequence of positive numbers {_i}_i=1^∞ approaching 0 such that ℒ(ψ_D,_i)i→∞⟶, in the weak* topology. The limis set of the generalized Teichmüller ray ↦ψ_D, is the collection of all its limit points in PML_bdd(𝔻). 
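Returning to the vertical compression map V_ε of the previous subsection, it may help to record the short standard computation behind the quoted Beltrami differential and dilatation (it is not spelled out above). In complex notation,
\[
V_\varepsilon(z)=\frac{(1+\varepsilon)z+(1-\varepsilon)\bar z}{2},\qquad
\mu_{V_\varepsilon}=\frac{\partial_{\bar z}V_\varepsilon}{\partial_{z}V_\varepsilon}
=\frac{1-\varepsilon}{1+\varepsilon},\qquad
K_{V_\varepsilon}=\frac{1+|\mu_{V_\varepsilon}|}{1-|\mu_{V_\varepsilon}|}=\frac{1}{\varepsilon}.
\]
Pre-composing with the conformal map \phi_D multiplies the (constant) coefficient by \overline{\phi_D'}/\phi_D', while post-composing with the conformal map \phi_{D,\varepsilon}^{-1} does not change it; since \varphi_D=(\phi_D')^2\,dz^2, we have \overline{\phi_D'}/\phi_D'=|\varphi_D|/\varphi_D, and therefore
\[
\mu_{\psi_{D,\varepsilon}}=\frac{1-\varepsilon}{1+\varepsilon}\,\frac{|\varphi_D|}{\varphi_D},
\]
in agreement with the formula quoted above.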
To obtain a more explicit expression of the limit points of the rays in T(𝔻) we recall that for a given box [a,b]×[c,d]∈ G(𝔻) and >0 we have ℒ(ψ_)([a,b]×[c,d]) = (̃ψ̃_̃)̃^*(ℒ)([a,b]×[c,d]) = ℒ(ψ_([a,b]) ×ψ_([c,d])). Denoting x^:=ψ_(x) for every x∈𝕊^1 we can write the equality above as ℒ(ψ_)([a,b]×[c,d]) = ℒ([a^,b^] × [c^,d^]). Thus, to find the limit points of a generalized Teichmüller geodesic we need to find the asymptotic behavior of ℒ([a^,b^] × [c^,d^]) as approaches 0. To do this we recall the notion of modulus of families of curves. §.§.§ From Liouville measure to conformal modulus. Let be a family of curves in a domain Ø⊂ℂ. A non-negative Borel function ρ on Ø is called admissible for , if l_ρ(γ ):=∫_γρ (z)|dz|≥ 1 for every ∈. Conformal modulus of then is defined by = inf_ρ∬_Øρ^2 dxdy, where the infimum is over all admissible metrics ρ. Lemmas <ref>, <ref> and <ref> below, summarize some of the main properties of the modulus, which we will use below, see <cit.>. If _1 and _2 are curve families in ℂ, we will say that _1 overflows _2 and will write _1>_2 if every curve _1∈_1 contains some curve _2∈_2. Let _1,_2,… be curve families in ℂ. Then 1. Monotonicity: If _1⊂_2 then (_1)≤(_2). 2. Subadditivity: (⋃_i=1^∞_i) ≤∑_i=1^∞(_i). 3. Overflowing: If _1>_2 then _1 ≤_2. 4. Conformal invariance: If f:Ø→Ø' is a conformal map then f() = for any curve family in Ø, where f() denotes the family of curves {f() : ∈} in Ω'. The next two examples of curve families are fundamental and will be used repeatedly throughout the paper, see <cit.>. Let R=[0,l]×[0,w] and be the family of curves in R connecting the vertical sides, i.e., {0}×[0,w] and {l}×[0,w]. Then = w/l. Suppose A={z: 0<r_1<|z|<r_2}. Let _A and _A' be the families of curves in A separating and connecting the two boundary components of A, respectively. Then _A = logr_2/r_1/2π, _A' = 2π/logr_2/r_1. The following estimate for modulus will be used repeatedly below. Given two continua E and F in ℂ we denote (E,F) := dist(E,F)/min{ E, F}, and call (E,F) the relative distance between E and F in ℂ. For every pair of continua E,F⊂ℂ we have (E,F;ℂ) ≤π(1+1/2(E,F))^2. In particular if E_n and F_n are such that (E_n,F_n) is bounded away from 0 then (E_n,F_n;ℂ) is bounded above. The following result, connects Liouville measure and the moduli of curve fmilies and is central for our analysis. Let (a,b,c,d) be a quadruple of points on 𝕊^1 in the counterclockwise order, and let _[a,b]× [c,d] be the family of curves in 𝔻 connecting [a,b] to [c,d]. Then (_[a,b]× [c,d])-1/πℒ([a,b]× [c,d])-2/πlog 4→ 0 as (_[a,b]× [c,d])→∞, where ℒ is the Liouville measure. Note that by equation (<ref>) we have that (_[a,b]× [c,d])→∞ if and only if ℒ([a,b]× [c,d])→∞. Therefore it is enough to consider the asymptotic behavior of the modulus in order to find the asymptotic behavior of the Liouville measure. We next give a criterion from <cit.> which will help identify which elements of PLM_bdd(𝔻) can occur as limit points at infinity of generalized Teichmüller rays in T(𝔻). A set ℬ of boxes of geodesics is said to be dense among all boxes of geodesics if for any box [a,b]× [c,d]⊂ G(𝔻) there exists a sequence { [a_n,b_n]× [c_n,d_n]}_n in ℬ such that a_n→ a, b_n→ b, c_n→ c and d_n→ d as n→∞. Let D⊂ℂ be a domain such that ↦ψ_D, is a generalized Teichmüller ray. Suppose that there is a bounded geodesic current , a function m=m()→∞, as →0, and a sequence _i→0 such that the limit ([a,b]× [c,d])=lim_i→∞ V__i(ϕ_D(_[a,b]× [c,d]))/m(_i) exists for a box of geodesics [a,b]× [c,d]. 
Then is a limit point of the generalized Teichmüller ray ↦ψ_D, in the weak* topology if and only if (<ref>) holds for a dense set ℬ of boxes of geodesics in G(𝔻). We note that in <cit.> the sufficiency of the condition (<ref>) for a dense set of boxes was proved for the entire ray ↦ψ_D, rather than for a sequence of points ψ_D,_i, but the same proof gives the more general result above. On the other hand, the necessity of (<ref>) follows from (<ref>) by using (<ref>) and recalling the relevant definitions. In some cases it is easier to find the limiting geodesic measured lamination using a localization result which we formulate next. For η∈(0,π) we will denote by B_η() the η-box containing , which may be thought of as the η-neighborhood of . More precisely, if =_(a,b)∈ G(D) for some a = e^iα and b = e^iβ in 𝕊^1, we let B_η()=(e^i(α-η),e^i(α+η))× (e^i(β-η),e^i(β+η)). With this notation we have the following consequence of Corollary <ref>, which is an analogue of Lemma 6.6 in <cit.> and can be proved by selecting appropriately the family ℬ of boxes and using the basic properties of the modulus from Lemma <ref>. Let D⊂ℂ be a domain such that ↦ψ_D, is a generalized Teichmüller geodesic, and {_i}_i=0^∞ is a positive sequence which converges to 0 as i→∞. Suppose for every ∈ G(𝔻) there is a positive number η=η_γ>0 s.t. the limit m()=m_η(γ):=lim_i→∞ V__i(ϕ(_B_η()))/log1/_i exists for all 0<η<η_ and is independent of η. If there is a countable collection of geodesics {_j}_j=1^∞ s.t. m()>0 if and only if =_j for some j≥ 1, then τ__i,Di→∞⟶∑_j=1^∞ m(_j) _̣_j. §.§ Chimney domains and generalized Teichmüller rays We say that Ω⊂ℂ is a domain with a chimney if Ω contains a product subset C_(a,b) :=(a,b)× (0,∞) such that ∂ C_(a,b)∩{z : z>α}⊂∂Ω for some α≥ 0. If Ω is a domain with a chimney then by Theorem 6.1 in <cit.> the vertical compression map V_:(x,y)↦(x, y) is an extremal quasiconformal map for its boundary values on ∂Ω. Therefore, for any domain Ω with a chimney the path ↦ψ_Ω, defined in Section <ref> is a generalized Teichmüller geodesic. Hence, to conclude that ∈ PML_bdd(𝔻) is a limit point of the generalized Teichmüller ray ↦ψ_Ω, for a chimney domain Ω, by Corollary <ref> it is enough to verify that equality (<ref>) holds for a dense set of boxes of geodesics. § MAIN RESULT AND SOME EXAMPLES Let {a_n} and {b_n} be two sequences of positive numbers so that for 0<a_0<b_0=1, and for n≥1 we have a_n+1<b_n<a_n, and a_n,b_n→0 as n→∞. For n≥ 0 let I_n:=(a_n,b_n) and C_n:=(a_n,b_n)×[0,∞) be the “chimney” over the interval I_n. We define the domain W=W({a_n},{b_n})⊂ℂ as follows: W={ z: z >0, z <0 }⋃_n=0^∞ C_n, and observe that W is a simply connected domain with a sequence of chimneys which accumulate to the part of the imaginary axis in the upper half-plane, i.e., {z∈ℂ : z=0, z≥ 0}. Note that W is completely determined by the sequences {a_n}_n∈ℕ and {b_n}_n∈ℕ Let ϕ:𝔻→ W be the conformal map with the following properties lim_z→ 0ϕ^-1(z) = -1 lim_z→1ϕ^-1(z) =1 lim_ (z) → -∞ϕ^-1(z) = -i. By the discussion in Section <ref> the path ↦ψ_W, is a generalized Teichmüller geodesic. We will show that this geodesic does not converge to a unique point in ∂_∞T(𝔻). Moreover, we will describe the limit set of the geodesic completely and will show that it is in fact homeomorphic to a one dimensional simplex (an interval). To describe the limit set of {ψ_W,} we first introduce some notation. We denote by z_n the points on 𝕊^1 corresponding to the chimney C_n for n≥ 0. 
More specifically, denoting w=ϕ(z)∈ W, for n≥ 0 we let z_n:=lim_(w)→ +∞ (w)∈ I_nϕ^-1(w). We also define α_n = lim_w→ a_nϕ^-1(w), β_n = lim_w→ b_nϕ^-1(w). We will denote by _(x,y) the (unoriented) hyperbolic geodesic in 𝔻 with endpoints x,y∈𝕊^1. Moreover, for n≥ 0 we denote by _n^+ and _n^- the two geodesics starting at z_n and ending at α_n and β_n, respectively: _n^- := _(z_n, α_n), _n^+ := _(z_n,β_n). We also denote by g the gedesic from -1 to -i, i.e., g:=_(-1,-i). Below, as in Section <ref>, we denote by τ_,W the point in T(𝔻) corresponding to the vertical compression of W by >0, i.e., τ_,W=[ (1-/1+) |φ_W|/φ_W]. The following result states that the generalized Teichmüller geodesic ray ↦τ_,W, does not converge to any point in Thurston boundary, and, moreover, it describes the limit set of the ray in Thurston boundary ∂_∞ T(𝔻), i.e., all the possible points in PLM_bdd(𝔻) which occur as weak star limits of the geodesic ray ↦τ_,W. We recall that given a hyperbolic geodesic =_(x,y) in 𝔻 we will denote by δ_ the Dirac mass on . In particular, if B=(a,b)× (c,d)⊂ G(𝔻) is a box of geodesics, we will have that δ_(B) = 0, ∈ B, 1, Recall, that g=_(-1,-i) and _̣g is the Dirac mass on the geodesic connecting -1 and -i in 𝔻. We will also denote λ_W = ∑_n=0^∞ (δ__n^+ +δ__n^-). Thus, λ_W is a geodesic measured lamination supported on the geodesics connecting z_n to α_n and β_n for n∈ℕ. In what follows we use the notation A_n = ∏_k=0^n a_k, B_n=∏_k=0^n b_k=∏_k=1^n b_k. Let W be a domain defined as in (<ref>) so that there is constant 0<c<1 such that for all n∈ℕ we have max(b_n+1/a_n,a_n/b_n)<c<1 and max{√(a_n),√(b_n)}n→∞→0. Then the limit set Λ of the generalized Teichmüller ray ↦τ_,W in PML_bdd(𝔻) can be described as follows: Λ = { s δ_g + (2/3)λ_W : s∈[m,M]}, where m =1+lim inf_n→∞log (A_n-1/B_n)/log(1/a_n), M=1+lim sup_n→∞log (A_n-1/B_n)/log(1/b_n). Note that all the geodesic measured laminations s δ_g + λ_W have the same support and they only differ by the weight on the geodesic g. Letting x_2n=b_n and x_2n+1=a_n for n≥ 0, with x_0=1 we see that if x_n+1/x_n→0 then (<ref>) holds for every c>0. Therefore, for every ∈ (0,1) there is an N∈ℕ s.t. for n≥ N we have x_n+2/x_n< and hence x_N+2m≤^m x_N. Thus, if m is so large that n:=N+2m<3m and x_N^1/n<2 then x_n^1/n≤^m/N+2m x_N^1/n≤ 2^1/3. Hence √(x_n)→0 as n→∞, which implies (<ref>). By the definition of x_n above we have that (<ref>) s equivalent to (<ref>), which completes the proof. Since logA_n-1/B_n = logA_n/B_n +log1/a_n=logA_n-1/B_n-1 +log1/b_n we have that m=2-lim sup_n→∞log (B_n/A_n)/log(1/a_n), M=2-lim inf_n→∞log(B_n/A_n)/log (1/b_n+1). Therefore, since all the limit terms in (<ref>) and (<ref>) are non-negative, we have that 1≤ m<M≤ 2. In particular, every limit set Λ as in (<ref>) corresponds to some interval [m,M]⊂[1,2]. Below we will provide specific examples of sequences {a_n} and {b_n} showing that every interval [m,M]⊂(1,2) occurs as a limit set. It is also possible to have m=1 and M=2, but we leave such constructions to the reader. Next we provide some examples of geodesics rays which are either convergent or divergent at T_∞(𝔻). Suppose b_0=1, a_0=1/2, b_1=a_0/3, a_1=b_1/4, etc. Thus a_n=1/(2n+2)!, b_n =1/(2n+1)!. Then b_n/a_n=2(n+1), B_n/A_n = 2^n+1(n+1)! = (2n+2)!!, and therefore logB_n/A_n/log1/a_n = log (2n+2)!!/log (2n+2)! = log (2n+2)!!/log (2n+2)!! +log(2n+1)!!n→∞⟶1/2. Since loga_n/logb_n→1, we have that log(B_n/A_n)/log(1/b_n+1) → 1/2 as well. 
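The quantities 1+log(A_{n-1}/B_n)/log(1/a_n) and 1+log(A_{n-1}/B_n)/log(1/b_n), whose inferior and superior limits appear in the theorem below (they are denoted m_n and M_n in the modulus-bounds section), are easy to evaluate numerically. The following Python helper (ours, purely for checking the examples later in this section; the sample parameters a=0.5, p=2 are arbitrary) works with logarithms throughout to avoid underflow.

import math

def mn_Mn(log_inv_a, log_inv_b):
    # m_n = 1 + log(A_{n-1}/B_n)/log(1/a_n) and M_n = 1 + log(A_{n-1}/B_n)/log(1/b_n),
    # for n >= 1, given the lists log(1/a_n) and log(1/b_n) (with b_0 = 1).
    m, M = [], []
    logA = -log_inv_a[0]              # log A_0
    logB = -log_inv_b[0]              # log B_0 (= 0, since b_0 = 1)
    for n in range(1, len(log_inv_a)):
        logB += -log_inv_b[n]         # log B_n
        num = logA - logB             # log(A_{n-1}/B_n)
        m.append(1 + num / log_inv_a[n])
        M.append(1 + num / log_inv_b[n])
        logA += -log_inv_a[n]         # log A_n, for the next iteration
    return m, M

# Sample sequences a_n = a^(p^(2n)), b_n = a^(p^(2n-1)) with b_0 = 1 (as in the divergent
# example below); the expected limits are m = 1 + 1/(p+1) and M = 2 - 1/(p+1).
a, p, N = 0.5, 2.0, 12
La = [p ** (2 * n) * math.log(1 / a) for n in range(N)]
Lb = [0.0] + [p ** (2 * n - 1) * math.log(1 / a) for n in range(1, N)]
m, M = mn_Mn(La, Lb)
print(m[-1], 1 + 1 / (p + 1))
print(M[-1], 2 - 1 / (p + 1))

For these sample sequences the printed values approach 1+1/(p+1) and 2-1/(p+1), matching the limits computed in the divergent example of this section.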
Therefore m=M=3/2 and the generalized Teichmüller geodesic converges to 3/2δ_g + 2/3λ_W. Generalizing this example slightly, we obtain the following. For every s ∈(1, 2) there is a domain W as above so that τ_,W→0⟶ s δ_g + (2/3)λ_W . For all nonnegative integers x, we define the rising factorial (x)^n̅ by (x)^n̅ = x · (x+1) · ... ·(x+n-1). Fix integers p, q and r with 1 ≤ p, q < r. Suppose b_0=1. Let a_n =(nr+1)^p̅/(nr+1)^r̅ b_n, for n ≥ 0, b_n =(nr+1)^q̅/(nr+1)^r̅ a_n-1, for n ≥ 1. Thus a_n = ∏_k=1^n(kr+1)^p̅ (kr+1)^q̅/(kr+1)^r̅(kr+1)^r̅ and b_n = ∏_k=1^n((k-1)r+1)^p̅ (kr+1)^q̅/((k-1)r+1)^r̅(kr+1)^r̅. Then b_n/a_n=(nr+1)^r̅/(nr+1)^p̅, B_n/A_n=∏_k=1^n(kr+1)^r̅/(kr+1)^p̅, and therefore logB_n/A_n/log1/a_n = log∏_k=1^n(kr+1)^r̅/(kr+1)^p̅/log∏_k=1^n(kr+1)^r̅ (kr+1)^r̅/(kr+1)^p̅(kr+1)^q̅ = 1/1+log∏_k=1^n(kr+1)^r̅/(kr+1)^q̅/log∏_k=1^n(kr+1)^r̅/(kr+1)^p̅ n→∞⟶1/1+r-q/r-p=r-p/2r-p-q∈ (0, 1). Since loga_n/logb_n+1n→∞⟶ 1, we have that logB_n/A_n/log1/b_n+1n→∞⟶r-p/2r-p-q as well. Therefore m=M= 2-r-p/2r-p-q∈ (1, 2). By choosing appropriate p, q and r, we obtain all rational s ∈ (1, 2). Hence, the generalized Teichmüller geodesic converges to s δ_g + 2/3λ_W. Next we construct the first example of a divergent geodesic. Suppose p>1, b_0=1, a_0=a∈(0,1) and for n≥ 1 we have a_n=b_n^p and b_n+1=a_n^p. Then a_n=a^p^2n and b_n+1=a^p^2n+1. Hence, A_n=B_n^p = a^p^2n+2-1/p^2-1=Ca_n ^p^2/p^2-1, where C=a^-1/p^2-1. Therefore, m =2-lim_n→∞logB_n/A_n/log1/a_n =2- lim_n→∞(1/p-1)log A_n/log1/a_n =2- (1-1/p)p^2/p^2-1 = 2-p/p+1=1+1/p+1. Similarly M =2-lim_n→∞logB_n/A_n/log1/b_n+1 =2- lim_n→∞(1/p-1)log A_n/log1/a_n^p =2- (1-1/p)p/p^2-1 = 2-1/p+1. Therefore, if p>1 then the limit set Λ in this case is given by the formula (<ref>), where [m,M]= [1+1/p+1,2-1/p+1]⊂(1,2). In particular, Λ is not a point, and geodesic ray ↦τ_,W diverges at infinity. More generally, we have the following. For every nontrivial interval I=[1+α,1+β]⊂ (1,2) there is a domain W as above so that the corresponding generalized Teichmüller geodesic ↦τ_,W diverges and its limit set Λ in PML_bdd(𝔻) is given by Λ={ s δ_g + (2/3)λ_W : s∈ I}. Suppose p,q>1, b_0=1, a_0=a∈(0,1) and for n≥ 1 we have a_n=b_n^p and b_n+1=a_n^q. Then a_n=a^p^nq^n, b_n=a^p^nq^n-1, and therefore A_n =a^(pq)^n+1-1/pq-1≍ a_n^pq/pq-1, B_n =a^p((pq)^n-1)/pq-1≍ a_n^p/pq-1. Therefore, since A_n=B_n^p we have m =2-lim_n→∞logB_n/A_n/log1/a_n =2- lim_n→∞(1/p-1)log A_n/log1/a_n =2- (1-1/p)pq/pq-1 = 1+q-1/pq-1. Similarly M =2-lim_n→∞logB_n/A_n/log1/b_n+1 =2- lim_n→∞(1/p-1)log A_n/log1/a_n^q =2- (1-1/p)p/pq-1 = 2-p-1/pq-1=1+ p(q-1)/pq-1. Therefore, [m,M]=[1+q-1/pq-1,1+p(q-1)/pq-1]⊂ (1,2). Observe, that if 0<α<β<1 then letting p=β/α and q=1-α/1-β we obtain that m=1+α and M=1+β. Therefore a limit set of the generalized Teichmüller geodesic can be given by (<ref>) where [m,M] is any interval in (1,2). § PROOF OF THEOREM <REF> To prove that the right hand side is a subset of the left hand side in (<ref>) of Theorem <ref> it is enough to show that for every s∈[m,M] there is a sequence n_k=n_k(s) such that τ__n_k,Wk→∞⟶ s δ_g + (2/3)λ_W ∈Λ. For the opposite inclusion we need to show that every limit point of τ_,W as →0 is of the form s δ_g + (2/3)λ_W for some s∈[m,M]. By Lemma <ref>, (<ref>) would follow if we showed that for every ∈ G(𝔻) there is a constant η_>0 and a sequence {n_k} so that for η<η_γ we have the following: lim_k→∞ V__n_k(ϕ(_B_η()))/1/πlog1/_n_k= s, =g, 2/3, ∈⋃_n=0^∞{_n^+,_n^-} , 0, , where s∈[m,M]. For ≠ g equality (<ref>) follows from the proof of <cit.>. 
Thus, to prove (<ref>) it is enough to assume =g=_(-1,-i). To simplify the notation for every curve family in 𝔻 we denote ^=V_(ϕ()). In particular, if I,J are disjoint arcs on the unit circle ∂𝔻 and _I,J = (I,J;𝔻) is the curve family connecting I to J in 𝔻, we denote ^_I,J =^_I× J = V_(ϕ(_I,J)). The following lemma implies (<ref>) in the case of =g and therefore also proves Theorem <ref>. For every pair of disjoint arcs I and J on 𝕊^1 we define 𝕄_I,J():=^_I,J/1/πlog1/. For every s∈[m,M] there is a sequence _n∈[a_n+1,a_n] for which lim_n→∞𝕄_I,J(_n) = s, whenever I,J are disjoint arcs on 𝕊^1 s. t. g∈ I× J and 1∉ I∪ J. Note that the condition on the arcs I and J can be formulated more concretely as follow. Assuming that -1∈ I and -i∈ J we have J⊂{ (z) <0}, and 1∉ I. Before proceeding, we explain how Lemma <ref> implies equality (<ref>) for =g. Note that for every η>0 small enough we can find boxes of geodesics B'=I'× J' and B”=I”× J” satisfying the conditions of Lemma <ref>, such that B'⊂ B_η(g) ⊂ B”. Therefore, by monotonicity of modulus we have ^_B'≤^_B_η(g)≤^_B' and since the limit (<ref>) is the same for the boxes B' and B” it would have to be the same for B_η(g). We start by showing that the limit (<ref>) is independent of the arcs I and J. Let I_0 ={|z|=1}∩{(z)>0}, J_0=ϕ^-1({0}× (-∞,-1)). If I and J are disjoint arcs of 𝕊^1 s.t. g∈ I× J and 1∉ I∪ J then for a sequence _k→0 we have lim_k→∞𝕄_I,J(_k) =lim_k→∞𝕄_I_0,J_0(_k) whenever the limits above exist. It is enough to show that there is a constant C≥ 1 such that |^(I,J) - ^(I_0,J_0)|≤ C for →0. Let Q_1 be the first quadrant in the plane, i.e., Q_1 = {z∈ℂ : z>0, z>0}, and let Q_2,Q_3, Q_4 denote the other quadrants ordered counterclockwise. Define I_1= I∩ (Q_1∪ Q_2), I_2= I∩ Q_3, J_1=J∩ Q_3, J_2= J∩ Q_4. Since {1,-i}∩ I =∅ and {-1,1}∩ J =∅ we have that I=I_1∪ I_2 and J=J_1∪ J_2. Denoting as before ^(E,F) = V_(ϕ(E), ϕ(F); W), whenever E,F⊂𝕊^1 we have ^(I,J) ⊂^(I_1,J_1) ∪^(I_1,J_2) ∪^(I_2,J_1) ∪^(I_2,J_2). To simplify the notation, given an arc I⊂𝕊 we let Ĩ be the subset of ∂W so that ϕ^-1(Ĩ)=I, and Ĩ^ = V_(Ĩ), where abusing the notation slightly by ϕ^-1 we denote the extension of ϕ^-1 to W. Note that since ∂W is not locally connected at any point ia, with a>0, we have that if 0∈ I we cannot write Ĩ=ϕ(I) since ϕ does not extend continuously to 0∈∂𝔻. However if 0∉ I then Ĩ=ϕ(I). Note that ^(I_1,J_2) overflows ([0,1],J̃_2^;W). Since J̃_2^=ϕ(J_2) we have that the relative distance between [0,1] and J̃_2^ is bounded below and therefore by Lemma <ref> there is a c_1>0 such that ^(I_1,J_2) ≤([0,1],J̃_2^;W) ≤ c_1. Since ϕ(I_2) and ϕ(J_1) both belong to the imaginary axis, relative distance between the Ĩ_2^ and J̃_1^ does not change with ϵ and therefore is bounded below. Thus ^(I_2,J_1) ≤ c_2 for some c_2>0. Finally, Δ(I_2^,J_2^ϵ)→∞ since (I_2^)→0 as →0. Thus, the moduli of all the families on the right hand side of (<ref>) are bounded above as →0, except for ^(I_1,J_1), and as →0 we have ^(I_1,J_1) ≤^(I,J) ≤^(I_1,J_1) + C_1 Therefore, to prove (<ref>) it is enough to show that there is a constant C≥ 1 such that |^(I_1,J_1) - ^(I_0,J_0)|≤ C. Since I_1⊂ I_0 we have ^(I_1,J_1) ⊂^(I_0,J_1∩ J_0) ∪^(I_0,J_1∖ J_0)⊂^(I_0, J_0) ∪^(I_0,J_1∖ J_0) Observe that since J̃_1∖J̃_0 is either empty or an interval compactly contained in {0}× (-∞,0), we have that Δ(I_0,J̃_1^∖J̃_0^) is independent of and thus is bounded below away from zero. Hence, by Lemma <ref> we have ^(I_1,J_1) ≤^(I_0,J_0) +C. 
On the other hand, we also have the inclusions ^(I_0,J_0) ⊂^(I_0∩ I_1, J_0) ∪^(I_0∖ I_1, J_0) ⊂^( I_1, J_0∩ J_1)∪^( I_1, J_0∖ J_1) ∪^(I_0∖ I_1, J_0) ⊂^( I_1, J_1)∪^( I_1, J_0∖ J_1) ∪^(I_0∖ I_1, J_0). Just like above, since J̃_0∖J̃_1 is compactly contained in {0}× (-∞,0), we have that Δ(I_1,J̃_0^∖J̃_1^) is bounded away from 0 and hence the modulus of ^(I_1, J_0∖ J_1) is bounded above by a constant for all >0. Finally, let N be the smallest natural number n so that z_n∈ I_1. From the construction it follows that ^(I_0∖ I_1,J_0) overflows ([a_N,1], J̃_0^). Since Δ([a_N,1],J̃_0^) ≥a_N/1-a_N>0 for all >0, we again have that ^(I_0∖ I_1,J_0) is bounded above independently of . Therefore, using the inclusions above we obtain ^(I_0,J_0) ≤^(I_1,J_1) +C, for some constant C and all >0. This proves (<ref>), which in turn implies (<ref>) and Proposition <ref> Continuing with the proof of Lemma <ref> we observe that 𝕄_I_0,J_0() is a continuous function of in (0,1). This follows from Caratheodory's theorem on continuous extension of Riemann mapping at the points where ∂ W is locally connected. However, for readers convenience we give a direct proof. From the definition of 𝕄_I_0,J_0() it is enough to show that ^_I_0,J_0 is continuous. Note that ^_I_0,J_0 = (V_(ϕ(I_0)), V_(ϕ(J_0));W) = (ϕ(I_0),{0}× (-∞,-);W). Therefore if 0<ζ< then ^_I_0,J_0⊂^ζ_I_0,J_0 then ^_I_0,J_0≤^ζ_I_0,J_0≤^_I_0,J_0 + (^ζ_I_0,J_0∖^_I_0,J_0). Let τ=(-ζ)/2. Then ^ζ_I_0,J_0∖^_I_0,J_0 overflows the family of the curves connecting the boundary components of the annulus A(-iζ+/2, τ,-τ) in W. Therefore, as ζ→ or, equivalently, if τ→0, we have |^ζ_I_0,J_0-^_I_0,J_0| ≤π(log-τ/τ)^-1τ→0⟶0. Hence ^_I_0,J_0/(1/πlog1/) is continuous in . Since 𝕄_I_0,J_0() is continuous, every s which is bounded between the inferior and superior limits of 𝕄_I_0,J_0() as →0 is a subsequential limit of 𝕄_I_0,J_0(). By Proposition <ref> the same would also hold for 𝕄_I,J(), whenever I and J satisfy the conditions of Lemma <ref>. Therefore, Lemma <ref> follows from Lemma <ref> which states that the inferior and superior limits of 𝕄_I_0,J_0() as approaches 0 are equal to m and M, respectively. § MODULUS BOUNDS By Proposition <ref> a key step in proving Lemma <ref> and Theorem <ref> are bounds on 𝕄_I_0,J_0(), which are established in this section. To simplify the notation we let =ϕ((I_0,J_0;𝔻))=(ϕ(I_0),ϕ(J_0);W), and for every >0 we let ^=V_(). Since V_(ϕ(I_0))=ϕ(I_0) we have ^ =(ϕ(I_0),{0}×(-∞,-);W). Furthermore, given z∈ℂ, and 0<r<R<∞ we will denote A(z,r,R) = B(z,R)∖B(z,r). To simplify the exposition we denote m_n =1+log (A_n-1/B_n)/log(1/a_n)=2-log (B_n/A_n)/log(1/a_n) M_n =1+log (A_n-1/B_n)/log(1/b_n)=2-log (B_n-1/A_n-1)/log(1/b_n) Therefore, by Lemma <ref> and Proposition <ref>, Theorem <ref> would follow from the following result. With the notation above we have the following equalities lim inf_→0𝕄_I_0,J_0() =lim inf_n→∞m_n =m, lim sup_→0𝕄_I_0,J_0() =lim sup_n→∞M_n=M. The rest of this section is devoted to proving Lemma <ref>. §.§ Lower bounds for 𝕄_I_0,J_0() For n≥ 1 we have 𝕄_I_0,J_0() ≥ 2-logB_n/A_n/log1/, ∈ [b_n+1,a_n], 𝕄_I_0,J_0() ≥ 1+logA_n-1/B_n/log1/, ∈ [a_n,b_n]. In particular, for ∈[b_n+1,b_n] we have 𝕄_I_0,J_0()≥ m_n, where m_n is defined as in (<ref>). For k≥ 0 we denote G_k,1^ ={∈^ : ⊂ A(0, a_k,b_k)}, G_k,2^ ={∈^ : ⊂ A(0, b_k,a_k-1)}. Suppose ∈[b_n+1,a_n]. Denote G_n^ = {∈^ | ⊂ A(0,, a_n)}. 
Using the motonicity and overflowing properties of modulus, as well as the fact that the families G_n^ and G_k,i^ for k∈{0,…,n} and i∈{1,2}, are pairwise disjoint, we have Γ^ ≥ G_n^ + ∑_k=0^n ( G_k,1^ + G_k,2^) ≥2/πloga_n/+1/π∑_k=0^n logb_k/a_k+ 2/π∑_k=1^n loga_k-1/b_k = 2/πloga_n/+1/πlogB_n /A_n + 2/πlogA_n-1/B_n = 2/πlog1/ -1/πlogB_n/A_n. Dividing both sides by 1/πlog1/ gives the first line in (<ref>). For ∈[a_n,b_n] we similarly estimate Γ^ ≥1/πlogb_n/+2/πlogA_n-1/B_n + 1/πlogB_n-1/A_n-1 =1/πlog1/+1/πlogA_n-1/B_n Dividing by 1/πlog1/ gives the second line in (<ref>). Using (<ref>), the fact that 1/(log(1/)) is an increasing function, and the first line in (<ref>), we obtain (<ref>). §.§ Upper bounds for 𝕄_I_0,J_0() To obtain upper bounds for 𝕄_I_0,J_0() we need an auxilliary result. As before we let ^=V_((ϕ(I_0),ϕ(J_0);W))=(ϕ(I_0),{0}×(-∞,-);W) for ∈(0,1). Without loss of generality we may assume that (0) belongs to the imaginary axis. Thus ((0))<-. We will denote by _k,1^ and _k,2^ the following subfamilies of ^: _k,1^ ={∈^ : i(0) ∈ [a_k,b_k]}, k≥ 0, _k,2^ ={∈^ :i (0) ∈ [b_k,a_k-1]}, k≥ 1, and let _0,2^={∈^: i(0) ∈ [1,∞)}. Note that if b_k< then ^_k,1 is empty, and if a_k-1< then _k,2^ is empty. There is a constant 0<C<∞ depending on c such that ^_k,1 ≤1/πlogb_k/max{,a_k} + C, < b_k, k≥0, ^_k,2 ≤2/πloga_k-1/max{,b_k} + C, <a_k-1, k≥1, ^_0,2 ≤ 2. To obtain inequality (<ref>) observe that if ∈_0,2 then contains a subcurve connecting the interval [0,1] and {z∈ W : dist(z,[0,1])=1}∩{ (z)<0}. Every such subcurve of has length at least 1 and is contained in the region {z ∈ W : dist(z,[0,1])<1}∩{ z<0}. The latter region has area 1+π/4<2 and therefore by the overflowing property we obtain (<ref>). Proof of (<ref>). Fix >0 and an integer k≥ 1 so that < a_k-1. If ≥ b_k let _k' = {∈_k,2^ : ⊂ A(0,c,a_k-1/c)}. Note that every ∈_k' has a subcurve which separates the two boundary circles of the annulus A(0,c,a_k-1/c) and lies in the fourth quadrant Q_4. Hence, (_k') ≤2/πloga_k-1/c/c = 2/π[loga_k-1/ + 2log1/c]. Furthermore, if ∈_k,2^∖_k' then it connects the two boundary components of either the annulus A(0,a_k-1,a_k-1/c) or the annulus A(0,c,). Therefore, since ⊂{ z >0}, we have (_k,2^∖_k') ≤ 2 ·π/log (1/c). Since _k,2^≤_k' + (_k,2^∖_k'), it follows that (<ref>) holds with the constant C_1 = 2/πlog1/c^2+ 2π/log1/c for ∈[b_k,a_k-1]. If <b_k then the same argument as above holds by replacing with b_k. Therefore we obtains ^_k,2≤1/πloga_k-1/b_k +C_1 thus completing the proof of (<ref>). Proof of (<ref>). Fix >0 and k≥0 so that <b_k. If ≥ a_k let _k” = {∈^_k,1 : ⊂ A(0, c,b_k/c)}, and F_k =^_k,1∖_k”. Just like above F_k ≤ 2 ·π/log (1/c). To bound from above _k”, we let ζ_k=(a_k,0) and consider the following subfamilies of _k”: ℱ_k^1 ={∈_k” : ((1)) ≤}, _k^2 ={∈_k”: ((1)) ≥, ∩ B(ζ_k,(1-c))=∅}, _k^3 = _k”∖ (ℱ_k^1∪ℱ_k^2). To estimate _k^1, note that for every ∈_k^1 we have either (1)∈[ca_k,a_k]∪({a_k}×[0,]) or not. In the former case, since ((0))≤ - and ((1)) ≥ 0, we have that ()≥. Thus the modulus of the subfamily of curves in _k^1 with (1)∈[ca_k,a_k]∪({a_k}×[0,]) can be estimated from above by ∫_N_([ca_k,a_k]∪({a_k}×[0,])^-2 dxdy <8^2 ·^-2 = 8, where N_r(E) denotes the r-neghborhood of a set E⊂ℂ. On the other hand, if (1)∉[ca_k,a_k]∪({a_k}×[0, ]) then either intersects the horizontal line {(z)=b_k} or not. If ∩{(z)=b_k}≠∅ then contains a subcurve which connects the horizontal sides of the square [0,b_k]×[0,b_k]. 
If ∩{(z) =b_k}=∅ then contains a subcurve which connects the vertical sides of the rectangle [0,b_k]×[-c^-1b_k,b_k]. Therefore, using the overflowing property we obtain: _k^1 ≤ 8+1+ b_k-(-b_k/c)/b_k= 10+1/c. To estimate the modulus of _k^2, we first pick an arbitrary curve ∈_k^2. Note, that since b_k+1/a_k<c, b_k/a_k-1<c, and ⊂ A(0,c, c^-1b_k) we have that (1) ∈ ({a_k}∪{b_k})× (,∞). In particular, ((1))∈{a_k, b_k}. If ((1))=b_k then like above we have two cases: either ⊂[0,b_k]×[-b_k/c,b_k] or not. As above in the former case connects the vertical sides of [0,b_k]×[-b_k/c,b_k], and in the latter case it has a subcurve connecting the horizontal sides of the square [0,b_k]×[0,b_k]. Therefore the modulus of the family {∈_k^2 : ((1))=b_k} is bounded above by b_k/c+b_k/b_k+1 = c^-1+2. Next, suppose ∈_k^2 is such that ((1))=a_k. Recall, that ζ_k=(a_k,0)∈ℝ and consider the annulus R_k=A(ζ_k,(1-c),c^-1b_k). Since ∩ B(ζ_k, (1-c))=∅ it follows that intersects the vertical interval {a_k}× [-b_k/c,-(1-c)] and ∩{z>a_k} is contained in R_k. Therefore, _k^2 overflows the family of curves in the semi-annulus R_k∩{ z >a_k} which separate the two boundary components of R_k. The modulus of the latter family is 1/πlogb_k/c/(1-c) =1/πlogb_k/ +1/πlog1/c(1-c). Combining the two cases above we obtain _k^2 ≤1/πlogb_k/ +1/πlog(1/c(1-c))+1/c+2. Finally, _k^3 overflows the family of curves connecting the boundary components of the annulus A(ζ_k,(1-c),), which has modulus 2π/log1/1-c. Since _k,1^≤ (ℱ_k^1 ∪ℱ_k^2∪ℱ_k^3∪ F_k), combining the estimates above we obtain (<ref>) with C_2=12+2/c+2π/log1/c + 2π/log1/1-c +1/π[ log1/c +log1/1-c]. Therefore Lemma <ref> holds with C=max (C_1,C_2). We are now ready to estimate 𝕄_I_0,J_0() from above. For n≥ 1 we have 𝕄_I_0,J_0() ≤ 2-logB_n/A_n/log1/+o(1), ∈[b_n+1,a_n] 𝕄_I_0,J_0() ≤ 1+logA_n-1/B_n/log1/+o(1), ∈[a_n,b_n]. In particular, for ∈[a_n,a_n-1] we have 𝕄_I_0,J_0()≤ M_n+o(1), where M_n is defined in (<ref>). Suppose ∈[b_n+1,a_n], for some n≥ 1. Then _n +1,2^ = {∈^ : i(0) ∈ [,a_n]}. Therefore, by Lemma <ref> there is a constant C (not the same though as in estimates (<ref>) and (<ref>)) so that for n large enough we have ^ ≤^_n+1,2 + ∑_k=0^n( _k,1^ + _k,2^) ≤2/πloga_n/ + 1/π∑_k=0^nlogb_k/a_k + 2/π∑_k=1^nloga_k-1/b_k + C n ≤2/πlog1/ - 1/πlogB_n/A_n +Cn Since ∈[b_n+1,a_n] we obtain 𝕄_I_0,J_0() ≤ 2-log(B_n/A_n)/log(1/)+Cn/log(1/a_n)≤ 2-log(B_n/A_n)/log(1/)-C/log(√(a_n)) Since √(a_n)→0 as n→0, we obtain (<ref>) for ∈[b_n+1,a_n]. Suppose ∈[a_n,b_n]. Then _n,1^ = {∈^ : i (0) ∈ [,b_n]}. Just like above, by estimate (<ref>) of Lemma <ref> there is a (possibly new) constant C such that ^ ≤_n,1^ +_n, 2^+∑_k=0^n-1( _k,1^ + _k,2^) ≤1/πlogb_n/ + 1/π∑_k=0^n-1logb_k/a_k + 2/π∑_k=0^nloga_k-1/b_k+Cn =1/πlog1/ + 1/πlogB_n/A_n-1 + 2/πlogA_n-1/B_n +Cn =1/πlog1/ + 1/πlogA_n-1/B_n +Cn. Dividing both sides by 1/πlog(1/) and using the fact that ≤ b_n we obtain 𝕄_I_0,J_0() ≤ 1+log (A_n-1/B_n)/log (1/) + Cn/log(1/)≤ 1+log (A_n-1/B_n)/log (1/) - C/log(√(b_n)). Since √(b_n)→0 as n→0 we obtain (<ref>) for ∈[a_n,b_n]. Since the right hand side in the first inequality in (<ref>) is decreasing in we obtain that for ∈[b_n,a_n-1] we have 𝕄_I_0,J_0() ≤ M_n+o(1). Similarly, the right hand side in the second inequality in (<ref>) is increasing in and hence the maximum in the interval [a_n,b_n] is attained for =b_n. This gives (<ref>) in [a_n,b_n], thus completing the proof. §.§ Completing the proof of Lemma <ref> By Proposition <ref> we have that the left hand side in (<ref>) is greater than or equal to m. 
To show the opposite inequality, let n_k be such that lim_k→∞ m_n_k = m. Then letting _k=a_n_k in the second inequality in (<ref>) we obtain 𝕄_I_0,J_0(_k) ≤ 1+logA_n_k-1/B_n_k/log1/a_n_k+o(1)k→∞⟶ m, which proves (<ref>). Similarly, Proposition <ref> implies that lim sup_→0𝕄_I_0,J_0()≤lim sup_n→∞ (M_n+o(1))=M, and therefore we only need to show the opposite inequality. Let n_l be such that lim_l→∞ M_n_l = M. Then letting _l=b_n_l in the second inequality in (<ref>) we obtain 𝕄_I_0,J_0(_l) ≥ 1+logA_n_l-1/B_n_l/log1/b_n_ll→∞⟶ M. This completes the proof of inequality (<ref>) and, as explained in the beginning of this section, also proves Theorem <ref>. § LIMIT SETS OF HIGHER DIMENSIONS In this section, for every k≥ 2 we construct a domain with chimneys W_k such that the generalized Teichmüller geodesic ray ↦τ_W_k, has a limit set Λ in Thurston boundary T_∞(𝔻) which is a k simplex, i.e., is homeomorphic to [0,1]^k. We will give the detailed construction for k=2 and explain how the case of general k≥ 2 can be constructed in a similar way. §.§ Limit sets of dimension 2 Let p,q∈(1, ∞) and a∈(0,1). Let b_0=1, a_0=a, b_1=a^p. For n≥ 1, let a_n=b_n^p, and b_n=a_n-1^p. Therefore, for n≥ 1 we have a_n= a_n-1^p^2=a^p^2n, b_n=a^p^2n-1. Also, let d_0=2, c_0=3-a, d_1=3-a^q. For n≥ 1, let c_n=3-(3-d_n)^q, and d_n = 3-(3-c_n-1)^q. Therefore for n≥ 1 we have c_n=3-a^q^2n, d_n=3-a^q^2n-1. Below it will be convenient to use the following notation u_n=3-c_n=a^q^2n, v_n=3-d_n=a^q^2n-1. Moreover, we let U_n = ∏_i=0^n u_i, V_n =∏_i=0^n v_i. Define the chimneys C_n=(a_n,b_n)×ℝ and C_n'=(c_n,d_n)×ℝ and the domain W=W_p,q by W_p,q={z:z<0, 0<(z)<3}∪⋃ _i=1^∞C_n∪ C_n'. We denote by ψ: 𝔻→ W the conformal map such that lim_z→0ψ^-1 (z) =-1, lim_z→3ψ^-1 (z) = 1, lim_(z)→ -∞ψ^-1 (z) = -i. We denote by a_n', b'_n,c'_n and d'_n the preimages under ψ of a_n,b_n,c_n and d_n, respectively. Also, we define z_n and z'_n as follows: lim_ (z) → +∞ z∈ C_n ψ^-1 (z) = z_n, lim_ (z) → +∞ z∈ C'_nψ^-1 (z) = z'_n. As before we denote by _(x,y) the hyperbolic geodesic in 𝔻 with endpoints x,y∈𝕊^1. Moreover, for n≥ 0 we denote by _n^+ and _n^- the two geodesics starting at z_n and ending at a'_n and b'_n, respectively. Similarly, for n≤ 0 we denote by _n^+ and _n^- the geodesics starting at z'_n and ending at c'_n and d'_n, respectively. With this notation in hand we define the measured lamination λ_W by λ_W = ∑_n=0^∞ (δ__i^+ + δ__i^-) + ∑_n=0^∞ (δ__i^+ + δ__i^-) Let g_1=_(-1,-i) and g_2=_(1,-i). Theorem <ref> gives an example of a geodesic ray in T(𝔻) with the limit set in Thurston boundary being homeomorphic to a square. Suppose p,q>1 are such that log p/log q is an irrational number. If the domain W is defined as in (<ref>) then the generalized Teichmüller geodesic ↦τ_,W diverges and the accumulation set Λ in ∂_∞ T(𝔻) can be described as follows: Λ = {λ_s,t : (s,t)∈ [m_p,M_p]× [m_q,M_q] }, where λ_s,t= s δ_g_1 + t δ_g_2 + 2/3λ_W, m_ℓ=1+1/1+ℓ and M_ℓ=2-1/1+ℓ. Observe that all the measured laminations λ_s,t are supported on the same geodesic lamination, which we will denote by Σ. Thus, Σ = g_1∪ g_2 ⋃_n=0^∞_n^+ ∪_n^- ∪_n^+ ∪_n^-. We first show the inclusion ⊃ in (<ref>). For this we will show that for every (s,t)∈[m_p,M_p]×[m_q,M_q] there is a sequence _n_k→ 0 such that τ__n_k,Wk→∞⟶λ_s,t. To formulate (<ref>) in terms of the asymptotics of moduli of curve families we let I_0= ∂(W∩{ z >0}), J_0 = {0}×(-∞,-1), J_1={3}× (-∞,-1) and denote ^ =V_((I_0,J_0;W))=(I_0,{0}×(-∞,-);W) ^ =V_((I_0,J_1;W))=(I_0,{3}×(-∞,-);W). 
Just like in Section <ref> to prove (<ref>) it is enough to show that if Σ∩ I× J = ∅ then lim_k→∞𝕄_I,J(_n_k)=0, while if #(Σ∩ I× J) =1 then lim_k→∞𝕄_I,J(_n_k) = s, Σ∩ I× J = g_1, t, Σ∩ I× J = g_2, 2/3, . The third case in (<ref>) follows just like in <cit.>. Moreover, arguing like in Section <ref> we see that instead of general boxes I× J it is enough to consider the limits of 𝕄_I_0,J_0(_n_k) and 𝕄_I_0,J_1(_n_k). Therefore (<ref>) follows from the following lemma. For every (s,t)∈[m_p,M_p]×[m_q,M_q] there is a sequence _n_k→0 such that lim_k→∞𝕄_I_0,J_0(_n_k) = s, lim_k→∞𝕄_I_0,J_1(_n_k) = t. From Propositions <ref> and <ref> for n≥ 1 we have 0 ≤𝕄_I_0,J_0()-[2-logB_n/A_n/log1/]≤ o(1), ∈[b_n+1,a_n] 0 ≤𝕄_I_0,J_0()-[ 1+logA_n-1/B_n/log1/] ≤ o(1), ∈[a_n,b_n]. For α∈[1,p^2] we consider _n=b_n^α. From the definitions we have _n∈[a_n,b_n] for α∈[1,p] and _n∈[b_n+1,a_n] for α∈[p,p^2]. Therefore, lim_n→∞^_n/1/πlog1/_n = lim_n→∞1+log A_n-1/B_n/αlog1/b_n, α∈[1,p] lim_n→∞2-logB_n/A_n/αlog1/b_n, α∈[p,p^2]. From equations (<ref>) and (<ref>) in Example <ref>, it follows that lim_n→∞𝕄_I_0,J_0(b_n^α) = Φ_p(α), where Φ_p(α) = 1+p/α(p+1), α∈[1,p] 2-p^2/α(p+1), α∈[p,p^2]. Observe that Φ_p(α) decreases from Φ_p(1) = M_p to Φ_p(p) = m_p on α∈[1,p] and then increases from m_p to Φ_p(p^2)=Φ_p(1)=M_p on α∈[p,p^2]. Therefore, for every s∈[m_p,M_p] we can find α∈[1,p] (or α∈[p,p^2]) so that Φ_p(α)=s, and with this choice of α we have that 𝕄_I_0,J_0(b_n^α) → s as n→∞. In particular, for every subsequence _n_k of _n=b_n^α the first equation in (<ref>) holds, but not necessarily the second equation. Next, observe that for every n there is a unique m=m_n∈ℕ such that _n=b_n^α = a^α p^2n-1∈ (v_m+1,v_m]=(a^q^2m+1,a^q^2m-1]. Therefore, ∃ β_n∈[0,1) such that _n=v_m^q^2β_n=a^q^2m-1+2β_n, or p^2n-1+log_p α = q^2m-1+2β_n. Solving for β_n we obtain β_n = (2n -1+ logα/log p) log p/2log q +1/2 - m= θ n + σ -m, where θ: = log_q p and σ := 1/2(1+ log_q α/p). Hence β_n ≡θ n + σ (mod 1) ≡ T^∘ n_θ(σ) is the n-th iterate of σ under the map T_θ:x↦ x+θ ( mod 1). Since θ∈ℝ∖ℚ the map T_θ is an irrational rotation, and every orbit {T_θ^∘ n(σ)}_n∈ℕ is dense in [0,1], see for instance <cit.>. Therefore for every β∈[0,1] there is a sequence n_k such that β_n_kk→∞⟶β. Moreover, β_n_k can be chosen to be a monotone sequence. We claim that lim_k→∞𝕄_I_0,J_1(_n_k)=Φ_q(q^2β). As in the proofs of Lemma <ref>, Propositions <ref> and <ref>, we have the following estimates: 0 ≤𝕄_I_0,J_1()-[2-logV_n/U_n/log1/]≤ o(1), ∈[v_n+1,u_n] 0 ≤𝕄_I_0,J_1()-[ 1+logU_n-1/V_n/log1/]≤ o(1), ∈[u_n, v_n]. Therefore, since _n_k=(a^q^2m-1)^q^2β_n_k=(v_m)^q^2β_n_k we have 0 ≤𝕄_I_0,J_1(_n_k)- [2-logV_m/U_m/q^2β_n_klog1/v_m]≤ o(1), _n_k∈[v_m^q^2,v_m^q] 0 ≤𝕄_I_0,J_1(_n_k) -[ 1+logU_m-1/V_m/q^2β_n_klog1/v_m]≤ o(1), _n_k∈[v_m^q, v_m]. Note that _n_k=(v_m)^q^2β_n_k∈[v_m^q,v_m] if and only if β_n_k∈[0,1/2]. Therefore, for any β∈[0,1/2] choosing β_n_k∈[0,1/2] so that β_n_k→β we obtain lim_k→∞𝕄_I_0,J_1(_n_k) =lim_k→∞ 1+logU_m-1/V_m/q^2β_n_klog1/v_m =1+1/q^2βlim_m→∞(1/q-1)log V_m/log 1/v_m. =1+1/q^2β(1/q-1) lim_m→∞q^2m+2-1/(q^2-1)q·1/-q^2m-1 =1+1/q^2βq/q+1=Φ_q(q^2β) . Similarly, _n_k=(v_m)^q^2β_n_k∈[v_m^q^2,v_m^q] if and only if β_n_k∈[1/2,1]. Therefore, for any β∈[1/2,1] choosing β_n_k∈[1/2,1] so that β_n_k→β we obtain lim_k→∞𝕄_I_0,J_1(_n_k) =lim_k→∞ 2-logV_m/U_m/q^2β_n_klog1/v_m = 2-1/q^2βq^2/q+1=Φ_q(q^2β), which proves (<ref>). Note that Φ_q(q^2β) continously decreases from Φ_q(1)=M_q=1+q/q+1 to Φ_q(q)=m_q=1+1/q+1 as β∈[0,1/2]. 
Similarly, Φ_q(q^2β) increases from m_q to M_q in β∈[1/2,1]. Therefore, for every t∈[m_q,M_q] there is a β∈[0,1/2] (or β∈[1/2,1]) such that Φ_q(q^2β)=t. For this choice of β choosing n_k so that (<ref>) holds gives the second line in (<ref>), thus proving Lemma <ref>. To complete the proof of Theorem <ref> we need to show that if λ is a geodesic measured lamination such that τ__n,Wn→∞⟶λ for some sequence _n→0 then λ=λ_s,t with (s,t)∈[m_p,M_p]×[m_q,M_q]. Recall that just like in <cit.> we have that 𝕄_I,J()→2/3 as →0 whenever Σ∩ (I× J) = for some ∈Σ∖(g_1∪ g_2). Hence it follows from Corollary <ref>, Lemma <ref> and Lemma <ref> that to prove Theorem <ref> it is enough to show that the following inequalities hold: lim sup_→0𝕄_I_0,J_i()≤ M_p, i=0, M_q, i=1, lim inf_→0𝕄_I_0,J_i()≥ m_p, i=0, m_q, i=1. However, these follow from inequalities (<ref>) and (<ref>) applied to the Example <ref>. §.§ Limit sets of dimension k∈[2,∞] Let 2≤ k <∞. For all p,q∈(1,∞) let W_p,q⊂{0<(z)<3} denote the domain constructed in Section <ref> above. Given a domain W⊂ℂ and z∈ℂ we denote W+z={w+z | w∈ W}, i.e. the Minkowski sum of W and z, or equivalently z translate of W. Suppose p_j∈(1,∞) for j∈{1,…,k} and a∈(0,1). For n≥ 1, let a_j,n = a^p_j^2n, b_j,n=a_j,n^1/p = a^p_j^2n-1, c_j,n=3-a^p_j^2n, d_j,n=3-a^p_j^2n-1. Also, let C_j,n=(a_j,n,b_j,n)× (-∞,∞) and C'_j,n=(d_j,n,c_j,n)× (-∞,∞). We define the domains W_j:=W_p_j, p_j +3j, where W_p_j, p_j=⋃_n=0^∞ (C_j,n∪ C_j,n') ∩{ z | 0<(z)<3, (z)<0} Note that the domains W_j are pairwise disjoint, but W_j∩W_j+1 = { z∈ℂ | z =3j}. Define the domain 𝒲_k as follows: 𝒲_k = ⋃_j=1^k W_j ∪ ((0,3k)×(-1,0)). Thus 𝒲_k is obtained from ∪_j=1^k W_j by “adding” the vertical intervals {3j}×(-1,0) which can be thought of as “channels” connecting W_j and W_j+1 in 𝒲_k. Let ψ_k:𝔻→𝒲_k be a conformal map such that lim_z→0ψ_k^-1 (z) =-1, lim_z→3kψ_k^-1 (z) = 1, lim_(z)→ -∞ψ_k^-1 (z) = -i. We define the geodesics _n,j^± and _n,j^± for j∈{1,…,k} and n≥0 analogously to Section <ref>, i.e., for every chimney in 𝒲_k∩ W_j we have two corresponding geodesics in the unit disk with a common endpoint. We let λ_𝒲_k = ∑_j=1^k (∑_n=0^∞ (δ__n,j^+ + δ__n,j^-) + ∑_n=0^∞ (δ__n,j^+ + δ__n,j^-)). Letting ξ_j:= lim_w→3jψ_k^-1(w) for j∈{0,…,k}, and ζ_j = lim_ (z) →∞ z∈ W_j ψ_k^-1(w), for j∈{1,…, k} we also denote by g_j and g_j' the hyperbolic geodesics in 𝔻 connecting ζ_j to ξ_j-1 and ξ_j, respectively. Recall that real numbers θ_1,…,θ_k are rationally independent if the equation r_1θ_1+…+r_kθ_k=0 with integer coefficients r_j holds only if r_1=…=r_k=0. Suppose the collection of numbers log p_1,…, log p_k is rationally independent. If the domain 𝒲_k is defined as in (<ref>) then the generalized Teichmüller geodesic ↦τ_,𝒲_k diverges and the accumulation set Λ in ∂_∞ T(𝔻) can be described as follows: Λ = {λ_s_1,…,s_k : (s_1,…,s_k)∈∏_j=1^k [m_p_j,M_p_j]}, where λ_s_1,…,s_k= ∑_j=1^k s_j(δ_g_j + δ_g_j') + 2/3λ_𝒲_k. The proof of Theorem <ref> is a generalization of the proof of Theorem <ref>. The key difference is that we use the classical Kronecker approximation theorem about the density of the sequence ({θ_1n},…,{θ_k n}) in the k-dimensional unit cube when θ_1,…,θ_k are rationally independent, instead of the the fact that every orbit {T^∘ n_θ(x)} of an irrational rotation is dense in 𝕊^1. For this reason we provide a detailed sketch of the proof Theorem <ref>, and leave the verification of the few missing details to the reader. 
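To make the number-theoretic mechanism behind these constructions concrete, the following short numerical sketch (Python; the exponents p=2, q=3 and the starting phase are hypothetical choices with log p/log q irrational, not values taken from the text) illustrates the two facts used repeatedly above: the orbit β_n = θn + σ (mod 1) of an irrational rotation equidistributes in [0,1], and the piecewise limit function Φ_p(α) sweeps out exactly the interval [m_p, M_p] = [1 + 1/(1+p), 2 − 1/(1+p)].

```python
import numpy as np

# (1) For theta = log_q(p) irrational, beta_n = theta*n + sigma (mod 1) is dense in [0,1].
p, q = 2.0, 3.0                      # hypothetical exponents with log p / log q irrational
theta = np.log(p) / np.log(q)
sigma = 0.37                         # arbitrary starting phase
n = np.arange(1, 20001)
beta = (theta * n + sigma) % 1.0
hist, _ = np.histogram(beta, bins=20, range=(0.0, 1.0))
print("min/max bin occupancy:", hist.min(), hist.max())   # nearly equal bins (equidistribution)

# (2) Phi_p(alpha) on [1, p^2] attains its extrema m_p and M_p.
def Phi(p, alpha):
    """Piecewise limit function Phi_p(alpha) defined in the text."""
    return np.where(alpha <= p,
                    1.0 + p / (alpha * (p + 1.0)),
                    2.0 - p**2 / (alpha * (p + 1.0)))

alpha = np.linspace(1.0, p**2, 301)  # grid containing alpha = 1, p, p^2
vals = Phi(p, alpha)
m_p, M_p = 1.0 + 1.0/(1.0 + p), 2.0 - 1.0/(1.0 + p)
print("range of Phi_p      :", vals.min(), vals.max())
print("expected [m_p, M_p] :", m_p, M_p)
```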
The inclusion ⊂ in (<ref>) follows from the inequalities m_j≤lim inf_→0M_I_j,J_j() and lim inf_→0M_I_j,J_j()≤ M_j just like in Theorem <ref>. Let Σ_k denote the support of a measured lamination λ_s_1,…,s_k, which is independent of the choice of the weights s_j. The proof of the inclusion ⊃ in (<ref>), just like for Theorem <ref>, comes down to showing that for every choice of (s_1,…,s_k)∈∏_j=1^k [m_p_j,M_p_j] there exists a sequence _n_k so that if the only geodesics from Σ_k contained in the box I× J are either g_j or g_j' for some j∈{1,…,k} then lim_n→∞𝕄_I,J(_n_k) = s_j. In the cases when Σ_k ∩ (I× J)=g∉{g_j, g_j'} and Σ_k ∩ (I× J)=∅. In these cases we have that lim_n→∞𝕄_I,J(_n) is equal to either 2/3 or 0, respectively, which is proved just like in Theorem <ref> or rather in <cit.>. Let I_j=∂𝒲_k ∩{3j< z < 3j+1}∩{ z>0}, for j∈{0,…, k-1}, J_j={3j}× (-∞,-1), for j∈{0,…, k}. Then, like in the proof of Theorem <ref> it is enough to show that for j∈{0,…,k-1} we have lim_n→∞𝕄_I_j,J_j(_n) = lim_n→∞𝕄_I_j,J_j+1(_n)= s_j. As in the proof of Theorem <ref> we choose α_1 so that Φ_p_1(α_1)=s_1, let _n=b_1,n^α_1, and observe that for every j∈{1,…,k} and n≥ 1 there is a unique m_j=m_j,n∈ℕ such that _n=b_1,n^α_1 = a^α_1 p_j^2n-1∈ (a^p_j^2m_j+1,a^p_j^2m_j-1]. Therefore, ∃ β_j,n∈[0,1) such that _n=a^q^2m_j-1+2β_j,n, or p_1^2n-1+log_p_1α_1 = q^2m_j-1+2β_j,n. Solving for β_j,n we obtain β_j,n = (2n -1+ logα/log p_1) log p_1/2log p_j +1/2 - m_j= θ_j n + σ_j -m_j, where θ_j: = log_p_j p_1 and σ_j:= 1/2(1+ log_p_jα_1/p_1). Hence, for j∈{1,…,k} we have β_j,n≡θ_j n + σ_j (mod 1). Next we use the classical theorem of Kronecker, see <cit.>. If θ_1, θ_2,…,θ_k are rationally independent, x_1,…,x_k are arbitrary, and N and are positive, then there are integers n>N, and p_1,…,p_k such that |nθ_j - p_j-x_j| <, for all j∈{1,…,k}. Therefore, for every choice of β_1,…,β_k there are sequences of natural numbers n_k and p_j,k such that |n_kθ_j - p_j,k-σ_j-β_j| <1/k for all j∈{1,…,k}. Hence, for this choice of n_k we have that β_j,n_k→β_j as k→∞ for all j∈{1,…,k}. Moreover, arguing as in the proof of Lemma <ref> (the part after equation (<ref>)), we can see that for an arbitrrary choice of s_j we can choose the sequence n_k so that for every j∈{1,…,k} we have lim_k→∞𝕄_I_j,J_j(_n_k)=lim_k→∞Φ_p_j(p_j^2β_j,n_k)= Φ_p_j(p_j^2β_j)=s_j. This proves the inclusion ⊃ in (<ref>) and completes the proof of the theorem. BLMR2023 [Ahl06]Ahlfors:QClectures Lars V. Ahlfors, Lectures on quasiconformal mappings. University Lecture Series, 38. American Mathematical Society, Providence, RI, 2006. [Bon88]Bon1 F. Bonahon, The geometry of Teichmüller space via geodesic currents, Invent. Math. 92 (1988), no. 1, 139 – 162. [BŠ21]BonahonSaric F. Bonahon and D. Šarić. A Thurston boundary for infinite-dimensional Teichmüller spaces. Math. Ann. 380 (2021), no. 3-4, 1119–1167. [BS02]BrinStuck M. Brin and G. Stuck Introduction to dynamical systems. Cambridge University Press, Cambridge, 2002. xii+240 pp. [BLMR1]BLMR-wp J. Brock, Ch. Leininger, B. Modami, and K. Rafi. Limit sets of Weil-Petersson geodesics. Int. Math. Res. Not. IMRN 2019, no. 24, 7604–7658. [BLMR2]BLMR-wp-non-minimal J. Brock, Ch. Leininger, B. Modami, and K. Rafi. Limit sets of Weil-Petersson geodesics with nonminimal ending laminations. J. Topol. Anal. 12 (2020), no. 1, 1–28. [BLMR3]BLMR-t2 J. Brock, Ch. Leininger, B. Modami, and K. Rafi. Limit sets of Teichmüller geodesics with minimal nonuniquely ergodic vertical foliation, II. J. Reine Angew. Math. 758 (2020), 1–66. [CMW19]CMW J. 
Chaika, H. Masur, and M. Wolf. Limits in 𝒫ℳℱ of Teichmüller geodesics. J. Reine Angew. Math. 747 (2019), 1–44. [FM07]FM A. Fletcher and V. Markovic, Quasiconformal Maps and Teichmüller Theory Oxford Graduate Texts in Mathematics, 11. Oxford University Press, Oxford, 2007. viii+189 pp. [GL00]GLF. Gardiner and N. Lakic, Quasiconformal Teichmüller theory. Mathematical Surveys and Monographs, 76. American Mathematical Society, Providence, RI, 2000. xx+372 pp. [GM05]GM J. Garnett and D. Marshall, Harmonic measure. New Mathematical Monographs, 2. Cambridge University Press, Cambridge, 2005. xvi+571 pp. [HŠ16]HakSar-vertical H. Hakobyan and D. Šarić. Vertical limits of graph domains. Proc. Amer. Math. Soc. 144 (2016), no. 3, 1223–1234. [HŠ18]HakSar-limits H. Hakobyan and D. Šarić. Limits of Teichmüller geodesics in the universal Teichmüller space. Proc. Lond. Math. Soc. (3) 116 (2018), no. 6, 1599–1628. [HŠ21]HakSar-visual H. Hakobyan and D. Šarić. A Thurston boundary and visual sphere of the universal Teichmüller space. J. Anal. Math. 143 (2021), no. 2, 681–721. [HW79]HardyWright G. Hardy and E. Wright. An introduction to the theory of numbers. Fifth edition. The Clarendon Press, Oxford University Press, New York, 1979. xvi+426 pp. [LV73]LV O. Lehto and K. I. Virtanen, Quasiconformal mappings in the plane., Second edition. Translated from the German by K. W. Lucas. Die Grundlehren der mathematischen Wissenschaften, Band 126. Springer-Verlag, New York-Heidelberg, 1973. [LLR18]LLR Ch. Leininger, A. Lenzhen, and K. Rafi. Limit sets of Teichmüller geodesics with minimal non-uniquely ergodic vertical foliation. J. Reine Angew. Math. 737 (2018), 1–32. [Len08]Lenzhen A. Lenzhen, Teichmüller geodesics that do not have a limit in 𝒫ℳℱ. Geom. Topol. 12 (2008), no. 1, 177–197. [LMR18]LMR A. Lenzhen, B. Modami, and K. Rafi. Teichmüller geodesics with d-dimensional limit sets. J. Mod. Dyn. 12 (2018), 261–283. [MŠ12]MiSar H. Miyachi and D. Šarić, Uniform weak^* topology and earthquakes in the hyperbolic plane. Proc. Lond. Math. Soc. (3) 105 (2012), no. 6, 1123–1148. [Šar05]Saric-currents D. Šarić. Geodesic currents and Teichmüller space. Topology 44 (2005), no. 1, 99–130. [Šar15]Sa3 D. Šarić, A Thurston boundary for infinite-dimensional Teichmüller spaces: geodesic currents, Preprint, arXiv:1505.01099. [Str64]Str64 K.Strebel, Zur Frage der Eindeutigkeit extremaler quasikonformer Abbildungen des Einheitskreiser. II, Comment. Math. Helv. 39(1964), 77–89. [Väi71]Vaisala:lectures J. Väisälä, Lectures on n-dimensional quasiconformal mappings. Lecture Notes in Mathematics, 229. Springer-Verlag, Berlin-New York, 1971.
http://arxiv.org/abs/2307.03384v2
20230707044612
Modular flavor symmetric models
[ "Tatsuo Kobayashi", "Morimitsu Tanimoto" ]
hep-ph
[ "hep-ph" ]
Modular flavor symmetric models [The contribution to a special book dedicated to the memory of Professor Harald Fritzsch] Tatsuo Kobayashi^1 and Morimitsu Tanimoto^2 ^1Department of Physics, Hokkaido University, Sapporo 060-0810, Japan ^2Department of Physics, Niigata University, Ikarashi 2, Niigata 950-2181, Japan Abstract We review modular flavor symmetric models of quarks and leptons, focusing on our own works. We present flavor models of quarks and leptons based on finite modular groups and discuss their phenomenological implications. The modular flavor symmetry leads to interesting phenomena at the fixed points of the modulus. As a representative example, we show the successful texture structure at the fixed point τ = ω. We also study CP violation, which occurs through the modulus stabilization. Finally, we study the SMEFT with modular flavor symmetry by including higher dimensional operators. § INTRODUCTION One of the important mysteries in particle physics is the origin of the flavor structure, i.e., the fermion mass hierarchies, their mixing angles, and CP violation. Various studies have been carried out to understand their origin. One of the traditional approaches is texture zeros, proposed by Weinberg and Fritzsch, in which zero entries are placed in the fermion mass matrices <cit.>. Indeed, the Fritzsch Ansatz <cit.> gave significant predictive power for the flavor mixing, although the origin of the zeros is unclear. This approach leads to the texture-zero analysis, where some elements of the mass matrices are required to vanish in order to reduce the number of free parameters. Several well-known works on texture zeros exist <cit.>. Such zeros can be related to certain symmetries. Flavor symmetries are an attractive approach to the origin of the fermion mass hierarchies and mixing angles. Froggatt and Nielsen employed a U(1) symmetry to explain the observed quark masses and mixing angles <cit.>. Furthermore, the S_3 symmetry was used for the quark mass matrices <cit.>. It was also discussed in order to understand the large mixing angle <cit.> observed in the oscillation of atmospheric neutrinos <cit.>. Over the last twenty years, non-Abelian discrete flavor symmetries have been developed, motivated by the precise measurements of the lepton flavor mixing angles <cit.>. The standard model (SM) is the low-energy effective field theory of an underlying theory, and in this sense it is referred to as the SM effective field theory (SMEFT) <cit.>. The SMEFT includes many higher dimensional operators, which contribute to flavor changing processes and to the muon (g-2). Flavor symmetries are useful not only to derive realistic fermion masses and mixing angles, but also to control higher dimensional operators in the SMEFT. Indeed, the U(3)^5 and U(2)^5 symmetries control the SMEFT operators <cit.>.
The U(3)^5 symmetry <cit.> allows us to apply the Minimal Flavor Violation (MFV) hypothesis <cit.>, which is the most restrictive hypothesis consistent with the SMEFT. In the U(2)^5 symmetry <cit.>, it retains most of the MFV virtues and allows us to have a much richer structure as far as the dynamics of third family is concerned. Thus, flavor symmetries are useful to connect between the low-energy physics and high-energy physics such as superstring theory. Superstring theory is a promising candidate for unified theory of all the interactions including gravity and matters such as quarks and leptons, and Higgs modes. Superstring theory must have six-dimensional (6D) compact space in addition to four-dimensional (4D) space times. Geometrical symmetries of 6D compact space control 4D effective field theory. For example, in certain compactifications there appears non-Abelian discrete symmetries such as D_4 and Δ(54) <cit.>. The modular symmetry is a geometrical symmetry of T^2 and T^2/Z_2, and corresponds to change of their cycle basis. Matter modes transform non-trivially under the modular symmetry. (See for hetetrotic string theory on orbifolds Refs. <cit.> and magnetized D-brane models Refs. <cit.> . [Calai-Yau manifolds have larger symplectic modular symmetries of many moduli <cit.> .]) That is, the modular symmetry is a flavor symmetry. Indeed, finite modular groups include S_3, A_4, S_4, A_5, which have been used in 4D flavor models so far, while Δ(98) and Δ(384) are also included as subgroups. The well-known finite groups S_3, A_4, S_4 and A_5 are isomorphic to the finite modular groups Γ_N for N=2,3,4,5, respectively<cit.>. The lepton mass matrices have been presented in terms of Γ_3≃ A_4 modular forms <cit.>. Modular forms have also been obtained for Γ_2≃ S_3 <cit.>, Γ_4 ≃ S_4 <cit.> and Γ_5 ≃ A_5 <cit.>, respectively. By using them, the viable lepton mass matrices have been obtained for Γ_4 ≃ S_4 <cit.>, and then Γ_5 ≃ A_5 <cit.>. The 4D CP symmetry can be embedded into a proper Lorentz symmetry in higher dimensional theory such as superstring theory <cit.>. From this viewpoint, CP violation in 4D effective field theory would originate from the compactification, that is, the moduli stabilization. (See for early studies on the CP violation through the moduli stabilization Refs. <cit.> .) Recently, the spontaneous breaking of the CP symmetry was studied through the moduli stabilization due to 3-form fluxes Refs. <cit.>. In modular flavor symmetric models, the CP symmetry is combined with the modular symmetry as well as other symmetries, and is enlarged <cit.> [See for the CP symmetry in the Calabi-Yau compactification Refs. <cit.> .]. The CP-invariant vacua and CP-preserving modulus values increase by the modular symmetry. It is important to study the CP violation in such models with the enlarged symmetry <cit.>. Higher dimensional operators can be computed within the framework of superstring theory. Allowed couplings are controlled by stringy symmetries and n-point couplings are written by products of 3-point couplings. The modular flavor symmetry also control these higher dimensional operators <cit.>. In addition to the above aspects, the modular flavor symmetries were recently extended to models for dark matter <cit.>, soft supersymmetry breaking terms <cit.>, matter parity <cit.>, the strong CP problem<cit.>, etc. The paper is organized as follows. In section 2, we give a brief review on modular symmetry. In section 3, we study modular flavor symmetric models. 
As an illustrating example, we explain A_4 modular symmetric models. In section 4, we study texture structure at the fixed point τ = ω. In section 5, we study CP violation in modular symmetric models. In section 6, we study SMEFT with modular flavor symmetry. Section 7 is devoted to conclusion. In Appendix <ref>, we review modular forms of A_4. In Appendix <ref>, the tensor product decomposition is given in the A_4 group. § MODULAR SYMMETRY The modular symmetry is a geometrical symmetry of the two-dimensional torus, T^2. The two-dimensional torus is constructed as division of the two-dimensional Euclidean space R^2 by a lattice Λ, T^2=R^2/Λ. Instead of R^2, one can use the one-dimensional complex plane. As shown in Fig. <ref>, the lattice is spanned by two basis vectors, e_1 and e_2 as m_1e_1+m_2e_2, where m_1 and m_2 are integer. Their ratio, τ =e_2/e_1, in the complex plane, represents the shape of T^2, and the parameter τ is called the modulus. The same lattice can be spanned by other basis vectors such as ( [ e'_2; e'_1 ]) =( [ a b; c d ]) ( [ e_2; e_1 ]), where a,b,c,d are integer satisfying ad-bc=1. That is the SL(2,Z). Under the above transformation, the modulus τ transforms as follows, τ⟶τ'=γτ = aτ +b/cτ +d . That is the modular symmetry <cit.>. For the element -e in SL(2,Z), -e=( [ -1 0; 0 -1 ]), the modulus τ is invariant, τ⟶τ'=(-τ)/(-1)=τ. Thus, the modular group is Γ̅=PSL(2,Z)=SL(2,Z)/{e,-e }. It is sometimes called the inhomogeneous modular group. On the other hand, the group, Γ=SL(2,Z) is called the homogeneous modular group or the full modular group. The generators of Γ≃ SL(2,Z) are written by S and T, S=( [ 0 1; -1 0 ]), T=( [ 1 1; 0 1 ]). They satisfy the following algebraic relations, S^4=(ST)^3=e. Note that S^2=-e. On Γ̅=PSL(2,Z), they satisfy S^2=(ST)^3=e. These relations are also confirmed explicitly by the following transformations: S: τ⟶ -1/τ, T: τ⟶τ +1 , which are shown on the lattice Λ in Figs. <ref> and <ref>. In addition to the above algebraic relations of Γ̅=PSL(2,Z), we can require T^N=e, i.e. S^2=(ST)^3=T^N=e. They can correspond to finite groups such as S_3, A_4, S_4, A_5 for N=2,3,4,5. In practice, we define the principal congruence subgroup Γ(N) as Γ(N) ={( [ a b; c d ]) ∈Γ | . ( [ a b; c d ]) = ( [ 1 0; 0 1 ])  ( mod N) }. It includes T^N, but not S or T. Then, we define the quotient Γ_N=Γ̅/Γ̅(N), where the above algebraic relations are satisfied. It is found that Γ_N with N=2,3,4,5 are isomorphic to S_3, A_4, S_4, A_5, respectively <cit.>. We define SL(2,Z_N) by SL(2,Z_N) = . {( [ a b; c d ]) | a,b,c,d ∈ Z_N, ad-bc=1 }, where Z_N denotes integers modulo N. The group Γ_N is isomorphic to PSL(2,Z_N) = SL(2,Z_N)/{ e,-e } for N>2, while Γ_2 is isomorphic to SL(2,Z_2), because e=-e in SL(2,Z_2). Similar to Γ_N, we can define Γ'_N=SL(2,Z)/Γ(N), and it is the double cover of Γ_N. That is, the groups Γ'_N for N=3,4,5 are isomorphic to the double covering groups of A_4,S_4,A_5, i.e. T',S'_4, A'_5, respectively, although Γ_2' is isomorphic to S_3. The upper half-plane of the modulus space τ is mapped onto itself. For example, Γ does not include the basis change. (e_1,e_2) ⟶ (e_1,-e_2), i.e. τ→ -τ. In practice, we find Im(γτ) = |cτ +d|^-2 Im(τ). Thus, the modular group is represented on the upper half-plane of τ. Obviously one can map any value of τ on the upper half-plane into the region, -1/2≤ Re(τ) ≤1/2 by T^n. 
Furthermore, by the modular transformation one can map any value of τ on the upper half-plane into the following region: -1/2≤ Re(τ) ≤1/2, |τ|>1, which is called the fundamental domain. Suppose that Im(γτ) is a maximum value among all of γ for a fixed value of τ. If |γτ|<1, we map it by S, and we find Im(Sγτ)= Im(γτ)/|γτ|^2> Im(γτ). That is inconsistent with the assumption that Im(γτ) is a maximum value among all of γ. That is, we find |γτ| >1. Thus, we can map τ on the upper half-plane into the fundamental region by the modular transformation. The point τ =i is the fixed point under S because S:i→ -1/i=i, where Z_2 symmetry remains. Similarly, the point τ=e^2π i/3 is the fixed point under ST, where Z_3 symmetry remains. Modular forms f_i(τ) of weight k are the holomorphic functions of τ and transform as f_i(τ) ⟶ (cτ +d)^k ρ(γ)_ijf_j( τ) , γ∈Γ̅ , under the modular symmetry, where ρ(γ)_ij is a unitary matrix under Γ_N. Under the modular transformation, chiral superfields ψ_i (i denotes flavors) with weight -k transform as <cit.> ψ_i⟶ (cτ +d)^-k_iρ(γ)_ijψ_j . We study global SUSY models. The superpotential which is built from matter fields and modular forms is assumed to be modular invariant, i.e., to have a vanishing modular weight. For given modular forms, this can be achieved by assigning appropriate weights to the matter superfields. The kinetic terms are derived from a Kähler potential. The Kähler potential of chiral matter fields ψ_i with the modular weight -k is given simply by 1/[i(τ̅- τ)]^k∑_i|ψ_i|^2, where the superfield and its scalar component are denoted by the same letter, and τ̅=τ^* after taking vacuum expectation value (VEV) of τ. The canonical form of the kinetic terms is obtained by changing the normalization of parameters. The general Kähler potential consistent with the modular symmetry possibly contains additional terms <cit.>. However, we consider only the simplest form of the Kähler potential. § MODULAR FLAVOR SYMMETRIC MODELS In this section, we discuss the flavor model of quark and lepton mass matrices. There is a difference between the modular symmetry and the usual flavor symmetry. Coupling constants such as Yukawa couplings also transform non-trivially under the modular symmetry and are written as functions of the modulus called modular forms, which are holomorphic functions of the modulus τ. On the other hand, coupling constants are invariant under the traditional flavor symmetries. The flavor model of lepton mass matrices have been proposed based on the finite modular group Γ_3 ≃ A_4 <cit.>. This approach based on modular invariance opened up a new promising direction in the studies of the flavor physics and correspondingly in flavor model building. §.§ Modular A_4 invariance and neutrino mixing We present a phenomenological discussion of the modular invariant lepton mass matrix by using the finite modular group Γ_3 ≃ A_4, where a simple model was proposed by Feruglio <cit.>. We have shown that it can predict a clear correlation between the neutrino mixing angle θ_23 and the CP violating Dirac phase <cit.>. The mass matrices of neutrinos and charged leptons are essentially given by fixing the expectation value of the modulus τ, which is the only source of the breaking of the modular invariance. Since there are freedoms for the assignment of irreducible representations and modular weights to leptons, suppose that three left-handed lepton doublets are of a triplet of the A_4 group. The three right-handed neutrinos are also of a triplet of A_4. 
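Since the reduction to the fundamental domain and the fixed points τ = i and τ = ω are used repeatedly in what follows, a minimal numerical sketch may be helpful. The snippet below (Python; the test value of τ is an arbitrary choice) implements the T^n shift and the S inversion described above and checks the invariance of the two fixed points. It is an illustration of the standard reduction, not code taken from the original work.

```python
import cmath

# Reduce a modulus tau to the fundamental domain |Re(tau)| <= 1/2, |tau| >= 1,
# using only T: tau -> tau + 1 and S: tau -> -1/tau.
def to_fundamental_domain(tau, max_iter=1000):
    for _ in range(max_iter):
        n = round(tau.real)          # shift Re(tau) into [-1/2, 1/2] with T^(-n)
        tau -= n
        if abs(tau) < 1.0:
            tau = -1.0 / tau         # apply S and repeat
        else:
            return tau
    return tau

print(to_fundamental_domain(2.7 + 0.05j))   # a generic point mapped into the fundamental domain

# fixed points: S fixes tau = i, and ST fixes tau = omega = exp(2*pi*i/3)
omega = cmath.exp(2j * cmath.pi / 3)
S  = lambda t: -1.0 / t
ST = lambda t: -1.0 / (t + 1.0)             # (ST) tau = -1/(tau + 1)
print(abs(S(1j) - 1j))                      # ~0: tau = i is S-invariant
print(abs(ST(omega) - omega))               # ~0: tau = omega is ST-invariant
```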
On the other hand, the Higgs doublets are supposed to be a trivial singlet of A_4 for simplicity (In the next section, we modify this assumption.). We also assign three right-handed charged leptons for three different singlets of A_4, (1,1',1”), respectively. Therefore, there are three independent couplings in the superpotential of the charged lepton sector. Those coupling constants can be adjusted to the observed charged lepton masses. The assignments of representations and modular weights to the MSSM fields as well as right-handed neutrino superfields are presented in Table <ref>. In terms of modular forms of A_4 triplet, Y_3 in Eq.(<ref>) of Appendix <ref>, the modular invariant Yukawa coupling and Majorana mass terms of the leptons are given by the following superpotentials: w_e =α_e H_d(L Y_3)e^c+β_e H_d(L Y_3)μ^c+γ_e H_d(L Y_3)τ^c , w_D =g_i( H_u L ν^c Y_3)_ 1 , w_N =Λ(ν^cν^c Y_3)_ 1 , where the sums of the modular weights vanish. The parameters α_e, β_e, γ_e, g_i(i=1,2), and Λ are constant coefficients. VEVs of the neutral component of H_u and H_d are written as v_u and v_d, respectively. Then, the mass matrix of charged leptons is given by the superpotential Eq. (<ref>) as follows: M_E =v_d [ Y_1 Y_2 Y_3; Y_3 Y_1 Y_2; Y_2 Y_3 Y_1 ][ α_e 0 0; 0 β_e 0; 0 0 γ_e ]_LR . The coefficients α_e, β_e, and γ_e are taken to be real positive by rephasing right-handed charged lepton fields without loss of generality. Since the tensor product of 3⊗ 3 is decomposed into the symmetric triplet and the antisymmetric triplet as seen in Appendix <ref>, the superpotential of the Dirac neutrino mass in Eq. (<ref>) is expressed by introducing additional two parameters g_1 and g_2 as: w_D= v_u[ g_1[ 2ν_eY_1-ν_μ Y_3-ν_τ Y_2; 2ν_τ Y_3-ν_e Y_2-ν_μ Y_1; 2ν_μ Y_2-ν_τ Y_1-ν_eY_3 ]⊕ g_2[ ν_μ Y_3-ν_τ Y_2; ν_e Y_2-ν_μ Y_1; ν_τ Y_1-ν_e Y_3 ]] ⊗[ ν^c_1; ν^c_2; ν^c_3 ] = v_ug_1[ (2ν_eY_1-ν_μ Y_3-ν_τ Y_2)ν^c_1+ (2ν_μ Y_2-ν_τ Y_1-ν_eY_3)ν^c_2 . .+(2ν_τ Y_3-ν_e Y_2-ν_μ Y_1)ν^c_3] +v_ug_2[ (ν_μ Y_3-ν_τ Y_2)ν^c_1+ (ν_τ Y_1-ν_e Y_3)ν^c_2+ (ν_e Y_2-ν_μ Y_1)ν^c_3]. The Dirac neutrino mass matrix is given as M_D=v_u[ 2g_1Y_1 -(g_1-g_2)Y_3 - (g_1+g_2)Y_2; -(g_1+g_2)Y_3 2g_1Y_2 -(g_1-g_2)Y_1; -(g_1-g_2)Y_2 -(g_1+g_2)Y_1 2g_1Y_3 ]_LR. On the other hand, since the Majorana neutrino mass terms are symmetric, the superpotential in Eq. (<ref>) is expressed simply as w_N= Λ[ 2ν^c_1ν^c_1-ν^c_2ν^c_3-ν^c_3ν^c_2; 2ν^c_3ν^c_3-ν^c_1ν^c_2-ν^c_2ν^c_1; 2ν^c_2ν^c_2-ν^c_3ν^c_1-ν^c_1ν^c_3 ]⊗[ Y_1; Y_2; Y_3 ] = Λ[(2ν^c_1ν^c_1-ν^c_2ν^c_3- ν^c_3ν^c_2)Y_1+ (2ν^c_3ν^c_3-ν^c_1ν^c_2-ν^c_2ν^c_1)Y_3. .+(2ν^c_2ν^c_2-ν^c_3ν^c_1-ν^c_1ν^c_3)Y_2]. Then, the modular invariant right-handed Majorana neutrino mass matrix is given as M_N=Λ[ 2Y_1 -Y_3 -Y_2; -Y_3 2Y_2 -Y_1; -Y_2 -Y_1 2Y_3 ]_RR. Finally, the effective neutrino mass matrix is obtained by the type I seesaw as follows: M_ν=-M_D^M_N^-1M_D^ T . When τ is fixed, the modular invariance is broken, and then the lepton mass matrices give the mass eigenvalues and flavor mixing numerically. In order to determine the value of τ, we use the result of NuFIT 5.1 <cit.>. By inputting the data of Δ m_ atm^2 ≡ m_3^2-m_1^2, Δ m_ sol^2 ≡ m_2^2-m_1^2, and three mixing angles θ_23, θ_12, and θ_13 with 3 σ error-bar, we can severely constraint values of the modulus τ and the other parameters, and then we can predict the CP violating Dirac phases δ_CP. 
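As an illustration of how the above superpotentials translate into numbers, the following Python sketch assembles M_E, M_D, and M_N from the leading terms of the weight-2 q-expansions of Appendix <ref>, applies the type-I seesaw, and extracts approximate mixing angles. The modulus τ and the couplings α_e, β_e, γ_e, g_1, g_2 below are hypothetical place-holders rather than the best-fit values, and the overall mass scales v_u, v_d, Λ are set to one; the snippet only shows the pipeline from modular forms to observables, up to ordering and phase conventions.

```python
import numpy as np

tau = 0.48 + 1.1j                      # hypothetical modulus (not a fit result)
q13 = np.exp(2j*np.pi*tau/3.0)         # q^(1/3)
q = q13**3

# A_4 triplet modular forms of weight 2, leading q-expansion terms (Appendix A)
Y1 = 1.0 + 12*q + 36*q**2 + 12*q**3
Y2 = -6*q13*(1.0 + 7*q + 8*q**2)
Y3 = -18*q13**2*(1.0 + 2*q + 5*q**2)

alpha_e, beta_e, gamma_e = 1e-3, 0.1, 1.0   # hypothetical; in practice fixed by m_e, m_mu, m_tau
g1, g2 = 1.0, 0.5 + 0.3j                    # hypothetical Dirac couplings

ME = np.array([[Y1, Y2, Y3],
               [Y3, Y1, Y2],
               [Y2, Y3, Y1]]) @ np.diag([alpha_e, beta_e, gamma_e])

MD = np.array([[ 2*g1*Y1,      -(g1-g2)*Y3,  -(g1+g2)*Y2],
               [-(g1+g2)*Y3,    2*g1*Y2,     -(g1-g2)*Y1],
               [-(g1-g2)*Y2,   -(g1+g2)*Y1,   2*g1*Y3]])

MN = np.array([[2*Y1, -Y3, -Y2],
               [-Y3,  2*Y2, -Y1],
               [-Y2,  -Y1,  2*Y3]])

Mnu = -MD @ np.linalg.inv(MN) @ MD.T        # type-I seesaw, overall scale dropped

# left-handed rotations from M_E M_E^dagger and M_nu^dagger M_nu (ascending mass ordering)
_, UeL = np.linalg.eigh(ME @ ME.conj().T)
_, Unu = np.linalg.eigh(Mnu.conj().T @ Mnu)
U = UeL.conj().T @ Unu                      # PMNS matrix up to ordering/phase conventions

s13sq = abs(U[0, 2])**2
s12sq = abs(U[0, 1])**2 / (1 - s13sq)
s23sq = abs(U[1, 2])**2 / (1 - s13sq)
print("sin^2(theta_13, theta_12, theta_23) ~", s13sq, s12sq, s23sq)
```

In an actual fit one would scan τ and g_2/g_1 over the fundamental domain and keep only points reproducing the NuFIT intervals, as described in the text.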
We consider both the normal hierarchy (NH) of neutrino masses m_1<m_2<m_3 and the inverted hierarchy (IH) of neutrino masses m_3<m_1<m_2, where m_1, m_2, and m_3 denote three light neutrino masses. Since the sum of three neutrino masses ∑ m_i is constrained by the recent cosmological data <cit.>, we exclude the predictions with ∑ m_i≥ 200 meV even if mixing angles are consistent with observed one. The coefficients α_e/γ_e and β_e/γ_e in the charged lepton mass matrix are given only in terms of τ by the input of the observed values m_e/m_τ and m_μ/m_τ. As the input charged lepton masses, we take Yukawa couplings of charged leptons at the GUT scale 2× 10^16 GeV, where tanβ=5 is taken as a bench mark <cit.>: y_e=(1.97± 0.024) × 10^-6, y_μ=(4.16± 0.050) × 10^-4, y_τ=(7.07± 0.073) × 10^-3. Lepton masses are given by m_ℓ=y_ℓ v_H with v_H=174 GeV. Then, we have two free complex parameters, g_2/g_1 and the modulus τ apart from the overall factors in the neutrino sector. The value of τ is scanned in the fundamental domain of SL(2,Z). In Fig.<ref>, we show allowed regions in the Re[τ]-Im[τ] plain for NH of neutrino masses. Those are given in the sum of neutrino masses ∑ m_i=140–150 meV and ∑ m_i=170–200 meV. The 2 σ and 3 σ regions are presented by black and grey points, respectively. If the cosmological observation confirms ∑ m_i< 160 meV, the region of τ is severely restricted in this model. We present the prediction of the Dirac CP violating phase δ_CP versus sin^2θ_23 for NH of neutrino masses in Fig.<ref>. The predicted regions correspond to regions of τ in Fig.<ref>. It is emphasized that the predicted sin^2θ_23 is larger than 0.535, and δ_CP=± (60^∘–180^∘) in the region of ∑ m_i< 160 meV. Since the correlation of sin^2θ_23 and δ_CP is clear, this prediction is testable in the future experiments of neutrinos. On the other hand, predicted sin^2θ_12 and sin^2θ_13 cover observed full region with 3 σ error-bar, and there are no correlations with δ_CP. The prediction of the effective mass ⟨ m_ee⟩, which is the measure of the neutrinoless double beta decay, is around 20–22 meV and 45–60 meV. We present a sample set of parameters and outputs for NH in Table <ref>. We have also scanned the parameter space for the case of IH of neutrino masses. We have found parameter sets which fit the data of Δ m_ sol^2 and Δ m_ atm^2 reproduce the observed three mixing angles sin^2θ_23, sin^2θ_12, and sin^2θ_13. However, the predicted ∑ m_i is around 200 meV, which may be excluded. It is helpful to comment on the effects of the supersymmetry breaking and the radiative corrections because we have discussed our model in the limit of exact supersymmetry. The supersymmetry breaking effect can be neglected if the separation between the supersymmetry breaking scale and the supersymmetry breaking mediator scale is sufficiently large. In our numerical results, the corrections by the renormalization are very small as far as we take the relatively small value of tanβ. §.§ Other modular invariant flavor models In addition of this Γ_3 ≃ A_4 flavor model, other viable models have been also presented for Γ_3 ≃ A_4 <cit.>, Γ_4 ≃ S_4 <cit.> and for Γ_2 ≃ S_3<cit.>. The double covering groups, T' <cit.>, S_4' <cit.> and A_5' <cit.> have also discussed in the modular symmetry. Subsequently these groups have been used for flavor model building <cit.>. 
Furthermore, modular forms for Δ(96) and Δ(384) have been constructed<cit.>, and the level 7 finite modular group Γ_7≃ PSL(2,Z_7) as well as the level 6 group has been examined for the lepton mixing<cit.>. On the other hand, the quark mass matrix has been also studied in the Γ_3 ≃ A_4 flavor symmetries<cit.>. Hence, the unification of quarks and leptons has been applied in the framework of the SU(5) or SO(10) GUT <cit.>. There are also another important physics, the baryon asymmetry in the universe, which is discussed with the modular symmetry. Indeed, the A_4 modular flavor symmetry has been examined in the leptogenesis<cit.>. The modular symmetry keeps a residual symmetry at the fixed points even if the modular symmetry is broken. The Z_3^ST symmetry generated by ST remains at τ = ω, while the symmetry generated by S remains at τ =i, and it corresponds to the Z_4^S symmetry in SL(2,Z) and the Z_2^S symmetry in PSL(2,Z). Furthermore, the Z_N^T symmetry in Γ_N remains in the limit τ→ i ∞. That gives interesting lepton mass matrices for the flavor mixing <cit.>. In the modular invariant flavor model of A_4, the hierarchical structure of lepton and quark flavors has been examined at nearby fixed points <cit.>. It is also remarked that the hierarchical structure of quark and lepton mass matrices could be derived without fine-tuning of parameters at the nearby fixed points of the modular symmetry <cit.>. (See also Refs.<cit.>.) For example, the modular forms Y_2 and Y_3 among the A_4 triplet of weight 2 vanish in the limit τ = i ∞. When Imτ is large, but finite, the A_4 triplet modular forms of weight 2 behave Y_1 ∼ 1, Y_2 ∼ε^1/3, Y_3 ∼ε^2/3 where ε = e^-2 π Imτ. In general, the modular form f(τ) of Γ_N behaves as f(τ) ∼ε^r/N, when Imτ is large, where r denotes Z_N^T charges. These can lead to hierarchical structures in Yukawa matrices. Similarly, certain modular forms vanish at the fixed point τ = ω. Around this fixed, it is convenient to define the following parameter<cit.>: u ≡τ - ω/τ - ω^2 . By use of this parameter, generic modular form can be approximated as f(τ) ∼ u^r, around the fixed point τ = ω, where r depends on Z_3^ST charges. We have a similar behavior around the fixed pint τ =i. These behaviors around the fixed points allow to construct models in which the fermion mass hierarchies follow solely from the properties of the modular forms. For example, one can derive mass matrices such as m_ij∼ u^r_i+r_j and m_ij∼ε^(r_i+r_j)/N depending on Z_N charges of matter fields. Indeed, viable lepton and quark mass matrices are obtained without fine-tuning of parameters <cit.>. Further phenomenology has been developed in many works <cit.>, while theoretical investigations have been also proceeded <cit.>. § TEXTURE ZEROS IN MODULAR SYMMETRY Texture zeros of the fermion mass matrix provide an attractive approach to understand the flavor mixing. Those can be related with some flavor symmetries. Indeed, zeros of the mass matrix are discussed in the modular symmetry of flavors <cit.>. The flavor structure has been investigated in magnetized orbifold models with multi-Higgs modes<cit.>, which are interesting compactification from higher dimensional theory such as superstring theory. They lead to a four-dimensional chiral theory, which has the modular symmetry <cit.>. In these models, quark and lepton masses and their mixing angles were discussed <cit.>. 
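The fixed-point behaviour discussed above can be checked numerically with the truncated q-expansions of Appendix <ref>. The sketch below (Python; the test point τ is a hypothetical value with moderately large Im τ) confirms the ε^{1/3} and ε^{2/3} scaling of Y_2 and Y_3, where the coefficients 6 and 18 are just the leading q-expansion coefficients, and checks the constraint Y_2^2 + 2Y_1Y_3 = 0 up to the series truncation.

```python
import numpy as np

def Y_triplet(tau):
    """Leading q-expansion terms of the weight-2 A_4 triplet (Appendix A)."""
    q13 = np.exp(2j*np.pi*tau/3.0)
    q = q13**3
    Y1 = 1.0 + 12*q + 36*q**2 + 12*q**3
    Y2 = -6*q13*(1.0 + 7*q + 8*q**2)
    Y3 = -18*q13**2*(1.0 + 2*q + 5*q**2)
    return Y1, Y2, Y3

tau = 0.1 + 2.5j                                 # hypothetical point with large Im(tau)
eps = np.exp(-2*np.pi*tau.imag)
Y1, Y2, Y3 = Y_triplet(tau)
print("|Y2/Y1| vs 6*eps^(1/3) :", abs(Y2/Y1), 6*eps**(1/3.0))
print("|Y3/Y1| vs 18*eps^(2/3):", abs(Y3/Y1), 18*eps**(2/3.0))
print("|Y2^2 + 2*Y1*Y3|       :", abs(Y2**2 + 2*Y1*Y3))   # small, limited only by truncation
```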
These magnetized orbifold models lead to multi-Higgs modes, while generic string compactification also leads more than one candidates for Higgs fields. In this section, we show that texture zeros are generally realized at the fixed points τ=ω in the modular flavor models with multi-Higgs <cit.>. §.§ Quark mass matrices with mluti-Higgs §.§.§ Three pairs of Higgs fields (1,1”) We present a simple model of quark mass matrices in the level N=3 modular flavor symmetry A_4 with the multi-Higgs at τ=ω, which is referred to as Model 1. We assign the A_4 representation and the weights k_I for the relevant chiral superfields as * quark doublet Q=(Q^1,Q^2,Q^3): A_4 triplet with weight 2 * up-type quark singlets (u,c,t): A_4 singlets (1,1',1”) with weight 0 * down-type quark singlets (d,s,b): A_4 singlets (1,1',1”) with weight 0 * up and down sector Higgs fields H_u,d^i=(H_u,d^1,H_u,d^2): A_4 singlets (1,1”) with weight 0 which are summarized in Table <ref>. Then, the superpotential terms of the up-type quark masses and down-type quark masses are written by W_u = [α^1_u ( Y^(2)Q)_1 u_1 + β^1_u ( Y^(2)Q)_1”c_1' + γ^1_u ( Y^(2)Q)_1't_1”](H_u^1)_1 + [α^2_u ( Y^(2)Q)_1' u_1 + β^2_u ( Y^(2)Q)_1c_1' + γ^2_u ( Y^(2)Q)_1”t_1”](H_u^2)_1”, W_d = [α^1_d ( Y^(2)Q)_1 d_1 + β^1_d ( Y^(2)Q)_1”s_1' + γ^1_d ( Y^(2)Q)_1'b_1”](H_d^1)_1 + [α^2_d ( Y^(2)Q)_1' d_1 + β^2_d ( Y^(2)Q)_1s_1' + γ^2_d ( Y^(2)Q)_1”b_1”](H_d^2)_1”, where the decompositions of the tensor products are ( Y^(2)Q)_1 = ( [ Y_1; Y_2; Y_3; ]_3 ⊗[ Q^1; Q^2; Q^3; ]_3 )_1 = Y_1Q^1+Y_2Q^3+Y_3Q^2, ( Y^(2)Q)_1” = ( [ Y_1; Y_2; Y_3; ]_3 ⊗[ Q^1; Q^2; Q^3; ]_3 )_1” = Y_3Q^3+Y_1Q^2+Y_2Q^1, ( Y^(2)Q)_1' = ( [ Y_1; Y_2; Y_3; ]_3 ⊗[ Q^1; Q^2; Q^3; ]_3 )_1' = Y_2Q^2+Y_1Q^3+Y_3Q^1. The superpotential terms are rewritten as: W_u = [α_u^1(Y_1Q^1+Y_2Q^3+Y_3Q^2)u+β^1_u(Y_3Q^3+Y_1Q^2+Y_2Q^1)c +γ^1_u(Y_2Q^2+Y_1Q^3+Y_3Q^1)t]H_u^1 +[α_u^2(Y_2Q^2+Y_1Q^3+Y_3Q^1)u+β^2_u(Y_1Q^1+Y_2Q^3+Y_3Q^2)c +γ^2_u(Y_3Q^3+Y_1Q^2+Y_2Q^1)t]H_u^2 = [ Q^1 Q^2 Q^3; ]( [ α_u^1Y_1 β_u^1Y_2 γ_u^1Y_3; α_u^1Y_3 β_u^1Y_1 γ_u^1Y_2; α_u^1Y_2 β_u^1Y_3 γ_u^1Y_1; ]H_u^1 + [ α_u^2Y_3 β_u^2Y_1 γ_u^2Y_2; α_u^2Y_2 β_u^2Y_3 γ_u^2Y_1; α_u^2Y_1 β_u^2Y_2 γ_u^2Y_3; ]H_u^2 ) [ u; c; t; ], W_d = [α_d^1(Y_1Q^1+Y_2Q^3+Y_3Q^2)d+β^1_d(Y_3Q^3+Y_1Q^2+Y_2Q^1)s +γ^1_d(Y_2Q^2+Y_1Q^3+Y_3Q^1)b]H_d^1 +[α_d^2(Y_2Q^2+Y_1Q^3+Y_3Q^1)d+β^2_d(Y_1Q^1+Y_2Q^3+Y_3Q^2)s +γ^2_d(Y_3Q^3+Y_1Q^2+Y_2Q^1)b]H_d^2 = [ Q^1 Q^2 Q^3; ]( [ α_d^1Y_1 β_d^1Y_2 γ_d^1Y_3; α_d^1Y_3 β_d^1Y_1 γ_d^1Y_2; α_d^1Y_2 β_d^1Y_3 γ_d^1Y_1; ]H_d^1 + [ α_d^2Y_3 β_d^2Y_1 γ_d^2Y_2; α_d^2Y_2 β_d^2Y_3 γ_d^2Y_1; α_d^2Y_1 β_d^2Y_2 γ_d^2Y_3; ]H_d^2 ) [ d; s; b; ]. Finally, the quark mass matrices are given as: M_u = [ α_u^1Y_1 β_u^1Y_2 γ_u^1Y_3; α_u^1Y_3 β_u^1Y_1 γ_u^1Y_2; α_u^1Y_2 β_u^1Y_3 γ_u^1Y_1; ]⟨ H_u^1⟩ + [ α_u^2Y_3 β_u^2Y_1 γ_u^2Y_2; α_u^2Y_2 β_u^2Y_3 γ_u^2Y_1; α_u^2Y_1 β_u^2Y_2 γ_u^2Y_3; ]⟨ H_u^2⟩, M_d = [ α_d^1Y_1 β_d^1Y_2 γ_d^1Y_3; α_d^1Y_3 β_d^1Y_1 γ_d^1Y_2; α_d^1Y_2 β_d^1Y_3 γ_d^1Y_1; ]⟨ H_u^1⟩ + [ α_d^2Y_3 β_d^2Y_1 γ_d^2Y_2; α_d^2Y_2 β_d^2Y_3 γ_d^2Y_1; α_d^2Y_1 β_d^2Y_2 γ_d^2Y_3; ]⟨ H_u^2⟩, where the chiralities of the mass matrix, L and R are defined as [M_u(d)]_LR. At the fixed point τ =ω, modular forms are given as [ Y_1(ω); Y_2(ω); Y_3(ω) ] =Y_1(ω) [ 1; ω; -1/2ω^2 ] . §.§.§ ST-eigenstate base at τ=ω Let us discuss the mass matrices at τ=ω in the ST-eigenstates. 
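The quoted fixed-point value of the triplet can be reproduced numerically from the truncated q-expansions of Appendix <ref>; a brief sketch (Python, with the truncation as the only approximation):

```python
import numpy as np

# Evaluate the truncated weight-2 q-expansions at tau = omega and compare with
# Y(omega) = Y_1(omega) * (1, omega, -omega^2/2).
omega = np.exp(2j*np.pi/3)
q13 = np.exp(2j*np.pi*omega/3.0)
q = q13**3
Y1 = 1.0 + 12*q + 36*q**2 + 12*q**3
Y2 = -6*q13*(1.0 + 7*q + 8*q**2)
Y3 = -18*q13**2*(1.0 + 2*q + 5*q**2)
print(Y2/Y1, omega)          # agree up to the series truncation (1e-5 level or better)
print(Y3/Y1, -0.5*omega**2)  # agree up to the series truncation (1e-5 level or better)
```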
The ST-transformation of the A_4 triplet of the left-handed quarks Q is [ Q^1; Q^2; Q^3; ] (-ω-1)^-2ρ(ST) [ Q^1; Q^2; Q^3; ] = ω^-41/3[ -1 2ω 2ω^2; 2 -ω 2ω^2; 2 2ω -ω^2; ][ Q^1; Q^2; Q^3; ], where representations of S and T are given explicitly for the triplet in Appendix A. The ST-eigenstate Q' is obtained by using the unitary matrix U_L as follows: U_L = 1/3[ 2 -ω 2ω^2; -ω 2ω^2 2; 2ω^2 2 -ω; ], U_L^†ω^-4ρ(ST) U_L = [ 1 0 0; 0 ω^2 0; 0 0 ω; ]. The ST-eigenstates are Q'≡ U_L^† Q. On the other hand, right-handed quarks, which are singlets (1,1',1”), are the eigenstates of ST; that is, the ST-transformation is [ u; c; t; ][ 1 0 0; 0 ω^2 0; 0 0 ω; ][ u; c; t; ], [ d; s; b; ][ 1 0 0; 0 ω^2 0; 0 0 ω; ][ d; s; b; ]. Higgs fields are also the ST-eigenstates since they are singlets (1,1”). Therefore, ST-transformation of them is [ H_u,d^1; H_u,d^2; ][ 1 0; 0 ω; ][ H_u,d^1; H_u,d^2; ]. In the ST-eigenstates, the quark mass matrices are transformed by Eq. (<ref>). It is given as: U_L^T M_u = c [ α_u^1 0 0; 0 0 γ_u^1; 0 β_u^1 0; ]⟨ H_u^1⟩ + c [ 0 β_u^2 0; α_u^2 0 0; 0 0 γ_u^2; ]⟨ H_u^2⟩, U_L^T M_d = c [ α_d^1 0 0; 0 0 γ_d^1; 0 β_d^1 0; ]⟨ H_d^1⟩ + c [ 0 β_d^2 0; α_d^2 0 0; 0 0 γ_d^2; ]⟨ H_d^2⟩, where c=√(|Y_1|^2+|Y_2|^2+|Y_3|^2). Thus, some zeros appear for quark mass matrices. Now, we impose α^1_u,d=0, we obtain the nearest neighbor interaction (NNI) form,[The NNI form of three families has vanishing entries of (1,1), (2,2), (1,3), (3,1), but is not necessary to be Hermitian.] which is considered as a general form of both up- and down-types quark mass matrices because this form is achieved by the transformation that leaves the left- handed gauge interaction invariant <cit.>. The NNI form is a desirable base to derive the Fritzsch-type quark mass matrix while the NNI form is a general form of quark mass matrices. Therefore, the quark masses and the CKM matrix are reproduced taking relevant values of parameters. It is noticed that the flavor mixing is not realized in the case of one Higgs doublets for up- and down-type quark sectors. Thus, the NNI forms at τ=ω are simply obtained unless the VEVs of two-Higgs vanish. The CP symmetry is not violated at τ = ω in modular flavor symmetric models with a pair of Higgs fields because of the T symmetry <cit.>. However, the models with multi-Higgs fields can break the CP symmetry at the fixed point τ = ω even if all of the Higgs VEVs are real <cit.>. Thus, the CP phase appears in our models, in general. Our models are interesting from the viewpoint of the CP violation, too. The non-vanishing VEVs of both Higgs fields H^1_u,d and H^2_u,d are important to realize the NNI forms. We expect the scenario that these Higgs fields have a μ-matrix to mix them, W_μ = μ_ijH^i_u H^j_d . Then, a light linear combination develops its VEV, which includes H^1_u,d and H^2_u,d. However, the above assignment of A_4 representations (1, 1”) for the Higgs fields allows the μ-term of only μ_11, and the others vanish. That is, the mixing does not occur. When we assume the singlet S with the A_4 1' representation develops its VEV, the (1,2) and (2,1) elements appear as μ_12=μ_21=λ⟨ S ⟩ like the next-to-minimal supersymmetric standard model. It is noted that the alternative assignment of weights for the Higgs and the left-handed quarks also gives desirable μ term <cit.>. §.§.§ Three pairs of Higgs fields (1,1”,1') We also study three pairs of Higgs fields with the A_4 (1,1”,1') representations. 
We add another pair of Higgs fields H^3_u,d with the A_4 1' representation of the modular weight 0. Then, we easily obtain the mass matrices as follows: U_L^T M_u = c [ α_u^1 0 0; 0 0 γ_u^1; 0 β_u^1 0; ]⟨ H_u^1⟩ + c [ 0 β_u^2 0; α_u^2 0 0; 0 0 γ_u^2; ]⟨ H_u^2⟩ + c [ 0 0 γ_u^3; 0 β_u^3 0; α_u^3 0 0; ]⟨ H_u^3⟩, U_L^T M_d = c [ α_d^1 0 0; 0 0 γ_d^1; 0 β_d^1 0; ]⟨ H_d^1⟩ + c [ 0 β_d^2 0; α_d^2 0 0; 0 0 γ_d^2; ]⟨ H_d^2⟩ + c [ 0 0 γ_d^3; 0 β_d^3 0; α_d^3 0 0; ]⟨ H_d^3⟩. This model can lead to a quite generic mass matrix. For example, by setting some of α^i_u,d, β^i_u,d, γ^i_u,d to be zero, we can drive some of texture zero structures including the NNI form. In addition, we can assume β^i_u,d=γ^i_u,d or β^i_u,d=(γ^i_u,d)^* to reduce the number of free parameters and realize a certain form of mass matrices. Thus, the different assignment of the A_4 singlets (1,1”,1') for Higgs leads to different texture zeros. §.§ Extensions of models In this section, the quark mass matrices are discussed in the specific modular symmetry of N=3 in order to show the derivation of NNI forms clearly. It is noted that one can obtain flavor models leading to the NNI forms in the S_4 and A_5 modular flavor symmetries. Such texture zero structure originates from the ST charge of the residual symmetry Z_3 of SL(2,Z). The NNI form can be realized at the fixed point τ = ω in A_4 and S_4 modular flavor models with two pairs of Higgs fields, when we assign properly modular weights to Yukawa couplings and A_4 and S_4 representations to three generations of quarks. It is found that four pairs of Higgs fields to realize the NNI form in A_5 modular flavor models. Thus, the modular flavor models with multi-Higgs fields at the fixed point τ = ω leads to successful quark mass matrices <cit.>. Texture zeros have been studied phenomenologically in the lepton sector <cit.>. We can extend our formula of the quark mass matrices to the lepton sector. Extension to the charged lepton mass matrix is straightforward, and we obtain the same results. On the other hand, there is some freedoms for the neutrino mass matrix, depending on the mechanism of producing tiny masses, for example, seesaw mechanism. § CP SYMMETRY In this section, we study CP violation in modular symmetric flavor models. The 4D CP symmetry can be embedded into a proper Lorentz transformation in a higher dimensional theory. Here, we concentrate on 6D theory, that is, extra two dimensions in addition to our 4D space-time. T^2 is one of examples of two-dimensional compact space. We denote the coordinate on extra dimension, e.g. T^2, by z. Then, we consider the following transformation z → -z^*, at the same time as the 4D CP transformation. Such a combination is included in a 6D proper Lorentz symmetry. Because of the above coordinate transformation, the modulus τ on T^2 transforms τ→ -τ^*, under the CP symmetry <cit.>. Note that the upper half plane of τ maps onto itself by this transformation. Another transformation such as z → z^* can also correspond to a 6D proper Lorentz symmetry, but such a transformation maps the upper half plane onto the lower half plane. Thus, we do not use such a transformation. Obviously, we find that the line Reτ = 0 is CP invariant. Other values are also CP invariant up to the modular symmetry. For example, τ=ω=e^2π i/3 transforms τ = ω = -1 + √(3) i /2→ -τ^*=1 + √(3) i /2 , under the CP transformation Eq. (<ref>). However, these values are related with each other by the T-transformation. Thus, the fixed point τ=ω is also CP invariant point. 
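The ST-eigenstate structure underlying these texture zeros can also be verified numerically. The sketch below (Python) uses the triplet representation matrices of Appendix <ref> and the fixed-point value Y ∝ (1, ω, −ω^2/2); the couplings α_u^1, β_u^1, γ_u^1 and the overall normalization are hypothetical. It checks that U_L diagonalizes ω^{-4}ρ(ST) and that U_L^T applied to the first mass-matrix piece leaves nonzero entries only in the (1,1), (2,3), and (3,2) positions, as in the first matrix of the ST-basis expression above.

```python
import numpy as np

w = np.exp(2j*np.pi/3)                      # omega
S = np.array([[-1, 2, 2], [2, -1, 2], [2, 2, -1]]) / 3.0
T = np.diag([1, w, w**2])
UL = np.array([[2,      -w,     2*w**2],
               [-w,      2*w**2, 2],
               [2*w**2,  2,     -w]]) / 3.0

# U_L^dagger (omega^{-4} rho(ST)) U_L should equal diag(1, omega^2, omega)
D = UL.conj().T @ (w**2 * (S @ T)) @ UL     # omega^{-4} = omega^2 since omega^3 = 1
print(np.round(D, 10))

# weight-2 triplet at tau = omega, up to the overall factor Y_1(omega)
Y1, Y2, Y3 = 1.0, w, -0.5*w**2
a1, b1, c1 = 0.7, 1.3, 0.9                  # hypothetical alpha_u^1, beta_u^1, gamma_u^1
M1 = np.array([[a1*Y1, b1*Y2, c1*Y3],
               [a1*Y3, b1*Y1, c1*Y2],
               [a1*Y2, b1*Y3, c1*Y1]])
# only the (1,1), (2,3), (3,2) entries (matrix notation) survive in the ST basis
print(np.round(UL.T @ M1, 10))
```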
Similarly, the line Reτ = ± 1/2 is CP invariant. The typical Kähler potential of the modulus field τ is written by K=-ln [2 Imτ]. The Kähler potential is invariant under the transformation, τ→ -τ^*. In addition, the superpotential |Ŵ|^2 is invariant if it transforms W(τ) → W(-τ^*)=e^iχW(τ), under the CP symmetry with τ→ -τ^* including the CP transformation of chiral matter fields. We study the CP violation through the modulus stabilization. One of the moduli stabilization scenarios is due to the three-form fluxes <cit.>. Indeed, the moduli stabilization due to the three-from fluxes was studied in modular flavor models in Ref. <cit.>. Its result shows that the fixed point τ = ω is favored statistically with highest probability. The above discussions implies that the CP violation does not occur at this fixed point. In Ref. <cit.> the moduli stabilization was studied by one-loop induced Fayet-Illiopoulos terms, and the modulus τ is stabilized at the same fixed point[See also for recent studies on moduli stabilization in modular flavor models Refs. <cit.>.]. In addition, we study another mechanism of the moduli stabilization by assuming non-perturbative effects. We start with the superpotential W=m(τ)Q Q̅ with the A_4 modular flavor symmetry. Then, we assume the condensation ⟨ Q Q̅⟩≠ 0. The superpotential is A_4 trivial singlet. We assume the following superpotential: W= Λ_dY^(4)_ 1(τ), where Λ_d corresponds to ⟨ Q Q̅⟩ and must have a proper modular weight. The minimum of the supergravity scalar potential with the above superpotential is obtained as τ_min=1.09 i + p/2, where p is odd integer<cit.>. The above discussion implies that the CP violation does not occur at this point. On the other hand, we assume the following superpotential: W= Λ_d(Y^(4)_ 1(τ))^-1, where Λ_d must have a proper modular weight. The minimum of the supergravity scalar potential with the above superpotential is obtained as τ_min=1.09 i + n, where n is integer<cit.>. Obviously, this is CP invariant point. Similarly, we can study other modular flavor models such as S_3 and S_4 modular symmetries, and the potential minimum corresponds to either Re τ = 0 or 1/2 (mod 1) <cit.>. In both cases, CP violation does not occur. We examine explicitly mass matrices at Re τ = 0 and 1/2 in order to understand that the CP symmetry is not violated at these lines. We study the flavor model with the Γ_N modular flavor symmetry. We use the basis that ρ(T) is diagonal and satisfies ρ(T)^N=1. Then, the chiral fields Φ_i such as left-handed quarks Q_i, up-sector and down-sector right-handed quarks u_i, d_i, and the Higgs field H_u,d as well as lepton fields transform Φ_i → e^2π i P[Φ_i]/N Φ_i, under the T-transformation, where P[Φ_i] is integer. That is the Z_N^(T) rotation. Here, we assume one pair of Higgs fields H^u and H^d, which are trivial singlets under the Γ_N modular symmetry. Then, the quark Yukawa terms in the superpotential can be written by Ŵ = Y^(u)_ij(τ) Q_iu_jH^u + Y^(d)_ijQ_i d_j H^d. We replace the Higgs fields by their VEVs so as to obtain the mass terms, Ŵ=M^u_ij(τ) Q_i u_j + M^d_ij(τ) Q_i d_j . Note that the Yukawa couplings are modular forms. Then, the above mass matrices can also be written by modular forms after replacing the Higgs fields by their VEVs. Since these mass terms must be invariant under the T-transformation, the mass matrix must transform as M^u_ij(τ) → e^-2π i(P[Q_i]+P[u_j])/NM^u_ij, M^d_ij(τ) → e^-2π i(P[Q_i]+P[d_j])/NM^d_ij. 
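Before the q-expansion argument is completed below, a brief numerical aside may be useful (a sketch using only the leading q-expansion terms of Appendix <ref>; the exact statement of course holds for the full modular forms): since the q-expansion coefficients are real, Y_i(−τ^*) = Y_i(τ)^*, so the triplet is real on the line Re τ = 0, while on Re τ = 1/2 its phases are fixed powers of e^{iπ/3} determined by the T-charges and can be removed by a field rephasing, as shown next.

```python
import numpy as np

def Y(tau):
    """Leading q-expansion terms of the weight-2 A_4 triplet (Appendix A)."""
    q13 = np.exp(2j*np.pi*tau/3.0)
    q = q13**3
    return np.array([1.0 + 12*q + 36*q**2 + 12*q**3,
                     -6*q13*(1.0 + 7*q + 8*q**2),
                     -18*q13**2*(1.0 + 2*q + 5*q**2)])

tau = 0.23 + 1.4j                                             # arbitrary test point
print(np.max(np.abs(Y(-tau.conjugate()) - Y(tau).conj())))    # ~0: CP covariance Y(-tau*) = Y(tau)*

print(np.max(np.abs(Y(0.0 + 1.2j).imag)))                     # ~0: triplet is real on Re(tau) = 0

Yh = Y(0.5 + 1.2j)
phases = np.exp(-1j*np.pi*np.array([0, 1, 2])/3.0)            # remove the T-charge phases r = 0, 1, 2
print(np.max(np.abs((Yh*phases).imag)))                       # ~0: phases removable on Re(tau) = 1/2
```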
That implies that the mass matrices can be written by M^u_ij(τ)=m^u_ij(q)q^-(P[Q_i]+P[u_j])/N= m^u_ij(q)e^-2π i(P[Q_i]+P[u_j])τ/N, M^d_ij(τ)=m^d_ij(q)q^-(P[Q_i]+P[d_j])/N= m^d_ij(q)e^-2π i(P[Q_i]+P[d_j])τ/N, in terms of q=e^2π i τ, where m^u,d_ij(q) include series of integer powers of q as m^u,d_ij(q)=a_0^u,d+a_1^u,dq+a_2^u,dq^2+⋯. It is obvious that all of the entries in M^u,d_ij(τ) are real when Re τ=0. CP is not violated. On the other hand, it seems that the mass matrix has phases for other values of Re τ. For example, when Re τ = 1/2, the phase structure of the mass matrix can be written by M^u_ij=m̃^u_ije^-π i(P[Q_i]+P[u_j])/N, M^d_ij=m̃^d_ije^-π i(P[Q_i]+P[d_j])/N, where m̃^u_ij=m^u_ije^-2π (P[Q_i]+P[u_j]) Imτ /N, m̃^d_ij=m^d_ije^-2π (P[Q_i]+P[d_j]) Imτ /N, and they are real. However, such phases can be canceled by rephasing Φ_i →Φ e^π i P[Φ_i]/NΦ_i, and there is no physical CP phase for Re τ =1/2. That is the Z_2N^(T) rotation. Note that m_ij(q) can have a physical CP phase, which can not be canceled, except Re τ =0, 1/2. Similarly, we can discuss the lepton sector, and the CP phase does not appear when Re τ =0, 1/2. The fixed point τ = ω is statistically favored with highest probability, and phenomenological interesting because there remains Z_3 symmetry. However, the CP violation does not occur in modular flavor models with one pair of Higgs fields. That suggests extension to models with multi-Higgs fields. Indeed many string compactifications lead more than one candidates of Higgs fields, which have the same quantum numbers of SU(3)× SU(2) × U(1)_Y and can couple with quarks and leptons. We extend the above discussion to modular flavor models with multi-Higgs fields H^u,d_ℓ. The quark Yukawa terms in the superpotential can be written by Ŵ = Y^(u)_ijℓ(τ) Q_iu_jH^u_ℓ + Y^(d)_ijℓQ_i d_j H^d_ℓ. Since these terms are invariant under the T-transformation, Yukawa couplings must transform as Y^(u)_ijℓ(τ) → e^2 π i P[Y^u_ijℓ]Y^(u)_ijℓ(τ), Y^(d)_ijℓ(τ) → e^2 π i P[Y^d_ijℓ]Y^(d)_ijℓ(τ), under the T-transformation, where P[Y^u_ijℓ]=-(P[Q_i]+P[u_j]+P[H^u_ℓ]), P[Y^d_ijℓ]=-(P[Q_i]+P[d_j]+P[H^d_ℓ]). That implies that the modular forms of Yukawa couplings can be written by Y^(u)_ijℓ(τ) =a_0q^P[Y^u_(i j ℓ)]/N + a_1 qq^P[Y^u_(i j ℓ)]/N +a_2 q^2q^P[Y^u_(i j ℓ)]/N + ⋯ =Ỹ^(u)_ijℓ(q) q^P[Y^u_(i j ℓ)]/N, Y^(d)_ijℓ(τ) =b_0q^P[Y^d_(i j ℓ)]/N + b_1 qq^P[Y^d_(i j ℓ)]/N +b_2 q^2q^P[Y^d_(i j ℓ)]/N + ⋯ =Ỹ^(d)_ijℓ(q) q^P[Y^u_(i j ℓ)]/N, where the functions Ỹ^(u)_ijℓ(q) and Ỹ^(d)_ijℓ(q) are series of positive integer powers of q. We denote Higgs VEVs by v^u_ℓ =|v^u_ℓ|e^2π iP[v^u_ℓ]/N=⟨ H^u_ℓ⟩, v^d_ℓ =|v^d_ℓ|e^2π iP[v^d_ℓ]/N=⟨ H^d_ℓ⟩, where P[v^u_ℓ] or P[v^d_ℓ] is not integer for a generic VEV. Then, the mass matrices can be written by M^u,d_ij=∑_ℓ Y^(u,d)_ij ℓ v^u,d_ℓ. When Re τ=0, all of the Yukawa coupligs Y^(u,d)_ij ℓ are real. In this case, the non-trivial CP phase appears only if the VEVs v^u,d_ℓ have phases different relatively from each other. When Re τ=-1/2, e.g. τ = ω, the Yukawa coupligs Y^(u,d)_ij ℓ have different phases. Thus, the non-trivial CP phase appears for generic values of VEVs. However, if they satisfy -1/2(P[Y^u_(ijℓ)] + P[Q_i]+P[u_j] ) + P[v^u_ℓ]= constant independent of ℓ, -1/2(P[Y^d_(ijℓ)] + P[Q_i]+P[d_j] ) + P[v^d_ℓ]= constant independent of ℓ, for all of allowed Yukawa couplings with i,j fixed, one can cancel phases of mass matrix elements up to an overall phase by Z_2N^(T) rotation. We can compare this condition with the relations Eq. 
(<ref>), where the factor -1/2 originates from Re τ=-1/2. Thus, the T-symmetry determines the VEV direction v^u,d_ℓ, where the CP symmetry remains. CP violation was also studied in an explicit magnetized orbifold model <cit.>. § SMEFT So far, we have studied renormalizable coupligs such as Yukawa couplings. Since the SM is effective theory of underlying theory, it may include higher dimensional operators and they may lead to flavor and CP violating processes. Here, we study higher dimensional operators. The SM with renormalizable couplings has the U(3)^5 flavor symmetry in the limit that all of the Yukawa couplings vanish, where the U(3)^5 symmetry is explicitly written by U(3)_Q× U(3)_u × U(3)_d × U(3)_L × U(3)_e and they correspond to the symmetries of three generations of left-handed quarks, up-sector and down-sector right-handed quarks, left-handed leptons, and right-handed charged leptons. Even for non-vanishing Yukawa couplings, the SM can have the U(3)^5 flavor symmetry by assuming that Yukawa couplings are spurion fields, which transform non-trivially under the U(3)^5 flavor symmetry. That is, the up-sector and down-sector Yukawa couplings transform as ( 3, 3̅, 1, 1, 1) and ( 3, 1,3̅, 1, 1) under the symmetry U(3)_Q× U(3)_u × U(3)_d × U(3)_L × U(3)_e while the lepton Yukawa couplings transform as ( 1, 1, 1, 3, 3̅). We require that higher dimensional operators also satisfy the U(3)^5 flavor symmetry. Then coefficients of higher dimensional operators can be written in terms of Yukawa couplings, which are spurion fields. That is the MFV scenario <cit.>. We can compute n-point couplings within the framework of superstring theory. For example, n-point couplings were calculated in intersecting D-brane models <cit.>, magnetized D-brane models<cit.>, and heterotic orbifold models <cit.>. These computations are carried out by two-dimensional conformal field theory (CFT) and integral of products of wave functions in compact space. Massless modes in string theory correspond to vertex operators V_i(z) in CFT, where w denotes the complex coordinate on the world-sheet. These vertex operators satisfy the operator product expansion, V_i (w) V_j (0) ∼∑_k y_ijk/w^h_k -h_i -h_jV_k(0), where h_i denote the conformal dimensions of vertex operators V_i. The coefficients y_ijk provide us with 3-point couplings among massless modes corresponding vertex operators, V_i, V_j, V_k in low-energy effective field theory. Furthermore, 4-point couplings y_ijkℓ can be written by products of 3-point couplings, y_ijkℓ=∑_m y_ijmy_mkℓ. Similarly, generic n-point couplings can be written by products of 3-point couplings. That implies that when the 3-point couplings y_ijk have the modular symmetry, 4-point couplings and higher order couplings are also controlled by the modular symmetry. Indeed, these couplings can be written by modular forms, which depend on the moduli fields. In this sense, these couplings are spurion fields. Thus, this theory can provide us with the stringy origin of minimal flavor violation, where the flavor symmetry is the modular symmetry instead of U(3)^5. Similarly, various classes of 4D low-energy effective field theory derived string theory satisfy the requirement of minimal flavor violation hypothesis at the compactification scale. However, several physical stages may occur between the compactification scale and low energy scale, (i) some modes gain masses and (ii) some scalar fields develop their VEVs. At the stage (i), we just integrate out massive modes. 
Effective field theory after such an integration also satisfies the above structure. At the stage (ii), new operators appear. For example, suppose that we have the coupling, y_ijkℓϕ_i ϕ_j ϕ_k ϕ_ℓ and ϕ_i develops its VEV. Then, the new operator y'_jkℓϕ_j ϕ_k ϕ_ℓ appears, where y'_jkℓ=y_ijkℓ⟨ϕ_i ⟩. Both y'_jkℓ and y_ijkℓ are spurion fields, and the transformation behavior of y'_jkℓ is the same as y_ijkℓ⟨ϕ_i ⟩. Thus, the minimal flavor violation structure with the modular symmetry is not violated. One of non-trivial symmetry breaking is the supersymmetry breaking. The supersymmetry breaking can occur by non-vanishing F-terms. If all of the F-terms are trivial singlets under the modular symmetry, obviously all of the soft terms are modular invariant. The supersymmetry breaking due to modulus F-term is non-trivial from the viewpoint of the modular symmetry. Detailed study was done in Ref. <cit.>. It was found that all of the soft terms except the B-term are modular invariant. If the generation mechanism of the μ-term is modular invariant, the B-term is also modular invariant. If the above scenario holds true, the low-energy effective field theory around the weak scale has the minimal flavor violation structure with the modular symmetry. That is, the SMEFT can have the modular symmetry. For example, there appear the four-fermi operators and dipole operators, y_ijkℓ/Λ^2 (Ψ̅_i ΓΨ_j)(Ψ̅_k ΓΨ_ℓ), c_ijv/Λ^2 (Ψ̅_i σ^μνΨ_j)F_μν, where Γ denotes a generic combination of gamma matrices. These operators must be modular invariant and their coefficients y_ijkℓ and c_ij are modular forms. Furthermore, the coefficients y_ijkℓ can be written by productions of 3-point couplings as Eq. (<ref>), where the mode m may correspond to known modes like the Higgs field or unknown modes. The cut-off scale Λ depends on the scenario with the stages (i) and (ii), that is, mass scales and symmetry breaking scales including the supersymmetry breaking scale. Phenomenological implications of modular symmetric SMEFT were studied, e.g. flavor violations and lepton (g-2) processes <cit.>. § CONCLUSION We have reviewed on modular flavor symmetric models from several viewpoints, realization of fermion mass matrices, the texture structure, the CP violation and higher dimensional operators in SMEFT. Indeed many works have been done recently, in particular in realization of quark and lepton masses and mixing angles as well as the CP violation. In addition, the modular flavor symmetry have been used for dark matter, inflation models, and leptogenesis in bottom-up approach. The modular flavor symmetry may originate from compactification of higher dimensional theory such as superstring theory. Also the modular flavor symmetry have been studied in top-down approach. Thus, the modular flavor symmetry can become a bridge to connect the low-energy physics and high-energy physic such as superstring theory and would provide us with a missing piece to solve the flavor puzzle in particle physics. § ACKNOWLEDGEMENT The authors would like to thank Y. Abe, T. Higaki, K. Ishiguro, J. Kawamura, S. Kikuchi, S. Nagamoto, K. Nasu, T. Nomura, H. Okada, N. Omoto, Y. Orisaka, H. Otsuka, S.T. Petcov, Y. Shimizu, T. Shimomura, S. Takada, K. Takagi, S. Tamba, K. Tanaka, T.H. Tatsuishi, H. Uchida, S. Uemura, K. Yamamoto, T. Yoshida for useful discussions. 
§ APPENDIX § MODULAR FORMS OF A_4 The modular forms of weight 2 transforming as a triplet of A_4 can be written in terms of η(τ) and its derivative <cit.>: Y_1 = i/2π( η'(τ/3)/η(τ/3) +η'((τ +1)/3)/η((τ+1)/3) +η'((τ +2)/3)/η((τ+2)/3) - 27η'(3τ)/η(3τ)), Y_2 = -i/π( η'(τ/3)/η(τ/3) +ω^2η'((τ +1)/3)/η((τ+1)/3) +ωη'((τ +2)/3)/η((τ+2)/3)) , Y_3 = -i/π( η'(τ/3)/η(τ/3) +ωη'((τ +1)/3)/η((τ+1)/3) +ω^2 η'((τ +2)/3)/η((τ+2)/3)) , which satisfy also the constraint <cit.>: Y_2^2+2Y_1Y_3=0 . They have the following q-expansions: Y^(2)_3 =[ Y_1; Y_2; Y_3 ]= [ 1+12q+36q^2+12q^3+…; -6q^1/3(1+7q+8q^2+…); -18q^2/3(1+2q+5q^2+…) ] , where q=exp (2π i τ) . The five modular forms of weight 4 are given as: Y^( 4)_1=Y_1^2+2 Y_2 Y_3 , Y^( 4)_1'=Y_3^2+2 Y_1 Y_2 , Y^( 4)_1”=Y_2^2+2 Y_1 Y_3=0 , Y^( 4)_3= [ Y_1^(4); Y_2^(4); Y_3^(4) ] = [ Y_1^2-Y_2 Y_3; Y_3^2 -Y_1 Y_2; Y_2^2-Y_1 Y_3 ] , where Y^( 4)_1” vanishes due to the constraint of Eq. (<ref>). For weigh 6, there are seven modular forms as: Y^( 6)_1=Y_1^3+ Y_2^3+Y_3^3 -3Y_1 Y_2 Y_3 , Y^( 6)_3≡[ Y_1^(6); Y_2^(6); Y_3^(6) ] =(Y_1^2+2 Y_2 Y_3) [ Y_1; Y_2; Y_3 ] , Y^( 6)_3'≡[ Y_1^'(6); Y_2^'(6); Y_3^'(6) ] =(Y_3^2+2 Y_1 Y_2 ) [ Y_3; Y_1; Y_2 ] . For weigh 8, there are nine modular forms as: Y^( 8)_1=(Y_1^2+2Y_2 Y_3)^2 , Y^( 8)_1'=(Y_1^2+2Y_2 Y_3)(Y_3^2+2Y_1 Y_2) , Y^( 8)_1"=(Y_3^2+2Y_1 Y_2)^2 , Y^( 8)_3≡[ Y_1^(8); Y_2^(8); Y_3^(8) ] =(Y_1^2+2 Y_2 Y_3) [ Y_1^2-Y_2 Y_3; Y_3^2 -Y_1 Y_2; Y_2^2-Y_1 Y_3 ] , Y^( 8)_3'≡[ Y_1^'(8); Y_2^'(8); Y_3^'(8) ] =(Y_3^2+2 Y_1 Y_2 ) [ Y_2^2-Y_1 Y_3; Y_1^2-Y_2 Y_3; Y_3^2 -Y_1 Y_2 ]. For weigh 10, there are eleven modular forms as: Y^( 10)_1=(Y_1^2+2Y_2 Y_3) (Y_1^3+ Y_2^3+Y_3^3 -3Y_1 Y_2 Y_3) Y^( 10)_1'=(Y_3^2+2Y_1 Y_2) (Y_1^3+ Y_2^3+Y_3^3 -3Y_1 Y_2 Y_3) , Y^( 10)_3, 1≡[ Y_1,1^(10); Y_2,1^(10); Y_3,1^(10) ] =(Y_1^2+2 Y_2 Y_3)^2 [ Y_1; Y_2; Y_3 ] , Y^( 10)_3, 2≡[ Y_1,2^(10); Y_2,2^(10); Y_3,2^(10) ] =(Y_3^2+2 Y_1 Y_2 )^2 [ Y_2; Y_3; Y_1 ] , Y^( 10)_3, 3≡[ Y_1,3^(10); Y_2,3^(10); Y_3,3^(10) ] =(Y_1^2+2 Y_2 Y_3 )(Y_3^2+2 Y_1 Y_2 ) [ Y_3; Y_1; Y_2 ] . At the fixed point τ=ω, they are given as: Y^( 2)_3 =Y_0 [ 1; ω; -1/2ω^2 ] , Y^( 4)_3= 3/2 Y_0^2 [ 1; -1/2ω; ω^2 ] , Y^( 4)_1=0 , Y^( 4)_1'=9/4 Y_0^2 ω , Y^( 6)_3=0 , Y^( 6)_3'= 9/8Y_0^3 [ -1; 2ω; 2ω^2 ] , Y^( 6)_1=27/8 Y_0^3 , Y^( 8)_3=0 , Y^( 8)_3'= 27/8Y_0^4 [ 1; ω; -1/2ω^2 ] , Y^( 8)_1=0, Y^( 8)_1'=0 , Y^( 8)_1”=9/4 ω Y_0^4 , Y^( 10)_3, 1=0 , Y^( 10)_3, 2= 81/16ω^2 Y_0^5 [ ω; -1/2ω^2; 1 ] , Y^( 10)_3, 3=0, Y^( 10_1=0, Y^( 10)_1'=243/32 ω Y_0^5 . § TENSOR PRODUCT OF A_4 GROUP We take the generators of A_4 group for the triplet as follows: S=1/3[ -1 2 2; 2 -1 2; 2 2 -1 ], T= [ 1 0 0; 0 ω 0; 0 0 ω^2 ], where ω=e^i2/3π for a triplet. In this base, the multiplication rule is [ a_1; a_2; a_3 ]_ 3⊗[ b_1; b_2; b_3 ]_ 3 = (a_1b_1+a_2b_3+a_3b_2 )_ 1⊕ (a_3b_3+a_1b_2+a_2b_1 )_ 1' ⊕ (a_2b_2+a_1b_3+a_3b_1 )_ 1” ⊕1/3[ 2a_1b_1-a_2b_3-a_3b_2; 2a_3b_3-a_1b_2-a_2b_1; 2a_2b_2-a_1b_3-a_3b_1 ]_ 3⊕1/2[ a_2b_3-a_3b_2; a_1b_2-a_2b_1; a_3b_1-a_1b_3 ]_ 3 , 1⊗ 1 = 1 , 1'⊗ 1' = 1” , 1”⊗ 1” = 1' , 1'⊗ 1” = 1 , where T( 1')=ω , T( 1”)=ω^2. More details are shown in the review  <cit.>. 99 Weinberg:1977hb S. Weinberg, Trans. New York Acad. Sci. 38 (1977), 185-201. Fritzsch:1977za H. Fritzsch, Phys. Lett. B 70 (1977), 436-440. Fritzsch:1977vd H. Fritzsch, Phys. Lett. B 73 (1978), 317-322. Fritzsch:1979zq H. Fritzsch, Nucl. Phys. B 155 (1979), 189-207. Georgi:1979df H. Georgi and C. Jarlskog, Phys. Lett. B 86 (1979), 297-300. Branco:1988iq G. C. Branco, L. Lavoura and F. Mota, Phys. Rev. D 39 (1989), 3443. Dimopoulos:1991za S. 
Dimopoulos, L. J. Hall and S. Raby, Phys. Rev. D 45 (1992), 4192-4200. Ramond:1993kv P. Ramond, R. G. Roberts and G. G. Ross, Nucl. Phys. B 406 (1993), 19-42. [arXiv:hep-ph/9303320 [hep-ph]]. Frampton:2002yf P. H. Frampton, S. L. Glashow and D. Marfatia, Phys. Lett. B 536 (2002), 79-82 [arXiv:hep-ph/0201008 [hep-ph]]. Froggatt:1978nt C. D. Froggatt and H. B. Nielsen, Nucl. Phys. B 147 (1979), 277-298 Pakvasa:1977in S. Pakvasa and H. Sugawara, Phys. Lett. 73B (1978) 61. Wilczek:1977uh F. Wilczek and A. Zee, Phys. Lett. 70B (1977) 418 Erratum: [Phys. Lett. 72B (1978) 504]. Fukugita:1998vn M. Fukugita, M. Tanimoto and T. Yanagida, Phys. Rev. D 57 (1998) 4429 [hep-ph/9709388]. Fukuda:1998mi Y. Fukuda et al. [Super-Kamiokande Collaboration], Phys. Rev. Lett. 81 (1998) 1562 [hep-ex/9807003]. Altarelli:2010gt G. Altarelli and F. Feruglio, Rev. Mod. Phys. 82 (2010) 2701 [arXiv:1002.0211 [hep-ph]]. Ishimori:2010au H. Ishimori, T. Kobayashi, H. Ohki, Y. Shimizu, H. Okada and M. Tanimoto, Prog. Theor. Phys. Suppl. 183 (2010) 1 [arXiv:1003.3552 [hep-th]]. Ishimori:2012zz H. Ishimori, T. Kobayashi, H. Ohki, H. Okada, Y. Shimizu and M. Tanimoto, Lect. Notes Phys. 858 (2012) 1, Springer. Kobayashi:2022moq T. Kobayashi, H. Ohki, H. Okada, Y. Shimizu and M. Tanimoto, Lect. Notes Phys. 995 (2022) 1, Springer doi:10.1007/978-3-662-64679-3. Hernandez:2012ra D. Hernandez and A. Y. Smirnov, Phys. Rev. D 86 (2012) 053014 [arXiv:1204.0445 [hep-ph]]. King:2013eh S. F. King and C. Luhn, Rept. Prog. Phys. 76 (2013) 056201 [arXiv:1301.1340 [hep-ph]]. King:2014nza S. F. King, A. Merle, S. Morisi, Y. Shimizu and M. Tanimoto, New J. Phys. 16, 045018 (2014) [arXiv:1402.4271 [hep-ph]]. Tanimoto:2015nfa M. Tanimoto, AIP Conf. Proc. 1666 (2015) 120002. King:2017guk S. F. King, Prog. Part. Nucl. Phys. 94 (2017) 217 [arXiv:1701.04413 [hep-ph]]. Petcov:2017ggy S. T. Petcov, Eur. Phys. J. C 78 (2018) no.9, 709 [arXiv:1711.10806 [hep-ph]]. Feruglio:2019ktm F. Feruglio and A. Romanino, arXiv:1912.06028 [hep-ph]. Buchmuller:1985jz W. Buchmuller and D. Wyler, Nucl. Phys. B 268 (1986), 621-653. Grzadkowski:2010es B. Grzadkowski, M. Iskrzynski, M. Misiak and J. Rosiek, JHEP 10 (2010), 085 [arXiv:1008.4884 [hep-ph]]. Alonso:2013hga R. Alonso, E. E. Jenkins, A. V. Manohar and M. Trott, JHEP 04 (2014), 159 [arXiv:1312.2014 [hep-ph]]. Faroughy:2020ina D. A. Faroughy, G. Isidori, F. Wilsch and K. Yamamoto, JHEP 08 (2020), 166 [arXiv:2005.05366 [hep-ph]]. Gerard:1982mm J. M. Gerard, Z. Phys. C 18 (1983), 145. Chivukula:1987py R. S. Chivukula and H. Georgi, Phys. Lett. B 188 (1987), 99-104. DAmbrosio:2002vsn G. D'Ambrosio, G. F. Giudice, G. Isidori and A. Strumia, Nucl. Phys. B 645 (2002), 155-187 [arXiv:hep-ph/0207036 [hep-ph]]. Barbieri:2011ci R. Barbieri, G. Isidori, J. Jones-Perez, P. Lodone and D. M. Straub, Eur. Phys. J. C 71 (2011), 1725 [arXiv:1105.2296 [hep-ph]]. Barbieri:2012uh R. Barbieri, D. Buttazzo, F. Sala and D. M. Straub, JHEP 07 (2012), 181 [arXiv:1203.4218 [hep-ph]]. Blankenburg:2012nx G. Blankenburg, G. Isidori and J. Jones-Perez, Eur. Phys. J. C 72 (2012), 2126 [arXiv:1204.0688 [hep-ph]]. Kobayashi:2004ya T. Kobayashi, S. Raby and R. J. Zhang, Nucl. Phys. B 704, 3-55 (2005) [arXiv:hep-ph/0409098 [hep-ph]]. Kobayashi:2006wq T. Kobayashi, H. P. Nilles, F. Ploger, S. Raby and M. Ratz, Nucl. Phys. B 768, 135-156 (2007) [arXiv:hep-ph/0611020 [hep-ph]]. Ko:2007dz P. Ko, T. Kobayashi, J. h. Park and S. Raby, Phys. Rev. D 76, 035005 (2007) [erratum: Phys. Rev. D 76, 059901 (2007)] [arXiv:0704.2807 [hep-ph]]. Abe:2009vi H. 
Abe, K. S. Choi, T. Kobayashi and H. Ohki, Nucl. Phys. B 820, 317-333 (2009) [arXiv:0904.2631 [hep-ph]]. Beye:2014nxa F. Beye, T. Kobayashi and S. Kuwakino, Phys. Lett. B 736, 433-437 (2014) [arXiv:1406.4660 [hep-th]]. Ferrara:1989qb S. Ferrara, D. Lust and S. Theisen, Phys. Lett. B 233 (1989), 147-152. Lerche:1989cs W. Lerche, D. Lust and N. P. Warner, Phys. Lett. B 231 (1989), 417-424. Lauer:1989ax J. Lauer, J. Mas and H. P. Nilles, Phys. Lett. B 226, 251-256 (1989) doi:10.1016/0370-2693(89)91190-8. Lauer:1990tm J. Lauer, J. Mas and H. P. Nilles, Nucl. Phys. B 351, 353 (1991). Kobayashi:2018rad T. Kobayashi, S. Nagamoto, S. Takada, S. Tamba and T. H. Tatsuishi, Phys. Rev. D 97, no. 11, 116002 (2018) [arXiv:1804.06644 [hep-th]]. Kobayashi:2018bff T. Kobayashi and S. Tamba, Phys. Rev. D 99, no.4, 046001 (2019) [arXiv:1811.11384 [hep-th]]. Ohki:2020bpo H. Ohki, S. Uemura and R. Watanabe, Phys. Rev. D 102, no.8, 085008 (2020) [arXiv:2003.04174 [hep-th]]. Kikuchi:2020frp S. Kikuchi, T. Kobayashi, S. Takada, T. H. Tatsuishi and H. Uchida, Phys. Rev. D 102, no.10, 105010 (2020) [arXiv:2005.12642 [hep-th]]. Kikuchi:2020nxn S. Kikuchi, T. Kobayashi, H. Otsuka, S. Takada and H. Uchida, JHEP 11, 101 (2020) [arXiv:2007.06188 [hep-th]]. Kikuchi:2021ogn S. Kikuchi, T. Kobayashi and H. Uchida, Phys. Rev. D 104, no.6, 065008 (2021) [arXiv:2101.00826 [hep-th]]. Almumin:2021fbk Y. Almumin, M. C. Chen, V. Knapp-Perez, S. Ramos-Sanchez, M. Ratz and S. Shukla, JHEP 05 (2021), 078 [arXiv:2102.11286 [hep-th]]. Strominger:1990pd A. Strominger, Commun. Math. Phys. 133 (1990), 163-180. Candelas:1990pi P. Candelas and X. de la Ossa, Nucl. Phys. B 355 (1991), 455-481. Ishiguro:2020nuf K. Ishiguro, T. Kobayashi and H. Otsuka, Nucl. Phys. B 973, 115598 (2021) [arXiv:2010.10782 [hep-th]]. Ishiguro:2021ccl K. Ishiguro, T. Kobayashi and H. Otsuka, JHEP 01, 020 (2022) [arXiv:2107.00487 [hep-th]]. deAdelhartToorop:2011re R. de Adelhart Toorop, F. Feruglio and C. Hagedorn, Nucl. Phys. B 858, 437 (2012) [arXiv:1112.1340 [hep-ph]]. Feruglio:2017spp F. Feruglio, in From My Vast Repertoire ...: Guido Altarelli's Legacy, A. Levy, S. Forte, Stefano, and G. Ridolfi, eds., pp.227–266, 2019, arXiv:1706.08749 [hep-ph]. Kobayashi:2018vbk T. Kobayashi, K. Tanaka and T. H. Tatsuishi, Phys. Rev. D 98 (2018) no.1, 016004 [arXiv:1803.10391 [hep-ph]]. Penedo:2018nmg J. T. Penedo and S. T. Petcov, Nucl. Phys. B 939 (2019) 292 [arXiv:1806.11040 [hep-ph]]. Novichkov:2018nkm P. P. Novichkov, J. T. Penedo, S. T. Petcov and A. V. Titov, JHEP 1904, 174 (2019) [arXiv:1812.02158 [hep-ph]]. Ding:2019xna G. J. Ding, S. F. King and X. G. Liu, Phys. Rev. D 100 (2019) no.11, 115005 [arXiv:1903.12588 [hep-ph]]. Green:1987mn M. B. Green, J. H. Schwarz and E. Witten, Cambridge, Uk: Univ. Pr. ( 1987) 596 P. ( Cambridge Monographs On Mathematical Physics) Strominger:1985it A. Strominger and E. Witten, Commun. Math. Phys. 101, 341 (1985). Dine:1992ya M. Dine, R. G. Leigh and D. A. MacIntire, Phys. Rev. Lett. 69, 2030 (1992) [hep-th/9205011]. Choi:1992xp K. w. Choi, D. B. Kaplan and A. E. Nelson, Nucl. Phys. B 391, 515 (1993) [hep-ph/9205202]. Lim:1990bp C. S. Lim, Phys. Lett. B 256, 233 (1991). Kobayashi:1994ks T. Kobayashi and C. S. Lim, Phys. Lett. B 343, 122 (1995) [hep-th/9410023]. Acharya:1995ag B. S. Acharya, D. Bailin, A. Love, W. A. Sabra and S. Thomas, Phys. Lett. B 357, 387 (1995) Erratum: [Phys. Lett. B 407, 451 (1997)] [hep-th/9506143]. Dent:2001cc T. Dent, Phys. Rev. D 64, 056005 (2001) [hep-ph/0105285]. Khalil:2001dr S. Khalil, O. Lebedev and S. 
Morris, Phys. Rev. D 65, 115014 (2002) [hep-th/0110063]. Giedt:2002ns J. Giedt, Mod. Phys. Lett. A 17, 1465 (2002) [hep-ph/0204017]. Kobayashi:2020uaj T. Kobayashi and H. Otsuka, Phys. Rev. D 102, no.2, 026004 (2020) [arXiv:2004.04518 [hep-th]]. Baur:2019kwi A. Baur, H. P. Nilles, A. Trautner and P. K. S. Vaudrevange, Phys. Lett. B 795, 7 (2019) [arXiv:1901.03251 [hep-th]]; Novichkov:2019sqv P. P. Novichkov, J. T. Penedo, S. T. Petcov and A. V. Titov, JHEP 1907, 165 (2019) [arXiv:1905.11970 [hep-ph]]. Baur:2019iai A. Baur, H. P. Nilles, A. Trautner and P. K. S. Vaudrevange, Nucl. Phys. B 947 (2019), 114737 [arXiv:1908.00805 [hep-th]]. Baur:2020jwc A. Baur, M. Kade, H. P. Nilles, S. Ramos-Sanchez and P. K. S. Vaudrevange, JHEP 02 (2021), 018 [arXiv:2008.07534 [hep-th]]. Nilles:2020gvu H. P. Nilles, S. Ramos–Sánchez and P. K. S. Vaudrevange, Nucl. Phys. B 966 (2021), 115367 [arXiv:2010.13798 [hep-th]]. Bonisch:2022slo K. Bönisch, M. Elmi, A. K. Kashani-Poor and A. Klemm, [arXiv:2204.06506 [hep-th]]. Kobayashi:2019uyt T. Kobayashi, Y. Shimizu, K. Takagi, M. Tanimoto, T. H. Tatsuishi and H. Uchida, Phys. Rev. D 101, no.5, 055046 (2020) [arXiv:1910.11553 [hep-ph]]. Kikuchi:2022geu S. Kikuchi, T. Kobayashi, M. Tanimoto and H. Uchida, PTEP 2022, no.11, 113B07 (2022) [arXiv:2206.08538 [hep-ph]]. Kobayashi:2021uam T. Kobayashi and H. Otsuka, Eur. Phys. J. C 82, no.1, 25 (2022) [arXiv:2108.02700 [hep-ph]]. Kobayashi:2021pav T. Kobayashi, H. Otsuka, M. Tanimoto and K. Yamamoto, Phys. Rev. D 105, no.5, 055022 (2022) [arXiv:2112.00493 [hep-ph]]. Kobayashi:2022jvy T. Kobayashi, H. Otsuka, M. Tanimoto and K. Yamamoto, JHEP 08, 013 (2022) [arXiv:2204.12325 [hep-ph]]. Nomura:2019jxj T. Nomura and H. Okada, Phys. Lett. B 797, 134799 (2019) [arXiv:1904.03937 [hep-ph]]. Kobayashi:2021ajl T. Kobayashi, H. Okada and Y. Orikasa, Phys. Dark Univ. 37, 101080 (2022) [arXiv:2111.05674 [hep-ph]]. Kobayashi:2021jqu T. Kobayashi, T. Shimomura and M. Tanimoto, Phys. Lett. B 819, 136452 (2021) [arXiv:2102.10425 [hep-ph]]. Tanimoto:2021ehw M. Tanimoto and K. Yamamoto, JHEP 10, 183 (2021) [arXiv:2106.10919 [hep-ph]]. Kikuchi:2022pkd S. Kikuchi, T. Kobayashi, K. Nasu, H. Otsuka, S. Takada and H. Uchida, PTEP 2022, no.12, 123B02 (2022) [arXiv:2203.14667 [hep-ph]]. Kobayashi:2022sov T. Kobayashi, S. Nishimura, H. Otsuka, M. Tanimoto and K. Yamamoto, [arXiv:2207.14014 [hep-ph]]. Feruglio:2023uof F. Feruglio, A. Strumia and A. Titov, [arXiv:2305.08908 [hep-ph]]. Gunning:1962 R. C. Gunning, Lectures on Modular Forms (Princeton University Press, Princeton, NJ, 1962). Schoeneberg:1974 B. Schoeneberg, Elliptic Modular Functions (Springer-Verlag, 1974). Koblitz:1984 N. Koblitz, Introduction to Elliptic Curves and Modular Forms (Springer-Verlag, 1984). Bruinier:2008 J.H. Bruinier, G.V.D. Geer, G. Harder, and D. Zagier, The 1-2-3 of Modular Forms (Springer, 2008). Ferrara:1989bc S. Ferrara, D. Lust, A. D. Shapere and S. Theisen, Phys. Lett. B 225, 363 (1989). Chen:2019ewa M. Chen, S. Ramos-Sánchez and M. Ratz, Phys. Lett. B 801 (2020), 135153 [arXiv:1909.06910 [hep-ph]]. Kobayashi:2018scp T. Kobayashi, N. Omoto, Y. Shimizu, K. Takagi, M. Tanimoto and T. H. Tatsuishi, JHEP 11 (2018), 196 [arXiv:1808.03012 [hep-ph]]. Esteban:2020cvm I. Esteban, M. C. Gonzalez-Garcia, M. Maltoni, T. Schwetz and A. Zhou, JHEP 09 (2020), 178 [arXiv:2007.14792 [hep-ph]]. Vagnozzi:2017ovm S. Vagnozzi, E. Giusarma, O. Mena, K. Freese, M. Gerbino, S. Ho and M. Lattanzi, Phys. Rev. D 96 (2017) no.12, 123503 [arXiv:1701.08172 [astro-ph.CO]]. Aghanim:2018eyx N. 
Aghanim et al. [Planck], Astron. Astrophys. 641 (2020), A6 [arXiv:1807.06209 [astro-ph.CO]]. ParticleDataGroup:2022pth R. L. Workman et al. [Particle Data Group], PTEP 2022 (2022), 083C01. Antusch:2013jca S. Antusch and V. Maurer, JHEP 1311 (2013) 115 [arXiv:1306.6879 [hep-ph]]. Bjorkeroth:2015ora F. Björkeroth, F. J. de Anda, I. de Medeiros Varzielas and S. F. King, JHEP 1506 (2015) 141 [arXiv:1503.03306 [hep-ph]]. Criado:2018thu J. C. Criado and F. Feruglio, SciPost Phys. 5 (2018) no.5, 042 [arXiv:1807.01125 [hep-ph]]. Ding:2019zxk G. J. Ding, S. F. King and X. G. Liu, JHEP 1909 (2019) 074 [arXiv:1907.11714 [hep-ph]]. Okada:2020brs H. Okada and M. Tanimoto, JHEP 03 (2021), 010 [arXiv:2012.01688 [hep-ph]]. Yao:2020qyy C. Y. Yao, J. N. Lu and G. J. Ding, JHEP 05 (2021), 102 [arXiv:2012.13390 [hep-ph]]. Novichkov:2018ovf P. P. Novichkov, J. T. Penedo, S. T. Petcov and A. V. Titov, JHEP 1904 (2019) 005 [arXiv:1811.04933 [hep-ph]]. Kobayashi:2019mna T. Kobayashi, Y. Shimizu, K. Takagi, M. Tanimoto and T. H. Tatsuishi, JHEP 02 (2020), 097 [arXiv:1907.09141 [hep-ph]]. Wang:2019ovr X. Wang and S. Zhou, JHEP 05 (2020), 017 [arXiv:1910.09473 [hep-ph]]. Gehrlein:2020jnr J. Gehrlein and M. Spinrath, JHEP 03 (2021), 177 [arXiv:2012.04131 [hep-ph]]. Kobayashi:2019rzp T. Kobayashi, Y. Shimizu, K. Takagi, M. Tanimoto and T. H. Tatsuishi, PTEP 2020, no.5, 053B05 (2020) [arXiv:1906.10341 [hep-ph]]. Meloni:2023aru D. Meloni and M. Parriciatu, [arXiv:2306.09028 [hep-ph]]. Liu:2019khw X. G. Liu and G. J. Ding, JHEP 1908 (2019) 134 [arXiv:1907.01488 [hep-ph]]. Novichkov:2020eep P. P. Novichkov, J. T. Penedo and S. T. Petcov, Nucl. Phys. B 963 (2021), 115301 [arXiv:2006.03058 [hep-ph]]. Liu:2020akv X. G. Liu, C. Y. Yao and G. J. Ding, Phys. Rev. D 103 (2021) no.5, 056013 [arXiv:2006.10722 [hep-ph]]. Wang:2020lxk X. Wang, B. Yu and S. Zhou, Phys. Rev. D 103 (2021) no.7, 076005 [arXiv:2010.10159 [hep-ph]]. Yao:2020zml C. Y. Yao, X. G. Liu and G. J. Ding, [arXiv:2011.03501 [hep-ph]]. Okada:2022kee H. Okada and Y. Orikasa, Chin. Phys. C 46 (2022) no.12, 123108 [arXiv:2206.12629 [hep-ph]]. Ding:2022nzn G. J. Ding, X. G. Liu and C. Y. Yao, [arXiv:2211.04546 [hep-ph]]. Ding:2022aoe G. J. Ding, F. R. Joaquim and J. N. Lu, [arXiv:2211.08136 [hep-ph]]. Benes:2022bbg P. Beneš, H. Okada and Y. Orikasa, [arXiv:2212.07245 [hep-ph]]. Ding:2020msi G. J. Ding, S. F. King, C. C. Li and Y. L. Zhou, JHEP 08 (2020), 164 [arXiv:2004.12662 [hep-ph]]. Li:2021buv C. C. Li, X. G. Liu and G. J. Ding, JHEP 10 (2021), 238 [arXiv:2108.02181 [hep-ph]]. Okada:2018yrn H. Okada and M. Tanimoto, Phys. Lett. B 791 (2019) 54 [arXiv:1812.09677 [hep-ph]]. Okada:2019uoy H. Okada and M. Tanimoto, Eur. Phys. J. C 81 (2021) no.1, 52 [arXiv:1905.13421 [hep-ph]]. deAnda:2018ecu F. J. de Anda, S. F. King and E. Perdomo, Phys. Rev. D 101 (2020) no.1, 015028 [arXiv:1812.05620 [hep-ph]]. King:2021fhl S. F. King and Y. L. Zhou, JHEP 04 (2021), 291 [arXiv:2103.02633 [hep-ph]]. Chen:2021zty P. Chen, G. J. Ding and S. F. King, JHEP 04 (2021), 239 [arXiv:2101.12724 [hep-ph]]. Du:2020ylx X. Du and F. Wang, JHEP 02, 221 (2021) [arXiv:2012.01397 [hep-ph]]. Ding:2021zbg G. J. Ding, S. F. King and C. Y. Yao, [arXiv:2103.16311 [hep-ph]]. Ding:2021eva G. J. Ding, S. F. King and J. N. Lu, JHEP 11 (2021), 007 [arXiv:2108.09655 [hep-ph]]. Ding:2022bzs G. J. Ding, S. F. King, J. N. Lu and B. Y. Qu, JHEP 10 (2022), 071 [arXiv:2206.14675 [hep-ph]]. Asaka:2019vev T. Asaka, Y. Heo, T. H. Tatsuishi and T. Yoshida, JHEP 2001 (2020) 144 [arXiv:1909.06520 [hep-ph]]. 
Okada:2021qdf H. Okada, Y. Shimizu, M. Tanimoto and T. Yoshida, JHEP 07 (2021), 184 [arXiv:2105.14292 [hep-ph]]. Qu:2021jdy B. Y. Qu, X. G. Liu, P. T. Chen and G. J. Ding, Phys. Rev. D 104 (2021) no.7, 076001 [arXiv:2106.11659 [hep-ph]]. Novichkov:2018yse P. P. Novichkov, S. T. Petcov and M. Tanimoto, Phys. Lett. B 793 (2019) 247 [arXiv:1812.11289 [hep-ph]]. Gui-JunDing:2019wap G. J. Ding, S. F. King, X. G. Liu and J. N. Lu, JHEP 1912 (2019) 030 [arXiv:1910.03460 [hep-ph]]. Okada:2020ukr H. Okada and M. Tanimoto, Phys. Rev. D 103 (2021) no.1, 015005 [arXiv:2009.14242 [hep-ph]]. Novichkov:2021evw P. P. Novichkov, J. T. Penedo and S. T. Petcov, JHEP 04, 206 (2021) [arXiv:2102.07488 [hep-ph]]. Feruglio:2021dte F. Feruglio, V. Gherardi, A. Romanino and A. Titov, JHEP 05 (2021), 242 [arXiv:2101.08718 [hep-ph]]. Feruglio:2023bav F. Feruglio, Phys. Rev. Lett. 130 (2023) no.10, 101801 doi:10.1103/PhysRevLett. 130.101801. Petcov:2022fjf S. T. Petcov and M. Tanimoto, [arXiv:2212.13336 [hep-ph]]. Petcov:2023vws S. T. Petcov and M. Tanimoto, [arXiv:2306.05730 [hep-ph]]. Kikuchi:2023cap S. Kikuchi, T. Kobayashi, K. Nasu, S. Takada and H. Uchida, Phys. Rev. D 107, no.5, 055014 (2023) [arXiv:2301.03737 [hep-ph]]. Abe:2023ilq Y. Abe, T. Higaki, J. Kawamurab and T. Kobayashi, [arXiv:2301.07439 [hep-ph]]. Kikuchi:2023jap S. Kikuchi, T. Kobayashi, K. Nasu, S. Takada and H. Uchida, [arXiv:2302.03326 [hep-ph]]. Abe:2023qmr Y. Abe, T. Higaki, J. Kawamura and T. Kobayashi, [arXiv:2302.11183 [hep-ph]]. Abe:2023dvr Y. Abe, T. Higaki, J. Kawamura and T. Kobayashi, [arXiv:2307.01419 [hep-ph]]. Kobayashi:2018wkl T. Kobayashi, Y. Shimizu, K. Takagi, M. Tanimoto, T. H. Tatsuishi and H. Uchida, Phys. Lett. B 794 (2019) 114 [arXiv:1812.11072 [hep-ph]]. Kobayashi:2019xvz T. Kobayashi, Y. Shimizu, K. Takagi, M. Tanimoto and T. H. Tatsuishi, Phys. Rev. D 100 (2019) no.11, 115045 [erratum: Phys. Rev. D 101 (2020) no.3, 039904] [arXiv:1909.05139 [hep-ph]]. Asaka:2020tmo T. Asaka, Y. Heo and T. Yoshida, Phys. Lett. B 811 (2020), 135956 [arXiv:2009.12120 [hep-ph]]. Behera:2020sfe M. K. Behera, S. Mishra, S. Singirala and R. Mohanta, Phys. Dark Univ. 36 (2022), 101027 [arXiv:2007.00545 [hep-ph]]. Mishra:2020gxg S. Mishra, [arXiv:2008.02095 [hep-ph]]. Okada:2019xqk H. Okada and Y. Orikasa, Phys. Rev. D 100, no.11, 115037 (2019) [arXiv:1907.04716 [hep-ph]]. Kariyazono:2019ehj Y. Kariyazono, T. Kobayashi, S. Takada, S. Tamba and H. Uchida, Phys. Rev. D 100, no.4, 045014 (2019) [arXiv:1904.07546 [hep-th]]. Nomura:2019yft T. Nomura and H. Okada, Nucl. Phys. B 966 (2021), 115372 [arXiv:1906.03927 [hep-ph]]. Okada:2019lzv H. Okada and Y. Orikasa, arXiv:1908.08409 [hep-ph]. Nomura:2019lnr T. Nomura, H. Okada and O. Popov, Phys. Lett. B 803 (2020) 135294 [arXiv:1908.07457 [hep-ph]]. Criado:2019tzk J. C. Criado, F. Feruglio and S. J. D. King, JHEP 2002 (2020) 001 [arXiv:1908.11867 [hep-ph]]. King:2019vhv S. F. King and Y. L. Zhou, Phys. Rev. D 101 (2020) no.1, 015001 [arXiv:1908.02770 [hep-ph]]. Zhang:2019ngf D. Zhang, Nucl. Phys. B 952 (2020) 114935 [arXiv:1910.07869 [hep-ph]]. Lu:2019vgm J. N. Lu, X. G. Liu and G. J. Ding, Phys. Rev. D 101 (2020) no.11, 115020 [arXiv:1912.07573 [hep-ph]]. Nomura:2019xsb T. Nomura, H. Okada and S. Patra, Nucl. Phys. B 967 (2021), 115395 [arXiv:1912.00379 [hep-ph]]. Kobayashi:2019gtp T. Kobayashi, T. Nomura and T. Shimomura, Phys. Rev. D 102 (2020) no.3, 035019 [arXiv:1912.00637 [hep-ph]]. Wang:2019xbo X. Wang, Nucl. Phys. B 957 (2020), 115105 [arXiv:1912.13284 [hep-ph]]. King:2020qaj S. J. D. King and S. 
F. King, JHEP 09 (2020), 043 [arXiv:2002.00969 [hep-ph]]. Abbas:2020qzc M. Abbas, Phys. Rev. D 103 (2021) no.5, 056016 [arXiv:2002.01929 [hep-ph]]. Okada:2020oxh H. Okada and Y. Shoji, Phys. Dark Univ. 31 (2021), 100742 [arXiv:2003.11396 [hep-ph]]. Okada:2020dmb H. Okada and Y. Shoji, Nucl. Phys. B 961 (2020), 115216 [arXiv:2003.13219 [hep-ph]]. Ding:2020yen G. J. Ding and F. Feruglio, JHEP 06 (2020), 134 [arXiv:2003.13448 [hep-ph]]. Nomura:2020opk T. Nomura and H. Okada, JCAP 09 (2022), 049 doi:10.1088/1475-7516/2022/09/049 [arXiv:2007.04801 [hep-ph]]. Nomura:2020cog T. Nomura and H. Okada, arXiv:2007.15459 [hep-ph]. Okada:2020rjb H. Okada and M. Tanimoto, Phys. Dark Univ. 40 (2023), 101204 [arXiv:2005.00775 [hep-ph]]. deMedeirosVarzielas:2020kji I. de Medeiros Varzielas, M. Levy and Y. L. Zhou, JHEP 11 (2020), 085 [arXiv:2008.05329 [hep-ph]]. Nagao:2020azf K. I. Nagao and H. Okada, JCAP 05 (2021), 063 [arXiv:2008.13686 [hep-ph]]. Nagao:2020snm K. I. Nagao and H. Okada, Nucl. Phys. B 980 (2022), 115841 [arXiv:2010.03348 [hep-ph]]. Abbas:2020vuy M. Abbas, Phys. Atom. Nucl. 83 (2020) no.5, 764-769. Kuranaga:2021ujd H. Kuranaga, H. Ohki and S. Uemura, JHEP 07 (2021), 068 [arXiv:2105.06237 [hep-ph]]. Okada:2021aoi H. Okada and Y. h. Qi, [arXiv:2109.13779 [hep-ph]]. Dasgupta:2021ggp A. Dasgupta, T. Nomura, H. Okada, O. Popov and M. Tanimoto, [arXiv:2111.06898 [hep-ph]]. Nomura:2021ewm T. Nomura and H. Okada, Chin. Phys. C 46 (2022) no.5, 053101 [arXiv:2109.04157 [hep-ph]]. Nagao:2021rio K. I. Nagao and H. Okada, Phys. Dark Univ. 36 (2022), 101039 [arXiv:2108.09984 [hep-ph]]. Nomura:2021yjb T. Nomura, H. Okada and Y. Orikasa, Eur. Phys. J. C 81 (2021) no.10, 947 [arXiv:2106.12375 [hep-ph]]. Nomura:2021aep T. Nomura and H. Okada, Phys. Rev. D 105 (2022) no.7, 075010 [arXiv:2106.10451 [hep-ph]]. Zhang:2021olk X. Zhang and S. Zhou, JCAP 09 (2021), 043 [arXiv:2106.03433 [hep-ph]]. Wang:2021mkw X. Wang and S. Zhou, JHEP 07 (2021), 093 [arXiv:2102.04358 [hep-ph]]. Wang:2020dbp X. Wang, Nucl. Phys. B 962 (2021), 115247 [arXiv:2007.05913 [hep-ph]]. Ko:2021lpx P. Ko, T. Nomura and H. Okada, JHEP 05 (2022), 098 [arXiv:2110.10513 [hep-ph]]. Nomura:2021pld T. Nomura, H. Okada and Y. h. Qi, [arXiv:2111.10944 [hep-ph]]. Nomura:2022hxs T. Nomura and H. Okada, [arXiv:2201.10244 [hep-ph]]. Otsuka:2022rak H. Otsuka and H. Okada, [arXiv:2202.10089 [hep-ph]]. Ding:2021iqp G. J. Ding, F. Feruglio and X. G. Liu, SciPost Phys. 10 (2021) no.6, 133 [arXiv:2102.06716 [hep-ph]]. Charalampous:2021gmf G. Charalampous, S. F. King, G. K. Leontaris and Y. L. Zhou, Phys. Rev. D 104 (2021) no.11, 115015 [arXiv:2109.11379 [hep-ph]]. Liu:2021gwa X. G. Liu and G. J. Ding, JHEP 03 (2022), 123 [arXiv:2112.14761 [hep-ph]]. Novichkov:2022wvg P. P. Novichkov, J. T. Penedo and S. T. Petcov, JHEP 03 (2022), 149 [arXiv:2201.02020 [hep-ph]]. Kikuchi:2022txy S. Kikuchi, T. Kobayashi, H. Otsuka, M. Tanimoto, H. Uchida and K. Yamamoto, Phys. Rev. D 106 (2022) no.3, 035001 [arXiv:2201.04505 [hep-ph]]. Behera:2020lpd M. K. Behera, S. Singirala, S. Mishra and R. Mohanta, J. Phys. G 49 (2022) no.3, 035002 [arXiv:2009.01806 [hep-ph]]. Ahn:2022ufs Y. H. Ahn, S. K. Kang, R. Ramos and M. Tanimoto, Phys. Rev. D 106 (2022) no.9, 095002 [arXiv:2205.02796 [hep-ph]]. Gunji:2022xig Y. Gunji, K. Ishiwata and T. Yoshida, JHEP 11 (2022), 002 [arXiv:2208.10086 [hep-ph]]. Kim:2023jto J. Kim and H. Okada, [arXiv:2302.09747 [hep-ph]]. Nomura:2023dgk T. Nomura, H. Okada and Y. Shoji, PTEP 2023 (2023) no.2, 023B04. Kang:2022psa D. W. Kang, J. Kim, T. Nomura and H. 
Okada, JHEP 07 (2022), 050 [arXiv:2205.08269 [hep-ph]]. Abe:2023ylh Y. Abe, T. Higaki, F. Kaneko, T. Kobayashi and H. Otsuka, [arXiv:2303.02947 [hep-ph]]. deMedeirosVarzielas:2022fbw I. de Medeiros Varzielas, S. F. King and M. Levy, JHEP 02 (2023), 143 [arXiv:2211.00654 [hep-ph]]. Ding:2023ynd G. J. Ding, S. F. King, C. C. Li, X. G. Liu and J. N. Lu, [arXiv:2303.02071 [hep-ph]]. deAnda:2023udh F. J. de Anda and S. F. King, [arXiv:2304.05958 [hep-ph]]. Nomura:2023kwz T. Nomura and H. Okada, [arXiv:2304.13361 [hep-ph]]. Ahn:2023iqa Y. H. Ahn and S. K. Kang, [arXiv:2306.14467 [hep-ph]]. Kobayashi:2016ovu T. Kobayashi, S. Nagamoto and S. Uemura, PTEP 2017, no.2, 023B02 (2017) [arXiv:1608.06129 [hep-th]]. deMedeirosVarzielas:2019cyj I. de Medeiros Varzielas, S. F. King and Y. L. Zhou, Phys. Rev. D 101 (2020) no.5, 055033 [arXiv:1906.02208 [hep-ph]]. Ishiguro:2020tmo K. Ishiguro, T. Kobayashi and H. Otsuka, JHEP 03, 161 (2021) [arXiv:2011.09154 [hep-ph]]. Abe:2020vmv H. Abe, T. Kobayashi, S. Uemura and J. Yamamoto, Phys. Rev. D 102, no.4, 045005 (2020) [arXiv:2003.03512 [hep-th]]. Kikuchi:2021yog S. Kikuchi, T. Kobayashi, Y. Ogawa and H. Uchida, PTEP 2022, no.3, 033B10 (2022) [arXiv:2112.01680 [hep-ph]]. Nilles:2020nnc H. P. Nilles, S. Ramos-Śanchez and P. K. S. Vaudrevange, JHEP 02 (2020), 045 [arXiv:2001.01736 [hep-ph]]. Nilles:2020kgo H. P. Nilles, S. Ramos-Sánchez and P. K. S. Vaudrevange, Nucl. Phys. B 957 (2020), 115098 [arXiv:2004.05200 [hep-ph]]. Nilles:2020tdp H. P. Nilles, S. Ramos–Sánchez and P. K. S. Vaudrevange, Phys. Lett. B 808 (2020), 135615 [arXiv:2006.03059 [hep-th]]. Baur:2020yjl A. Baur, M. Kade, H. P. Nilles, S. Ramos-Sanchez and P. K. S. Vaudrevange, Phys. Lett. B 816 (2021), 136176 [arXiv:2012.09586 [hep-th]]. Ding:2020zxw G. J. Ding, F. Feruglio and X. G. Liu, JHEP 01 (2021), 037 [arXiv:2010.07952 [hep-th]]. Baur:2021mtl A. Baur, M. Kade, H. P. Nilles, S. Ramos-Sanchez and P. K. S. Vaudrevange, JHEP 06 (2021), 110 [arXiv:2104.03981 [hep-th]]. Nilles:2021ouu H. P. Nilles, S. Ramos-Sanchez and P. K. S. Vaudrevange, [arXiv:2105.02984 [hep-th]]. Nilles:2021glx H. P. Nilles, S. Ramos-Sanchez, A. Trautner and P. K. S. Vaudrevange, Nucl. Phys. B 971 (2021), 115534 [arXiv:2105.08078 [hep-th]]. Baur:2021bly A. Baur, H. P. Nilles, S. Ramos-Sanchez, A. Trautner and P. K. S. Vaudrevange, Phys. Rev. D 105 (2022) no.5, 055018 [arXiv:2112.06940 [hep-th]]. Baur:2022hma A. Baur, H. P. Nilles, S. Ramos-Sanchez, A. Trautner and P. K. S. Vaudrevange, JHEP 09 (2022), 224 [arXiv:2207.10677 [hep-ph]]. Kikuchi:2023clx S. Kikuchi, T. Kobayashi, K. Nasu, H. Otsuka, S. Takada and H. Uchida, JHEP 04, 003 (2023) [arXiv:2301.10356 [hep-th]]. Feruglio:2023mii F. Feruglio, JHEP 03 (2023), 236 [arXiv:2302.11580 [hep-ph]]. Knapp-Perez:2023nty V. Knapp-Perez, X. G. Liu, H. P. Nilles, S. Ramos-Sanchez and M. Ratz, [arXiv:2304.14437 [hep-th]]. Abe:2008fi H. Abe, T. Kobayashi and H. Ohki, JHEP 09 (2008), 043 [arXiv:0806.4748 [hep-th]]. Abe:2013bca T. H. Abe, Y. Fujimoto, T. Kobayashi, T. Miura, K. Nishiwaki and M. Sakamoto, JHEP 1401, 065 (2014) [arXiv:1309.4925 [hep-th]]. Abe:2017gye H. Abe, T. Kobayashi, K. Sumita and S. Uemura, Phys. Rev. D 96, no.2, 026019 (2017) [arXiv:1703.03402 [hep-th]]. Abe:2012fj H. Abe, T. Kobayashi, H. Ohki, A. Oikawa and K. Sumita, Nucl. Phys. B 870, 30-54 (2013) [arXiv:1211.4317 [hep-ph]]. Abe:2014vza H. Abe, T. Kobayashi, K. Sumita and Y. Tatsuta, Phys. Rev. D 90, no.10, 105006 (2014) [arXiv:1405.5012 [hep-ph]]. Fujimoto:2016zjs Y. Fujimoto, T. Kobayashi, K. Nishiwaki, M. 
Sakamoto and Y. Tatsuta, Phys. Rev. D 94, no.3, 035031 (2016) [arXiv:1605.00140 [hep-ph]]. Buchmuller:2017vho W. Buchmuller and J. Schweizer, Phys. Rev. D 95, no.7, 075024 (2017) [arXiv:1701.06935 [hep-ph]]. Buchmuller:2017vut W. Buchmuller and K. M. Patel, Phys. Rev. D 97, no.7, 075019 (2018) [arXiv:1712.06862 [hep-ph]]. Hoshiya:2022qvr K. Hoshiya, S. Kikuchi, T. Kobayashi and H. Uchida, Phys. Rev. D 106 (2022) no.11, 115003 [arXiv:2209.07249 [hep-ph]]. Kikuchi:2022svo S. Kikuchi, T. Kobayashi, M. Tanimoto and H. Uchida, [arXiv:2207.04609 [hep-ph]]. Fukugita:2003tn M. Fukugita, M. Tanimoto and T. Yanagida, Phys. Lett. B 562 (2003), 273-278 doi:10.1016/S0370-2693(03)00568-9 [arXiv:hep-ph/0303177 [hep-ph]]. Xing:2004xu Z. z. Xing and S. Zhou, Phys. Lett. B 593 (2004), 156-164 [arXiv:hep-ph/0403261 [hep-ph]]. Obara:2006ny M. Obara and Z. z. Xing, Phys. Lett. B 644 (2007), 136-146 doi:10.1016/j.physletb. 2006.11.010 [arXiv:hep-ph/0608280 [hep-ph]]. Fukugita:2012jr M. Fukugita, Y. Shimizu, M. Tanimoto and T. T. Yanagida, Phys. Lett. B 716 (2012), 294-297 doi:10.1016/j.physletb.2012.06.049 [arXiv:1204.2389 [hep-ph]]. Fukugita:2016hzf M. Fukugita, Y. Kaneta, Y. Shimizu, M. Tanimoto and T. T. Yanagida, Phys. Lett. B 764 (2017), 163-166 doi:10.1016/j.physletb.2016.11.024 [arXiv:1609.01864 [hep-ph]]. Fritzsch:2012rg H. Fritzsch and S. Zhou, Phys. Lett. B 718 (2013), 1457-1464 [arXiv:1212.0411 [hep-ph]]. Gukov:1999ya S. Gukov, C. Vafa and E. Witten, Nucl. Phys. B 584, 69-108 (2000) [erratum: Nucl. Phys. B 608, 477-478 (2001)] [arXiv:hep-th/9906070 [hep-th]]. Ishiguro:2022pde K. Ishiguro, H. Okada and H. Otsuka, JHEP 09 (2022), 072 [arXiv:2206.04313 [hep-ph]]. Cvetic:2003ch M. Cvetic and I. Papadimitriou, Phys. Rev. D 68, 046001 (2003) [erratum: Phys. Rev. D 70, 029903 (2004)] [arXiv:hep-th/0303083 [hep-th]]. Abel:2003vv S. A. Abel and A. W. Owen, Nucl. Phys. B 663, 197-214 (2003) [arXiv:hep-th/0303124 [hep-th]]. Abel:2003yx S. A. Abel and A. W. Owen, Nucl. Phys. B 682, 183-216 (2004) [arXiv:hep-th/0310257 [hep-th]]. Cremades:2004wa D. Cremades, L. E. Ibanez and F. Marchesano, JHEP 05 (2004), 079 [arXiv:hep-th/0404229 [hep-th]]. Abe:2009dr H. Abe, K. S. Choi, T. Kobayashi and H. Ohki, JHEP 06, 080 (2009) [arXiv:0903.3800 [hep-th]]. Hamidi:1986vh S. Hamidi and C. Vafa, Nucl. Phys. B 279, 465-513 (1987). Dixon:1986qv L. J. Dixon, D. Friedan, E. J. Martinec and S. H. Shenker, Nucl. Phys. B 282, 13-73 (1987). Atick:1987kd J. J. Atick, L. J. Dixon, P. A. Griffin and D. Nemeschansky, Nucl. Phys. B 298, 1-35 (1988). Burwick:1990tu T. T. Burwick, R. K. Kaiser and H. F. Muller, Nucl. Phys. B 355, 689-711 (1991). Stieberger:1992bj S. Stieberger, D. Jungnickel, J. Lauer and M. Spalinski, Mod. Phys. Lett. A 7, 3059-3070 (1992) [arXiv:hep-th/9204037 [hep-th]]. Choi:2007nb K. S. Choi and T. Kobayashi, Nucl. Phys. B 797, 295-321 (2008) [arXiv:0711.4894 [hep-th]].
http://arxiv.org/abs/2307.01522v1
20230704070330
Immersive Media and Massive Twinning: Advancing Towards the Metaverse
[ "Wassim Hamidouche", "Lina Bariah", "Merouane Debbah" ]
cs.MM
[ "cs.MM" ]
Immersive Media and Massive Twinning: Advancing Towards the Metaverse Wassim Hamidouche, Lina Bariah, and Mérouane Debbah ============================================================================================= The advent of the Metaverse concept has further expedited the evolution of haptic, tactile internet, and multimedia applications with their VR/AR/XR services, and therefore, fully-immersive sensing is most likely to define the next generation of wireless networks as a key to realize the speculative vision of the Metaverse. In this magazine article, we articulate the different types of media that we envision will be communicated between the cyber and physical twins in the Metaverse. In particular, we explore the advantages gained by exploiting each kind, and we point out critical challenges pertinent to 3D data processing, coding, transporting, and rendering. We further shed light on the role of future wireless networks in delivering the anticipated quality of immersion through the reliable streaming of multimedia signals between the digital twin and its physical counterpart. Specifically, we explore emergent communication paradigms, including semantic, holographic, and goal-oriented communication, which we expect to realize an energy- and spectrally-efficient Metaverse while ensuring ultra-low latency. Metaverse, Digital Twins, Wireless Communication, 3D Media, Virtual Reality, Immersive Media, Streaming. § INTRODUCTION The acceleration witnessed in the maturing of the Metaverse paradigm is fueled by the emergence of the digital twin (DT) technology, where the latter constitutes the cornerstone in realizing the Metaverse. The DT can be defined as an identical digital replica of a physical environment, decoupling the static and moving objects and mimicking close-to-real interactions among them <cit.>. On the other hand, the Metaverse is designed by enabling a massive twinning process, i.e., a massive-scale digital representation of several geographical areas and networks interconnected in harmony, ultimately paving the way to the global Metaverse. The Metaverse can be characterized by three main properties: immersion, interaction, and persistence. These properties are made possible through advancements in immersive audio and visual media, the tactile internet, and wireless communication using 5G networks, which offer ultra-low latency and high throughput communication. Additionally, hardware improvements at various levels of the communication chain, including cloud infrastructure, edge devices, and sensors, contribute to enhanced computing capabilities. The immersion and interaction properties of the Metaverse enable users to engage with virtual environments and experience a seamless blend of real and virtual information. This continuous interaction and development within the extended reality (XR) environment promote innovation and new applications across consumer, enterprise, and industrial sectors. The potential economic impact of the Metaverse is significant, with the total addressable market projected to reach between $8 trillion and $13 trillion by 2030. A recent market study conducted by Deloitte in 2023 suggests that Europe alone could benefit from a Metaverse economy of $259 billion to $489 billion by 2035.
Furthermore, Gartner, a leading technological research and consulting firm, predicts that 25% of people will spend at least one hour per day on the Metaverse platform for personal and professional purposes. In the consumer domain, the Metaverse encompasses a wide range of applications such as goods, mobility and transport, entertainment, financial services, and healthcare. Enterprises can leverage the Metaverse to provide virtual training courses, conduct virtual meetings, and facilitate remote collaboration. Moreover, the industrial sector can benefit from the Metaverse in areas including manufacturing, transport, travel, energy, utilities, aerospace, defense, financial services, healthcare, and education. In conclusion, the Metaverse's properties of immersion, interaction, and persistence are poised to revolutionize existing applications and introduce novel ones across various sectors. The economic potential is substantial, and its impact on both consumer and enterprise domains is expected to be transformative. By embracing the possibilities of the Metaverse, individuals and industries can unlock new opportunities for growth and innovation in the digital era. The Metaverse relies on the seamless integration of real and virtual realms, where various forms of multimedia data are exchanged. The increasing availability of virtual and augmented reality devices, along with advancements in computing resources and high-resolution equipment, has paved the way for immersive communication as a means of accessing the Metaverse. Immersive communication involves the exchange of multimedia data and natural haptic signals with remote devices. The effectiveness of immersive communication depends on the ability of wireless nodes to interact with remote environments and accurately perceive and quantify this interaction using all senses. To achieve the desired qoe, it is crucial that participating nodes in remote environments have high-resolution haptic and multimedia perception. In order to attain ultimate immersion, future wireless networks must adhere to several kpi, including low communication latency (sub-millisecond) and high throughput. These kpi ensure a high-quality experience for multimedia data, considering factors such as spatial resolution, color depth, dynamic range, frame rate, and glass-to-glass latency. Consequently, it is necessary to reevaluate existing wireless techniques and develop novel communication protocols capable of handling various types of multimedia data to deliver the required level of immersion for a high-fidelity Metaverse. This paper aims to provide an overview of different types of multimedia data that contribute to the realization of a fully-immersive Metaverse. The requirements of each type of multimedia are elucidated, highlighting how they can enhance the overall immersion. Furthermore, the latest advancements in compression and transport protocols for efficient transmission and storage of 3D visual modalities are presented. The article also delves into the wireless technologies that play a pivotal role in enabling reliable, fast, and energy-efficient multimedia communication. Figure <ref> gives a comprehensive summary of the main topics covered in this article. While numerous papers have discussed the Metaverse from various perspectives, to the best of our knowledge, this article is the first to explore the immersive media within the context of wireless networks. Table <ref> serves as a comprehensive summary of the existing related literature in the field. 
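To give a concrete sense of the throughput requirement behind the KPIs discussed above, the following back-of-envelope Python sketch estimates the raw (uncompressed) bitrate of an immersive video stream and the compression ratio needed to fit it into a given radio link budget. The resolution, frame rate, bit depth, and link capacity are illustrative assumptions, not figures taken from this article.

def raw_bitrate_bps(width, height, fps, bit_depth, samples_per_pixel=1.5):
    # 4:2:0 chroma subsampling carries on average 1.5 samples per pixel.
    return width * height * fps * bit_depth * samples_per_pixel

# Hypothetical 8K omnidirectional video at 90 fps, 10-bit, 4:2:0.
raw = raw_bitrate_bps(7680, 4320, 90, 10)
link_capacity_bps = 100e6            # assumed 100 Mbps available to the session
required_ratio = raw / link_capacity_bps
print(f"raw ~ {raw / 1e9:.1f} Gbit/s, compression ratio needed ~ {required_ratio:.0f}:1")

Even under these modest assumptions the raw stream is tens of Gbit/s, which is why efficient 3D media coding and transport, discussed in the following sections, are indispensable.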
In this paper, we make the following key contributions: * Present visual 3D media modalities enabling immersive quality of experience, including 360^∘, point cloud, light field, and volumetric video. * Review encoding standards and transport protocols enabling efficient transmission and storage of 3D visual content. * Investigate enabling wireless technologies for reliable, real-time, and energy-efficient multimedia communication. * Assess the coding efficiency and latency of conventional and AI-based video encoding of volumetric video and dynamic point clouds. * Discuss open challenges and future research directions for the development and the massive deployment of the Metaverse. The rest of this paper is organized as follows. Section <ref> overview the different types of media technologies, with emphasis on their basic principles, coding schemes, and standardization efforts. Section <ref> puts a forward-looking vision on the enabling wireless technologies and paradigms anticipated to be employed for successfully transmitting immersive media data. Further, Section <ref> identifies several limitations and challenges encountered in immersive media communications towards the dt and the Metaverse. Finally, Section <ref> concludes the paper. § IMMERSIVE MEDIA STREAMING The Metaverse may benefit from a realistic representation of 3D natural scenes in high quality, especially for close-to-real visualization of objects and humans. In this section, we overview visual media modalities developed to enable high-quality streaming of 3D scenes from acquisition to display, emphasizing on standardization efforts to ensure the interoperability of devices for immersive services. Table <ref> summarizes the acquisition, used compression standards and formats, display, and enabled dof by these visual 3D modalities. §.§ 3D visual signal Various modalities, such as 360^∘ imagery, lf data, volumetric visual signals, and dh, have the capability to represent both natural and synthetic 3D scenes. This section provides a detailed description of these modalities, emphasizing their advantages and disadvantages when employed to depict static or dynamic 3D scenes. However, due to its current limited maturity in the industrial sphere, the discussion regarding the dh modality will be further explored in the subsequent section that addresses open challenges. Omnidirectional visual signal. An omnidirectional visual signal is presented in a spherical space with angular coordinates: the azimuth angle ϕ∈ [ π, - π ], and the elevation or polar angle θ∈ [ 0, π ], assuming a unit sphere (radius r=1) for acquisition and rendering. The sphere's origin represents the viewing reference that captures the light coming from all directions. The omnidirectional image allows the user to visualize the scene in 3dof by interacting with the scene through head rotations: roll, yaw, and pitch. In practice, an omnidirectional visual signal is acquired by a multi-view wide-angle capture. The wide-angle capture relies, for instance, on fish-eye lenses. One fish-eye camera enables only a partial sphere capture, while multiple fish-eye camera acquisitions are combined to cover the whole sphere. This operation is performed by stitching images from different cameras into the sphere. The omnidirectional visual signal in spherical representation is then mapped into a 2D texture signal in the pre-processing stage before being encoded by conventional 2D video coding standards. 
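As a concrete illustration of the sphere-to-plane mapping performed in the pre-processing stage, the short Python sketch below implements an equirectangular-style projection that maps a viewing direction (azimuth φ, polar angle θ) on the unit sphere to pixel coordinates of a 2D texture and back. The texture dimensions and the exact angle conventions are assumptions for illustration; production tools may adopt different conventions.

import math

def sphere_to_erp(phi, theta, width, height):
    """Map azimuth phi in [-pi, pi] and polar angle theta in [0, pi] to ERP pixel (u, v)."""
    u = (phi + math.pi) / (2.0 * math.pi) * (width - 1)
    v = theta / math.pi * (height - 1)
    return u, v

def erp_to_sphere(u, v, width, height):
    """Inverse mapping from ERP pixel coordinates back to sphere angles."""
    phi = u / (width - 1) * 2.0 * math.pi - math.pi
    theta = v / (height - 1) * math.pi
    return phi, theta

# Round-trip check on an assumed 4096x2048 ERP texture.
u, v = sphere_to_erp(0.5, 1.0, 4096, 2048)
phi, theta = erp_to_sphere(u, v, 4096, 2048)
assert abs(phi - 0.5) < 1e-9 and abs(theta - 1.0) < 1e-9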
erp is the most commonly used bijective mapping technique, particularly adapted for production. Other mapping techniques, such as cmp and tsp, achieve a more efficient coding estimated respectively to 25% and 80% <cit.> superior to erp, and thus are more suitable for distribution[360^∘ projection methods: <https://map-projections.net/>]. The widespread deployment of odv in consumer products can be attributed to several factors. Firstly, the availability of 360^∘ cameras, hmd displays, and hardware 2D codecs has contributed to its adoption. Additionally, application standards such as the 3gpp and dvb have embraced odv, further enhancing its popularity. However, delivering high-resolution (8K) and high-frame-rate (60 to 120 frames per second) odv content requires significant throughput to ensure a satisfactory user qoe. Achieving these requirements is crucial for providing users with immersive and visually captivating content. One of the main limitations of odv is the absence of the motion parallax feature. Motion parallax refers to the relative position of objects changing based on the viewer's perspective in relation to those objects. This limitation can result in discomfort and motion sickness for users. To address this limitation, lf and volumetric visual presentations have emerged as alternative modalities. These modalities offer a visual experience comparable to the real world, incorporating up to 6dof capabilities that enable users to freely navigate through the scene, as illustrated in Figure <ref>. In the subsequent sections, we will explore these modalities in more detail and discuss their advantages in overcoming the limitations. Light field. The lf camera, also referred to as a plenoptic camera, captures both the intensity and direction of rays within a scene. These rays are described by a seven-dimensional (7D) plenoptic function, including the 3D coordinates (x, y, z) of the camera position, two angles (θ, ϕ) for the viewing direction, wavelength λ, and time t. To reduce this 7D function, the time dimension is typically sampled according to the capture device's frame rate, and the wavelength dimension is mapped to the three red-green-blue (RGB) color components, resulting in a 5D function. In practice, each ray's intersection is determined by its position on two parallel planes, denoted as (a, b) and (u, v) for spatial and angular coordinates, respectively. The resulting 4D lf function L(a, b, u, v) comprises a collection of perspective images of the (a, b) plane captured from different positions on the (u, v) plane. There are two main approaches to lf acquisition: camera arrays and plenoptic cameras. Plenoptic cameras, or narrow-baseline lf acquisition systems, involve adding microlens arrays (MLA) to conventional 2D cameras. The spatial resolution of the resulting micro-image depends on the number of microlenses used, while the angular resolution is determined by the number of pixels behind each microlens. Thus, the camera sensor's full resolution is shared between spatial and angular resolutions. The micro-image can be de-multiplexed to form subaperture images (SAIs), which group pixels with the same relative position in the microlens image. On the other hand, camera array-based lf acquisition systems, or wide-baseline lf, utilize camera arrays arranged in various geometries (such as a plane or a sphere at regular intervals). Each camera represents an angular sample, and the images provide spatial samples. 
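The demultiplexing of a plenoptic (lenslet) capture into sub-aperture images described above can be sketched in a few lines of Python. The sketch assumes an idealized rectangular grid of square microlenses, each covering p × p pixels, which is a simplification of real plenoptic sensor geometries.

import numpy as np

def lenslet_to_sais(lenslet, p):
    """Demultiplex an idealized lenslet image into sub-aperture images (SAIs).

    lenslet: array of shape (A*p, B*p, 3), each microlens covering p x p pixels.
    Returns an array of shape (p, p, A, B, 3): SAI (u, v) groups the pixels that
    share the same relative position under every microlens.
    """
    H, W, C = lenslet.shape
    A, B = H // p, W // p
    x = lenslet[:A * p, :B * p].reshape(A, p, B, p, C)
    return x.transpose(1, 3, 0, 2, 4)   # reorder to (u, v, a, b, C)

# Toy lenslet image: 8x8 microlenses, 5x5 pixels each, RGB.
toy = np.random.rand(8 * 5, 8 * 5, 3)
sais = lenslet_to_sais(toy, p=5)
print(sais.shape)  # (5, 5, 8, 8, 3) -> 25 SAIs of 8x8 pixels each

Each SAI corresponds to one angular sample of the 4D light field L(a, b, u, v), which is the representation subsequently used for depth estimation and refocusing.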
The specific geometry employed distinguishes lf from general multi-view capture <cit.>. With known spatial and temporal positions between different cameras in lf acquisition, the underlying positions and directions of rays in space and time can be derived. The captured light field image enables various features, including viewpoint changes and high-quality depth map estimation. These features can be used to construct accurate point clouds and enable image rendering with adjustable depth of field and focus plane during post-acquisition processing. Volumetric video. Volumetric video encompasses two media modalities: point clouds and polygonal mesh. The acquisition of volumetric video typically involves a considerable number of cameras (ranging from 10 to more than sixty) placed around the scene[Volumetric capture companies: <http://volumetric-video.com/volumetric-capture-companies/>]. In addition to RGB cameras, active depth sensors are utilized to capture the geometry of the scene. Various modules process the acquired data to construct the final 3D representation of the scene, which can then be rendered at the receiver. The initial stage of pre-processing involves transforming the input data from the different camera streams into point clouds. Point clouds represent the 3D scene using unstructured points in 3D space along with their associated attributes, such as texture, reflectance, transparency, and surface normals. Color adaptation is performed to ensure consistent color distribution across the different camera streams. Next, objects within the scene are separated from the background to focus the processing on the relevant objects. Depth estimation is then carried out using a stereo camera pair. The resulting depth maps, including both predicted depth maps and depth maps obtained from RGB-D sensors, are transformed into 3D space and combined to generate a cohesive 3D point cloud. Additional processing steps can be applied to the point cloud, such as outlier removal through cleaning techniques. The point cloud representation of the 3D scene is subsequently encoded and transmitted to end users. Alternatively, in a different scenario, the point cloud representation is first converted into a consistent mesh representation before being encoded and streamed. The use of a mesh representation offers greater compatibility with existing players and hardware decoder implementations, enabling real-time rendering on various devices. §.§ Visual 3D visual signal coding One common limitation of all 3D visual media modalities is the substantial amount of data required for their digital representation. As a result, compression techniques are crucial for efficient storage and transmission, particularly over limited bandwidth wireless channels. This section provides a comprehensive review of the current coding standards available for encoding 3D visual content. Conventional 2D video standards. In practice, 2D image and video coding standards are commonly employed to encode 3D content. The first step involves projecting the 3D content onto one or several 2D planes during the pre-processing stage. These 2D planes are then encoded using conventional video standards. By utilizing well-established 2D video standards such as avc/H.264, hevc/H.265, vvc/H.266, and, AV1, and VP9 video formats, efficient and real-time coding can be achieved with existing hardware encoders. Moreover, compliant hardware decoders are widely supported by embedded devices, televisions, and web browsers. 
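The projection of 3D content onto 2D planes mentioned above can be illustrated with a deliberately simplified Python sketch that orthographically projects a colored point cloud along the z-axis into a depth map and a texture map. Real encoders such as V-PCC use patch-based projections onto several planes; the resolution, axis choice, and nearest-point rule here are illustrative assumptions only.

import numpy as np

def project_ortho_z(points, colors, resolution=256):
    """Project a point cloud orthographically along +z into depth and texture maps.
    points: (N, 3) float array; colors: (N, 3) array with values in [0, 1]."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    scale = (resolution - 1) / np.maximum(maxs[:2] - mins[:2], 1e-9)
    px = ((points[:, 0] - mins[0]) * scale[0]).astype(int)
    py = ((points[:, 1] - mins[1]) * scale[1]).astype(int)

    depth = np.full((resolution, resolution), np.inf)
    texture = np.zeros((resolution, resolution, 3))
    for x, y, z, c in zip(px, py, points[:, 2], colors):
        if z < depth[y, x]:           # keep the nearest point per pixel
            depth[y, x] = z
            texture[y, x] = c
    return depth, texture

pts = np.random.rand(10000, 3)
cols = np.random.rand(10000, 3)
depth_map, texture_map = project_ortho_z(pts, cols)

The resulting depth and texture maps are ordinary 2D images and can therefore be fed to the conventional video codecs discussed in this subsection.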
Notably, for odv content, a 2D standard is employed after projecting the sphere onto a 2D plane. Furthermore, tailored coding tools specific to odv have been integrated into hevc/H.265 and vvc/H.266, enhancing coding efficiency and enabling advanced streaming features such as viewport-dependent streaming. Several alternative codecs, including Draco and Corto, have been specifically designed to encode 3D content in point cloud and mesh representations. These codecs offer efficient coding tools and low-complexity decoder implementations, which are widely supported by web browsers. However, the temporal redundancy present in dynamic 3D content remains untapped, as frames are typically encoded independently. Instead, hevc/H.265 extensions including mv-hevc and 3D-hevc <cit.> were proposed to encode 3D video content in mv and mv plus dept representations, respectively. Despite leveraging temporal redundancy, these extensions have not gained widespread industry adoption primarily due to their limited coding efficiency and high decoding complexity, which scales linearly with the number of views. As a result, they fail to meet the scalability requirements of volumetric video with numerous views. To overcome these shortcomings, the v3c <cit.> encompasses a group of standards (ISO/IEC 23090-xx) to encode, store and transport volumetric visual content efficiently. v3c. Several compression standards have been developed under the v3c <cit.>, including miv (ISO/IEC 23090-12) <cit.>, v-pcc (ISO/IEC 23090-5) and g-pcc (ISO/IEC 23090-9) <cit.>. These three standards are briefly described in the following. * The ISO/IEC miv standard, released in 2021, was developed to support efficient coding of 3D representation of natural or synthetic-generated 3D scene captured by multiple cameras, enabling up to 6dof viewing experience. The miv standard specifies four profiles according to the input representation to encode. The baseline profile encodes input formats including both texture and depth, while the extended profile considers, in addition to depth and texture, transparency and occupancy information. Furthermore, the extended restricted sub-profile takes a multi-plane image (MIP) with texture and transparency as input. Finally, the geometry absent profile considers inputs only with texture information. In this latter profile, the receiver can predict the depth information from the decoded texture views. The miv encoder forms a set of depth and attribute atlases as compact representations of depth and texture input views with minimal pixel redundancy. In addition, the encoder generates metadata that describes these atlases. Then, the miv encoder encodes the depth and attribute atlases with a conventional video standard and the metadata according to the miv standard specification. A decoder compliant with the miv standard will be able to parse the bitstream, decode the texture and depth atlases with a conventional video decoder and then reconstruct the 3D visual signal based on the decoded metadata. * The ISO/IEC v-pcc standard, finalized in 2020, enables efficient coding of a dense static or dynamic point cloud. The v-pcc standard follows a projection-based approach that first projects the point cloud representation into texture, depth, and occupancy 2D maps, which are then compressed with a conventional video standard. The projection operation is not part of the standard, and the encoder implementation may use a custom projection solution that will impact the coding efficiency. 
A decoder compliant with the v-pcc standard parses the bitstream and then decodes the three 2D maps, used then to recover a reconstructed version of the input point cloud[Real-time V-PCC decoder on mobiles phones: <https://github.com/nokiatech/vpcc>]. * The ISO/IEC g-pcc standard, released in 2021, specifies tools to encode the geometry, which is particularly convenient for sparse point clouds. The g-pcc encoder performs sequential encoding of the geometry, then the related attributes, enabling to leverage off the decoded geometry for encoding the attributes. The real value of 3D coordinates (i.e., geometry) is first quantized into integer representations on M bits per coordinate and then voxelized. The voxelized geometry is analyzed by whether octree or trisoup scheme in a second stage. In the case of sparse point clouds, approximately only 1% of the voxels are occupied, which makes the octree very convenient for a compact representation of the geometry. Finally, an arithmetic encoder performs lossless compression of the resulting geometry structure, exploiting the statistical correlation between neighbor points. Regarding the attributes, after an optional conversion from RGB to YCbCr color space, the attribute is processed by one of the three available transform tools, i.e., raht, predicting transform, or lifting transform. Finally, the resulting residuals are quantized and arithmetically encoded to form the g-pcc bitstream. Learning-based coding. The end-to-end trainable model for image compression (a.k.a, learned codecs) can be classified into implicit and explicit coding. The explicit coding relies on variational auto-encoder transforms that project the input data into a more compact latent representation, which is then encoded by an arithmetic encoder following a distribution model. In addition, hyper-priors (i.e., mean and standard deviation) of the latent representation are also encoded with a hyperprior auto-encoder to capture their spatial dependencies by the arithmetic encoder effectively. Learned codecs initially proposed for 2D images are then extended to 2D videos and 3D visual modalities, including light field and point cloud, showing outstanding coding efficiency superior to conventional codecs. More recently, implicit neural coding has emerged as a promising solution where a mlp learns the mapping from 2D coordinates to RGB colors. First, the input coordinates are mapped into high dimensional space, typically using a sequence of sine and cosine functions (frequency encoding) of dimension L ∈ℝ. Then, the mlp learn the 3D density along the 5D light field of a given 3D scene (nerf). Subsequently, the network is trained to minimize the distortion, which is usually a tradeoff between mse and ssim. Finally, the compression is performed by processing the mlp' weights with three operations: pruning, quantization, and entropy coding as illustrated in Figure <ref> to generate the bitstream. Table <ref> gives the key performance of emerging codecs for 3D visual contents. §.§ 3D video streaming For streaming 3D content in live and offline use cases, different transport protocols can be used according to the application requirements regarding glass-to-glass latency, visual quality, and supported features. This section will focus on two widely used streaming protocols, namely omaf <cit.> and webrtc <cit.> along with the ISO/IEC 23090-10 standard <cit.> that specifically describes how to store and deliver v3c compressed volumetric video content. 
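The frequency (positional) encoding used by the implicit neural coding approach described earlier in this section can be written compactly in Python. The choice of L and the assumption that input coordinates are normalized to [-1, 1] are illustrative; implementations vary in their exact conventions.

import numpy as np

def frequency_encoding(x, L=10):
    """Map input coordinates x to [sin(2^0 pi x), cos(2^0 pi x), ...,
    sin(2^(L-1) pi x), cos(2^(L-1) pi x)], as used by implicit neural
    representations such as NeRF before the MLP."""
    x = np.atleast_2d(x)                       # (N, D)
    freqs = 2.0 ** np.arange(L) * np.pi        # (L,)
    angles = x[..., None] * freqs              # (N, D, L)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(x.shape[0], -1)         # (N, D * 2L)

coords = np.array([[0.1, -0.4, 0.7]])          # one 3D sample point
print(frequency_encoding(coords, L=10).shape)  # (1, 60)

The MLP is then trained on these high-dimensional embeddings, and compression is obtained by pruning, quantizing, and entropy coding its weights, as outlined above.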
Figure <ref> illustrates the timeline of ITU-T and IEC/ISO standards specifically designed for coding and carriage of 3D visual content. omaf. The ISO/IEC 23090-2 omaf is a mpeg system standard developed to ensure the interoperability of devices and services targeting storage and streaming of omnidirectional media, including 360^∘ images and video, spatial audio, and associated text. The first version of the standard, finalized in October 2017, provides the basic tools for streaming 360^∘ images and video, enabling 3dof viewing experience. Additional tools have been integrated into the second version of the standard, released in October 2020, for more advanced features such as enhancing the viewport-dependent streaming, enabling overlays, and multiple viewpoints streaming as the first step towards 6dof viewing experience. The omaf specifications fall within three main modules: content authoring, delivery, and player. These specifications are extensions to the isobmff and dash, ensuring backward compatibility with conventional 2D media formats. WebRTC. WebRTC is an open-source framework designed for real-time and low-latency video transmission. At the WebRTC transmitter, the video collector module encodes the video and encapsulates the encoded video frames in RTP packets, which are then transmitted through the srtp. The receiver collects information on the received RTP packets and sends back information to the video collector in the transport-wide feedback message of the rtcp. Based on these control messages, the bandwidth controller module of the video collector computes network metrics such as inter-packet delay variation, queuing delay, and packet loss. These metrics are then exploited to calculate the target bit rate used by the rate control module of the video encoder that adapts the encoding parameters (quantization parameter, resolution, etc.) according to the target bit rate. However, vanilla WebRTC does not specify tools for transmitting immersive video, limiting its usage to 2D video. Nevertheless, WebRTC was widely adopted for real-time and low latency odv transmission by considering the 360^∘ video representation as a traditional 2D video. In addition, viewport-dependent can also be supported by mixing high-resolution tiles and low-resolution 360^∘ video for efficient bandwidth usage while ensuring high quality in the field of view area along with ultra-low motion-to-photon latency[Intel Advanced 360^∘ Video: <https://www.intel.com/content/dam/www/central-libraries/us/en/documents/advanced-360video-implementation-summary-final.pdf>]. ISO/IEC 23090-10. The storage and carriage of v3c data are specified by the mpeg system ISO/IEC 23090-10 standard. Like omaf, ISO/IEC 23090-10 leverages existing system standards designed for 2D video. More specifically, the ISO/IEC 23090-10 standard defines how v3c data is stored on isobmff containers and specifies extensions to the dash for delivery over the network. The standard defines three ways for storing v3c data on isobmff (ISO/IEC 14496-12) containers, including single-track storage, multi-track storage, and non-timed storage. This latter enables the storage of static v3c objects, which can also be used as thumbnails of the volumetric video track. The single-track storage enables carriage of v3c data on an isobmff container with limited functionalities. In contrast, multi-track storage encapsulates v3c components on different tracks enabling advanced features such as preventing bitstream demultiplexing prior decoding. 
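The closed-loop bandwidth control described for WebRTC above can be illustrated with a highly simplified delay- and loss-based rate update in Python. This is a toy controller in the spirit of that feedback loop, not the actual WebRTC/GCC algorithm; the thresholds and step sizes are illustrative assumptions.

def update_target_bitrate(current_bps, queuing_delay_ms, loss_ratio,
                          delay_threshold_ms=20.0, min_bps=300e3, max_bps=20e6):
    """Return a new target bitrate given feedback metrics from the receiver."""
    if loss_ratio > 0.10:
        target = current_bps * (1.0 - 0.5 * loss_ratio)   # heavy loss: back off
    elif queuing_delay_ms > delay_threshold_ms:
        target = current_bps * 0.85                       # delay building up: decrease
    else:
        target = current_bps * 1.05                       # network underused: probe up
    return max(min_bps, min(max_bps, target))

# Example: 4 Mbps stream, 35 ms queuing delay, 2% loss -> the target is reduced.
print(update_target_bitrate(4e6, 35.0, 0.02))

The resulting target bitrate would then drive the encoder rate control (quantization parameter, resolution, etc.), as described above.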
In addition to isobmff boxes, the ISO/IEC 23090-10 standard introduces new boxes specifically for v3c data, such as the v3c decoder configuration box and the unit header box. The latter stores the four-byte v3c unit header used to properly identify v3c components and map them to the active vps signaled in the v3c decoder configuration box. For distribution, the ISO/IEC 23090-10 specification supports the live streaming of v3c content based on mpeg-dash. Furthermore, the standard specifies how v3c segments are signaled in the mpd in both single- and multi-track encapsulations. In the former configuration, the v3c content is described in the mpd by a single adaptation set with one or multiple representations encoded by the same codec. On the other hand, the multi-track encapsulation offers more flexibility since each v3c component is represented by an adaptation set. Therefore, this representation allows the client to perform adaptive streaming by requesting only a subset of components. In addition, the adaptation sets can be encoded with different codecs or bitrates, enabling adaptive bitrate streaming. Given the different types of multimedia data and transport protocols that are essential for the development of an immersive Metaverse, future wireless networks must be able to provide resilient multimedia transmission between the cyber and physical realms. Therefore, in the following section, we articulate multiple wireless technologies that we envision will be key pillars in the Metaverse paradigm. § 6G: EMERGING TECHNOLOGIES FOR IMMERSIVE COMMUNICATION The ultimate goal of realizing an immersive, interoperable, and holistic Metaverse can be accomplished through enabling the massive twinning paradigm, in which a large number of distributed dt are linked in order to create an inclusive, comprehensive digital realm that comprises all object-related details and functionalities of its physical counterpart. The multiple twins can be distributed in a geographical manner or in a function-oriented manner, i.e., twins are categorized according to the functionalities taking place in the respective twin. Within this context, each dt is envisioned to model, monitor, and control one or multiple network segments or functions, and ideally, a massive twinning paradigm will pave the way to a global dt that covers the whole geographical area of interest and that comprises all network nodes and their interactions with each other and with the environment <cit.>. Accordingly, the inter-twin communication concept emerges as a means to bridge multiple twins; it refers to the communications happening at servers (cloud or edge servers) between two or more twins. Inter-twin communication aims to allow sharing of models, knowledge, and feedback among the twins and, subsequently, to contribute to further reducing the overhead on available computing resources, as will be discussed later in this section. Another essential communication paradigm, intra-twin communication, is required to ensure a real-time operational dt through reliable links between the physical and cyber twins. Although both communication paradigms share some challenging requirements, each has unique demands attributed to the distinct goals associated with each paradigm. While intra-twin communication necessitates powerful computing capabilities and high capacity, the performance of inter-twin communication is characterized by the links' reliability, rate, and latency. 
In Table <ref>, we highlight the key requirements of each communication paradigm. It is worth highlighting that distributed dt can be placed at cloud servers, edge servers, or a combination of both. In this section, we will shed light on the main communication paradigms that are identified as the key to enabling immersive multimedia communication in the dt and the Metaverse. Given that inter-twin communication is aimed at model parameter exchange and knowledge transfer, we will focus on enabling paradigms of intra-twin communication, which is aimed at immersive media transfer between the physical and cyber twins. §.§ Beyond Shannon Communication Future wireless generations are envisioned to enable more intelligent services, particularly concerning machine/ai-agent communications, while coping with the available network resources, including energy, spectrum, and computing resources. We strongly believe such a vision cannot be realized solely by relying on higher frequency bands or conventional energy-efficient mechanisms. Within this context, upcoming wireless generations are anticipated to allow machines to communicate meaningfully while satisfying particular rate and energy constraints, by extracting the useful semantics to be communicated to the receiver instead of transmitting the whole message (which, in most scenarios, lacks the semantic aspect that allows the receiver to understand the purpose/meaning of the message). It is further envisaged that machines will exploit extracted semantics to perform particular predefined goals pertinent to parameter estimation, optimization, classification, etc., thereby allowing self-optimizing and self-configuring networks. Motivated by this, in this subsection, we shed light on the essential role of goal-oriented, semantic, and goal-oriented semantic communication paradigms (see Figure <ref>) in enabling immersive multimedia communications. §.§.§ Goal-Oriented Communication Multimedia data communication can be effectively achieved by establishing particular goals to be fulfilled by the interacting nodes between the physical environment and the dt, in which these goals impose specific requirements pertinent to latency, resolution, field-of-view, etc. This scheme is regarded as goal-oriented communication. By defining a set of goals to be achieved when communicating particular media, the system specifications and constraints are linked to this set, and the interacting entities will aim to deliver the kpi relevant to the identified goals. In goal-oriented communication, communication overhead is reduced by dedicating network resources to transmitting only the essential information that directly contributes to the fulfillment of the goals' objectives. Meanwhile, information that has no implications for the defined set of goals is considered less significant and, therefore, will be neglected in this process <cit.>. The key concept of goal-oriented communication for multimedia data is that transmitting nodes at the physical twin put more focus on reliably transferring data related to particular features within the user's field-of-view, thereby readily affecting the quality of the compression and contributing to the spectral efficiency enhancement of multimedia communication between the physical and cyber twins. 
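As a toy illustration of this field-of-view-driven prioritization, the short sketch below splits a transmitter's bit budget between tiles that intersect the predicted viewport and the remaining tiles. The tiling, the 80/20 split, and the helper names are hypothetical choices made for illustration; they are not prescribed by any of the standards or schemes discussed in this article.

```python
# Hypothetical sketch of goal-oriented bit allocation across video tiles.
# Tiles overlapping the predicted field of view receive most of the budget,
# while the remaining tiles share what is left at a coarse quality.

def allocate_bits(tiles, viewport, total_budget, fov_share=0.8):
    """tiles: list of (tile_id, region); regions and viewport are (x0, y0, x1, y1)."""
    def overlaps(a, b):
        return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

    in_fov = [tid for tid, region in tiles if overlaps(region, viewport)]
    out_fov = [tid for tid, region in tiles if not overlaps(region, viewport)]
    allocation = {}
    for tid in in_fov:
        allocation[tid] = fov_share * total_budget / max(len(in_fov), 1)
    for tid in out_fov:
        allocation[tid] = (1.0 - fov_share) * total_budget / max(len(out_fov), 1)
    return allocation

# Example: four tiles of a panorama, viewport covering its left half, 10 Mbps budget.
tiles = [("t0", (0, 0, 1, 1)), ("t1", (1, 0, 2, 1)), ("t2", (0, 1, 1, 2)), ("t3", (1, 1, 2, 2))]
print(allocate_bits(tiles, viewport=(0, 0, 1, 2), total_budget=10e6))
```

In a goal-oriented setting, the share given to the viewport (and the tile qualities behind it) would itself be driven by the goal's kpi rather than fixed in advance.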
In <cit.>, it was demonstrated that goal-oriented communication could be effectively applied to deploy data compression for high-dimensional data, including goal-oriented signal processing, data quantization, and clustering. However, one of the common limitations facing goal-oriented communication within the dt paradigm is the high possibility of experiencing heterogeneous tasks with diverse requirements, rendering network re-configuration a time-consuming and challenging process. Within this regard, ai-enabled solutions should be designed for more generalized and adaptive goals, with the aim to fulfill the requirements of multiple tasks within the network, given unified network setup and configuration. §.§.§ Semantic Communications As demonstrated earlier, wireless communications in the dt paradigm are anticipated to support distributed multimedia exchange, particularly continuous multi-modal media such as graphical animations, high-quality audio and video, haptics, and interactive images. Such real-time multimedia communication demands a large bandwidth in order to deliver the promised high qoi. Semantic communication alleviates the need for highly reliable and low-latency wireless channels, and therefore, real-time multimedia streaming with optimum synchronization between the cyber and physical twins can be achieved by relying on the available computing resources <cit.>. Although immersive multimedia data are multi-modal in nature, there should be a sort of correlation among different modalities. Accordingly, the role of semantic communication is not limited to extracting the unique meanings from each data stream but also understanding cross-modality features in different multimedia signals and then communicating the cross-modal semantics with the dt servers <cit.>. Such an approach can further reduce the bandwidth needed to stream multimedia signals with the cloud while ensuring an improved qoe for cross-modal applications at the dt network. Nevertheless, it is essential to quantify to what extent cross-modal semantic communication can deliver the needed immersive experience while satisfying particular rate requirements. Further, it was demonstrated that cross-modal semantics could help overcome the polysemy and ambiguity problems experienced in single-modal scenarios, primarily due to a lack of understanding of the context from which the semantics are extracted. Accordingly, efficient semantic communication systems constitute a promising approach toward extracting cross-modal features that interpret the needed information within the overall context, thereby introducing enhanced throughput and reliability performance. §.§.§ The Interplay of Goal- & Semantic-Aware Communication Driven by their inherent nature of communicating an abstracted version of wireless messages, the intertwined advantages of goal-oriented and semantic communications can be ultimately combined with the aim to achieve reliable, yet spectrally efficient multimedia communication. From one perspective, network goals can be set in order to maximize the effectiveness of the transmitted semantics, and thereby, enhancing the reliability performance of semantic communication systems. 
On the other hand, semantic communication plays an important role in further enhancing the throughput performance of goal-oriented communication, in which a semantic encoder/decoder pair is employed to extract the useful information that is necessary for achieving the intended network goals and, hence, to enhance the overall spectral efficiency of the network. §.§ Communication Protocols Supporting multimedia-on-demand traffic necessitates revisiting current communication protocols in order to design new protocols that can strike a balance between qoi, energy, latency, and rate. It is recalled that classical mac protocols are generic and uninterpretable, and their control signaling messages are unadaptable and, hence, cannot be optimized for a particular task in order to deliver a task-specific qos <cit.>. With the diverse requirements of different tasks within the dt paradigm, efficient emerging communication protocols should be designed to handle the large volume of multimedia data to be shared over the wireless medium in a latency- and reliability-sensitive manner. In this respect, conventional approaches rely on granting early slots for high-priority data to guarantee a particular latency threshold, while reliability is ensured through redundancy and re-transmission schemes. However, within immersive multimedia communication for dt, such approaches are not well-suited for provisioning the needed qoi in terms of delay and reliability. In addition, the definition of high priority and reliability is different when tackling multi-modal data, where each stream, and each packet within the stream, has different qos requirements. For instance, a packet with I-frames from video data has higher latency and reliability requirements than packets with P- or B-frames. Therefore, a cross-design between multiple layers defines the new concept of multimedia communication, where data encoding, compression, and communication are performed through learned goal-oriented protocols, which are optimized to achieve identified qos requirements. Within this context, ai-based protocols have emerged to realize optimum signaling and medium access policies. marl has demonstrated promising reliability performance when ue employ it to learn how to communicate without a priori knowledge of the mac protocol <cit.>. Specifically, such an approach enables interpretable protocols, in which receivers can interpret various information related to control messages, timing advance, power headroom report, buffer status report, etc. It is envisioned that ai-based communication protocols will introduce a tangible performance enhancement for multimedia communication through realizing overhead reduction and medium access coordination optimization, whether ai agents are utilized for allowing learning-to-communicate through standardized protocols or as a paradigm shift towards more innovative protocols that are designed by ai. §.§ Holographic-Type Communication The future vision of htc, which is capable of providing the required level of immersion in the Metaverse, is to realize an efficient integration of 3D capturing, hologram encoding, compression and generation, and 3D hologram transportation and display at sub-millisecond latency and ultra-high frame rate. In particular, such technology is anticipated to allow network users to enjoy a full-sense interactive experience with the dt of interest and the Metaverse. 
Ensuring high-definition holograms necessitates that the data acquired at the capturing stage be inclusive of all senses, and these senses should be transported in a synchronized manner. This is particularly true for vision-based data. Accordingly, a tracker is employed to continuously ensure synchronized streams among movements, gestures, and other visual data <cit.>. htc relies mainly on depth sensors (e.g., LiDAR 3D cameras) and ar services to guarantee reliable communication of 3D holograms with parallax and haptics featuring 6dof (incorporating translational and rotational movements), with the aim of delivering the needed sensory experience <cit.>. However, in order to enable high-fidelity holograms in the Metaverse, ultra-high bandwidth (Gbps) and close-to-zero delay between the physical object and its hologram constitute key limiting factors. The streaming overhead of holograms can be reduced by developing sophisticated prioritization mechanisms in which multiple streams are prioritized according to their impact on the user's perception. Specifically, several metrics, including frame rate, angular resolution, depth, dynamic range, etc., can be optimized in an adaptive fashion, taking into account the field-of-view of the user. In this way, resources are allocated to streams and frames that contribute to enhancing the hologram's qoi <cit.>. § OPEN CHALLENGES & FUTURE RESEARCH DIRECTIONS The Metaverse will need to be seamlessly integrated with the real world in order to be fully realized, which could pose technical and logistical challenges. In this section, we shed light on challenges experienced in wireless multimedia communication and lay down the foundation for potential future research directions for a successful Metaverse. §.§ Rendering 3D visual content Rendering volumetric video content on mobile devices with constrained computing and energy resources is still challenging, especially when multiple volumetric objects are present in the scene, in addition to the interaction of the user with the scene. Remote or interactive rendering reduces the processing load on the user device and enables efficient bandwidth usage by sending only a 2D view of the volumetric object according to the user's position. However, this solution increases both the motion-to-photon latency and the glass-to-glass latency, as it requires additional transcoding in the cloud. To minimize the latency, the rendering can be performed in a geographic area close to the client by leveraging multi-access edge computing (MEC) along with the low latency offered by 5G/6G networks and the WebRTC communication protocol. In addition, accurate user pose prediction algorithms can be used to reduce the latency further and increase the perceived quality, preventing users' discomfort for a better viewing qoe. §.§ AI for the Metaverse The breakthrough advancements in ai pave the way for exploiting various ai models for improved qoe of Metaverse users. In particular, nerf <cit.> represents an innovative ai concept for the Metaverse. nerf has emerged as an ai-based solution that implicitly encodes the radiance field of the 3D scene in an mlp network. This mlp takes as input continuous 5D coordinates and predicts the volume density and view-dependent emitted radiance at the input spatial location. The nerf model trained on a sparse set of input views achieves state-of-the-art performance for synthesizing a novel view through a continuous representation of the scene. 
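A minimal sketch of this input mapping, and of a single query of such an mlp, is given below. The number of frequency bands, the layer sizes, and the random weights standing in for a trained network are illustrative assumptions rather than the exact configuration of the original nerf work, which uses a deeper network trained end-to-end through differentiable volume rendering.

```python
import numpy as np

def frequency_encoding(x, num_bands=10):
    """Map each coordinate x_i to [sin(2^k pi x_i), cos(2^k pi x_i)] for k = 0..num_bands-1."""
    bands = (2.0 ** np.arange(num_bands)) * np.pi
    scaled = np.outer(x, bands)                              # (len(x), num_bands)
    return np.concatenate([np.sin(scaled), np.cos(scaled)], axis=1).ravel()

rng = np.random.default_rng(0)
pos = np.array([0.1, 0.4, -0.2])      # 3D sample position along a camera ray
view_dir = np.array([0.3, 1.2])       # 2D viewing direction, giving a 5D input in total
features = np.concatenate([frequency_encoding(pos), frequency_encoding(view_dir, num_bands=4)])

# Random weights stand in for a trained network (illustration only).
W1 = rng.normal(size=(128, features.size)) / np.sqrt(features.size)
W2 = rng.normal(size=(4, 128)) / np.sqrt(128.0)
hidden = np.maximum(W1 @ features, 0.0)                      # ReLU hidden layer
out = W2 @ hidden
sigma = np.log1p(np.exp(out[0]))                             # softplus keeps the density non-negative
rgb = 1.0 / (1.0 + np.exp(-out[1:]))                         # sigmoid keeps the color in [0, 1]
print(sigma, rgb)                                            # density and radiance at this sample
```

Repeating such queries along each camera ray and compositing the results is what produces the rendered novel view.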
The success of nerf has revolutionized 3D representation and rendering based on a lightweight mlp network. Several papers have since been published addressing nerf shortcomings: 1) reducing the memory access and computational complexity of the training, 2) real-time rendering on mobile devices, and 3) extension to 3D dynamic scenes. Therefore, the recent developments of nerf will shape the Metaverse with compact representation and real-time rendering of static and dynamic 3D scenes in high fidelity. §.§ Large language model (LLM)-empowered Metaverse The recent advancements in generative ai, and llm in particular, show a promising potential to reshape the different types of experiences within the Metaverse. The main benefit of leveraging llm in the Metaverse is their capability to perform on-demand content generation, in a versatile and adaptive manner, fulfilling the needs of several scenarios within the Metaverse. This is particularly pronounced with the emergence of visual generative models, e.g., DALLE-2 (text-to-image), Phenaki (text-to-video), and DreamFusion (text-to-3D image), which constitute a clear path towards generating high-fidelity visual content, while ensuring efficient bandwidth usage, lower latency, and enhanced functionalities. From a different perspective, ChatGPT3/4 is a powerful conversational system released recently by OpenAI[<https://openai.com/blog/chatgpt/>], based on the gpt3 model. The outstanding conversational capabilities of ChatGPT3/4 make it a promising candidate to provide the Metaverse with many compelling features in high fidelity. Integrating such a model with intelligent agents will enable more natural and immersive interactions with the virtual world through understanding and responding to human language. Furthermore, the ChatGPT3/4 model will allow a personalized experience through fine-tuning the model to a specific application or language. Therefore, generative ai models will offer the Metaverse a more natural way of communication, collaboration, and creativity, thereby enhancing the user's qoe. §.§ Cloud and mobile-edge computing Realizing massive twinning at cloud or edge servers imposes high computing overhead on cloud and edge resources, particularly for processing multimedia data. In particular, computing entities in the Metaverse paradigm are expected to process a vast amount of high-dimensional data in order to allow the user to experience full immersion in the virtual world. In addition to the data volume, novel challenges pertinent to 3D data rendering, which relies on converting raw multimedia data into displayable objects, have emerged with the Metaverse paradigm, demanding further computing resources from the dt servers. Accordingly, it is necessary to develop efficient multimedia computing services, not only for Metaverse providers but also for end users who wish to access the Metaverse through their resource-constrained devices. This includes effective task-offloading and 3D rendering approaches. Several npu available on embedded devices can run ai models in real time and with low energy consumption. For example, the npu integrated on the latest Qualcomm Snapdragon mobile devices enables up to 32 tera operations per second in int-8, int-16, or float-32 precision. In addition, edge computing devices such as the NVIDIA Jetson Nano board enable up to 472 Gflops in float-16 precision with 128 cores. 
§.§ Standardization Several standardization bodies have explored 3D visual content at different levels of the transmission chain, from coding and streaming at the ISO/IEC mpeg to transmission over 5G networks and broadcast at 3gpp and dvb, respectively. These standards provide tools for efficiently streaming 3D visual content while ensuring the interoperability of devices and fast deployment of immersive video services. Moreover, the ISO/IEC mpeg 3D graphics group launched a call for proposals in 2021 for a new 3D dynamic mesh coding standard called v-dmc <cit.>, a new member of the v3c standards family. The v-dmc standard aims for efficient lossless and lossy compression of 3D dynamic meshes targeting various applications such as real-time 3D video streaming and rendering, low-latency cloud gaming, etc. Furthermore, the 3gpp has worked on several projects related to vr and xr. For instance, the 3gpp technical report 26.918 attempts to identify interoperability issues that may require standardization activities in the 3gpp context with a focus on odv and associated audio. Further, the 3gpp technical specification 26.118 defines media and representation profiles for the interoperability of vr streaming services. Moreover, the 3gpp technical report 26.928 collects information on xr in the context of 5G radio and network services to identify potential needs for 3gpp standardization. In addition, the vrif guidelines[VR industry forum guidelines v2.3: <https://www.vr-if.org/wp-content/uploads/vrif2020.180.00-Guidelines-2.3_clean..pdf>] provide practices and recommendations for high-quality and interoperable 360^∘ video services. More recently, the vrif has released a guidelines document[Volumetric video industry forum guidelines v1.0: <https://www.vr-if.org/wp-content/uploads/Volumetric-Video-Guidelines-1.0.pdf>] covering all aspects of the volumetric video delivery ecosystem targeting high-quality vr, ar, and xr services. Finally, the ITU-T launched the fg-mg in December 2022 under the telecommunication standardization advisory group. This group will investigate the technical requirements of the Metaverse to identify the fundamental enabling technologies, from multimedia and network optimization to digital currencies, iot, dt, and environmental sustainability[fg-mg: <https://www.itu.int/en/ITU-T/focusgroups/mv/Pages/default.aspx>]. For instance, the usd open software promoted by NVIDIA provides a rich, common language for defining, packaging, assembling, and editing 3D data. Therefore, usd is a good candidate to become a standard facilitating the use of multiple digital content creation applications for their integration in Metaverse platforms. §.§ Sustainable Metaverse The Metaverse will drive sustainability by reducing several applications' energy consumption and co2 emissions. Specifically, the Metaverse will shift consumer and industrial applications by moving from harmful physical mobility towards more sustainable interactions in digital spaces. Nevertheless, the Metaverse also raises serious concerns about its software, hardware, and infrastructures, which need to be eco-friendly and designed to minimize their co2 footprint. For instance, the gpt3 model has 175 billion parameters, and its training energy footprint is estimated at 936 MWh, which corresponds to the energy consumption of 30,632 American households or 97,396 European households. The ai models are becoming larger and larger to handle more complex tasks and reach the level of intelligence and adaptation of the human brain. 
Nevertheless, our brains are energy efficient, consuming less than 40 Watts for 100 petaflops of computing power, which is faster than any existing supercomputer. Therefore, future research can draw inspiration from the human brain to perform efficient and sustainable computing of large ai models within the Metaverse platform. Alternative short-term strategies involve developing more efficient generative models with fewer parameters. By focusing on model optimization and leveraging high-quality data, it is possible to achieve comparable performance with significantly lower energy consumption. Additionally, fine-tuning models for specific tasks can provide a targeted and streamlined approach, further enhancing efficiency and minimizing environmental impact. §.§ Theoretical limits The prevailing vision of future wireless communication is directly attached to breaking the Shannon-limit barrier, in which semantic and goal-oriented communications are anticipated to be the basis of multimedia data communication and, hence, the key driver behind the Metaverse, as discussed earlier in this article. Accordingly, considerably higher data rates, under tight energy constraints, are envisioned to be realized in future 6G networks. However, it is yet unclear to what extent we can go beyond the Shannon limit. Specifically, there are no theoretical underpinnings that can quantify the performance of such technologies and validate the hypothetical claims on the rate achievable by semantic- and goal-oriented communications. This is further pronounced in the dt and the Metaverse paradigms, which lack a solid mathematical representation, and therefore, the fundamental limits of multimedia communications in massive twinning through semantics or goals constitute an open research topic. §.§ Tactile internet The Metaverse is envisioned to provide a fully immersive experience to Metaverse users through the real-time exchange of the five senses, including haptics. The tactile internet, as an enabler for haptic communication, enables real-time gathering, perception, and control of virtual objects according to haptic feedback. Haptic feedback in the Metaverse is expected to offer kinesthetic and tactile perception to efficiently complement other senses towards extremely high qoi. However, several challenges should be tackled prior to the real adoption of the tactile internet in the Metaverse, including reliable haptic interfaces, joint communication and control for teleoperation, and user privacy. §.§ User Experience The success of the Metaverse is mainly related to users' satisfaction with the experience provided by Metaverse platforms. Several criteria can play an essential role in the final qoe, including audio/video quality, end-to-end or motion-to-photon latency, computing resources, network throughput/latency/jitter, and finally, the viewing comfort offered by display devices. All these parameters need to be jointly optimized to reach the high comfort and qoi promised today by the Metaverse. Metaverse platforms also challenge the quality estimation research community to build new accurate models that estimate the user qoe on such platforms. In contrast to traditional image/video quality estimation models, additional features of the Metaverse platforms, network, and user feedback must be considered to build new datasets and accurate qoi/qoe models. 
§.§ Ethical & Safe Metaverse The future vision of the Metaverse relies on exploiting a massive number of sensors that are attached to everything in the physical environment in order to enable two-way control between the ct and pt. Such collected data may contain highly sensitive and private data, including biometric data, and therefore, it is anticipated that the Metaverse will be an open world subject to several attacks, compromising users’ privacy and safety. This includes the risk experienced through nodes/users interactions inside the Metaverse, which is caused by unethical behaviors conducted by Metaverse users. This issue is exacerbated by the employment of immersive media, which renders a wide range of details within the communicated data. Accordingly, activities pertinent to ensuring effective fulfillment of the privacy policies, regulations, ethical conducts, and users' accessibility of the Metaverse are essential to be initiated at the early stage of the development of the Metaverse paradigm. §.§ Digital Holography Holography technology can represent the 3D information of the scene through wavefronts of light. The digital holography acquisition system captures the wavefronts' phase and amplitude optically with a digital sensor array or numerically using hologram synthesis algorithms. The resulting patterns are processed and stored as dh. Then, the slm technology is used to represent the dh in optical setups for displaying the 3D scene in the air without requiring a physical display. Nevertheless, the resolution supported by the available slm is insufficient to display most of dh in high-quality. This latter issue is addressed by viewing the dh on legacy 2D displays. To reach this end, numerical models are used for reversing light propagation and then post-processing the complex-valued floating-point wavefield to obtain regular images. Multiple propagation models and associated pre/post-processing modules have been developed in the literature. In particular, the ISO/JPEG Pleno (ISO/IEC 21794-1...6) efforts aim to standardize coding tools for 3D static visual modalities, including light field (ISO/IEC 21794-1), point clouds (ISO/IEC 21794-6), and dh (ISO/IEC 21794-5). JPEG Pleno defined in the ctc a new dh images dataset and the nrsh. This latter is a Matlab software that enables extracting specific views from the Pleno dh dataset, providing accurate reconstructions. Nevertheless, in addition to the high computational complexity of holography reconstruction algorithms, their main limitation is the low-quality reconstructed views due to severe aliasing and speckle noise distortions, preventing their widespread deployment. § CONCLUSION In this article, we have approached the development of the Metaverse from the angle of immersive multimedia and the enabling wireless technologies that support the smooth and reliable communication of immersive media between the physical and digital worlds. Specifically, we thoroughly discussed the different media modalities, highlighting their pros and cons, standardization activities, and coding techniques. We further explored the potential of 6G networks in realizing efficient transmission of the surveyed multimedia data, where we have outlined the key wireless technologies that are envisioned to be the Metaverse underpinnings. We finally sketch the roadmap towards developing an efficient Metaverse by outlining the main limitations of wireless multimedia communication and opening new horizons of future research directions. 
§ BIOGRAPHIES Wassim Hamidouche (Wassim.Hamidouche@tii.ae) is a principal researcher at the Technology Innovation Institute in Abu Dhabi, UAE. He is the co-author of 180 peer-reviewed papers in the most prestigious venues in multimedia and image processing. Lina Bariah (lina.bariah@ieee.org) is a Senior Researcher at the Technology Innovation Institute in Abu Dhabi. She is an IEEE Senior Member. She serves as an Associate Editor for the IEEE Communication Letters, and the IEEE Open Journal of the Communications Society. Mérouane Debbah (Merouane.Debbah@ku.ac.ae) is Professor at Khalifa University in Abu Dhabi. He is an IEEE Fellow, a WWRF Fellow, a Eurasip Fellow, an AAIA Fellow, an Institut Louis Bachelier Fellow, and a Membre émérite SEE. He has received more than 20 best paper awards.
http://arxiv.org/abs/2307.02371v1
20230705153845
Planning and Control for a Dynamic Morphing-Wing UAV Using a Vortex Particle Model
[ "Gino Perrotta", "Luca Scheuer", "Yocheved Kopel", "Max Basescu", "Adam Polevoy", "Kevin Wolfe", "Joseph Moore" ]
cs.RO
[ "cs.RO" ]
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. Achieving precise, highly-dynamic maneuvers with Unmanned Aerial Vehicles (UAVs) is a major challenge due to the complexity of the associated aerodynamics. In particular, unsteady effects—as might be experienced in post-stall regimes or during sudden vehicle morphing—can have an adverse impact on the performance of modern flight control systems. In this paper, we present a vortex particle model and associated model-based controller capable of reasoning about the unsteady aerodynamics during aggressive maneuvers. We evaluate our approach in hardware on a morphing-wing UAV executing post-stall perching maneuvers. Our results show that the use of the unsteady aerodynamics model improves performance during both fixed-wing and dynamic-wing perching, while the use of wing-morphing planned with quasi-steady aerodynamics results in reduced performance. While the focus of this paper is a pre-computed control policy, we believe that, with sufficient computational resources, our approach could enable online planning in the future. § INTRODUCTION The unsteady aerodynamics of post-stall flight pose a challenge for model-based control of aerobatic Unmanned Aerial Vehicles (UAVs). UAV dynamics models typically represent the aerodynamic state only as a wind velocity vector; acceleration of the UAV is computed from the vehicle's velocity and orientation relative to the air around it. This approach is ideal for vehicles with high inertia flying at low incidence angles, each of which reduces the unsteady aerodynamic effects on vehicle dynamics. With some effort, though, this approach can be extended to agile UAVs at arbitrary orientations. This produces computationally efficient models, but limits expressiveness of the resulting dynamics. Transient aerodynamic effects are entirely absent from these models, and so they lack fidelity in aggressive, post-stall maneuvering. In this work, we replace the quasi-steady aerodynamics in a UAV dynamics model with a vortex particle model, capable of representing unsteady aerodynamic effects (see visualization in Fig. <ref>). Potential flow methods like the vortex particle model are dramatically more computationally efficient than grid-based computational fluid dynamics (at the expense of some physical fidelity), opening the possibility for simulating the fluid state in the control loop for real-time planning and state estimation. In this paper, we explore the performance improvements afforded by including these unsteady models in controller synthesis and execution. We evaluate our methods on aggressive, post-stall maneuvers as well as under dynamic wing-morphing. We show that including a computational model of unsteady aerodynamics to generate the control policy enables improved performance for a UAV executing a post-stall maneuver with and without wing morphing. 
While our control approach is not yet fast enough for real-time re-planning, we believe that future computational advancements could allow receding-horizon methods that leverage simulation-in-the-loop. In summary, the main contributions of this paper are * A novel discrete vortex simulator capable of running faster than real-time for planarized maneuvers. * A control algorithm capable of leveraging the discrete vortex model for trajectory optimization and feedback. * Demonstration of improved perching performance in hardware on a dynamic morphing-wing vehicle. Our paper is organized as follows: Section <ref> introduces the morphing-wing UAV, Section <ref> reviews the aerodynamic model, Section <ref> describes its use in model-based control, and Section <ref> presents the experimental results of UAV performance in gliding perch maneuvers. § RELATED WORK Early work in post-stall maneuvers for autonomous UAVs aimed to hover a fixed-wing UAV by transitioning to and from a prop hang <cit.>. This was accomplished through linear feedback designed using a nonlinear UAV dynamics model which was improved by wind tunnel measurements of the test vehicle. Further development of hybrid theoretical and empirical dynamics models was done in <cit.>. Their dynamics model was purpose-built for aerobatic fixed-wing UAVs performing maneuvers at high incidence angles and was used in related works to control agile UAVs in various aggressive maneuvers <cit.>. Even these state-of-the-art models are unable to predict UAV dynamics with fidelity sufficient for long horizon planning. In some cases, agile maneuvers with fixed-wing UAVs are accomplished by improving controller robustness rather than (or in addition to) model fidelity. For instance, in <cit.>, the authors use the Linear Quadratic Regulator (LQR)-Trees algorithm to generate a library of feedback policies to improve robustness to initial conditions and model error. In <cit.>, model error is compensated for by an inner loop of Time Varying LQR (TVLQR) feedback and an outer loop of Nonlinear Model Predictive Control (NMPC) re-planning. Recent work at the limits of autonomous quadcopter maneuverability has emphasized a theoretical hurdle which is even more apparent for agile fixed-wing UAVs: quasi-steady aerodynamics models are insufficient for high fidelity flight dynamics of extremely agile UAVs <cit.>. Their improved dynamics model was produced by augmenting the quasi-steady physics-based model with a learned residual conditioned on the recent history of vehicle states. There is yet little overlap between the aerodynamic models used by roboticists in control of agile UAVs and the modeling tools developed by aerodynamicists with agile UAVs as motivation. Of the latter, recent work in potential flow models motivates the current work in applying computational aerodynamics to UAV control. A review of various potential flow models designed for unsteady aerodynamics around UAVs can be found in <cit.>. One such model was developed in <cit.>, augmented with state estimation via pressure sensors in <cit.> and <cit.>, and tested in predicting wing–gust response in <cit.>. § MORPHING-WING UAV Aerobatic fixed-wing UAVs are capable of post-stall maneuvering, leveraging large control surfaces and significant thrust-to-weight ratio to maintain control authority. However, fixed placement of aerodynamic surfaces relative to center of mass limits the agility of even these platforms. In the current work, we explore the increased flight performance of a morphing-wing UAV. 
While many forms of wing morphing are possible (and many are potentially beneficial), we focus on dynamic wing sweep. Actuating wing positions significantly affects roll and pitch by moving the center of pressure, but is generally less mechanically complex than wing morphing by changing size or shape of lifting surfaces. Fig. <ref> shows our morphing-wing UAV. It is a small, carbon-reinforced foam aircraft designed around a conventional planform. It has a 70 wingspan and 200 mass (including the electronics not visible in Fig. <ref>). In addition to conventional control surfaces—rudder, elevator, and two ailerons—the wing sweep angles are controlled by servos near the root of each wing. The propeller was not included for any of the experiments in this work. The two shoulder degrees of freedom of this morphing-wing UAV increase the potential agility of the platform at the expense of dynamic stability, motivating our need for greater aerodynamic model fidelity. In their un-actuated position, the wings' leading edges are orthogonal to the fuselage (as they are in Fig. <ref>). Relative to that position, each wing can swing independently up to 30 toward the UAV's nose, and up to 90 toward the tail. Traversing the full range of motion takes 0.2. The morphing wings and the ailerons were not simultaneously controlled in our experiments; flights with active wing morphing held the ailerons fixed in their neutral position. § UNSTEADY AERODYNAMICS Post-stall aerodynamics pose a problem for quasi-steady models, since the local aerodynamics vary over time even for fixed vehicle attitude. These variations are not randomness in the dynamics; they reflect a dependence of the forces and moments on conditions not represented by (or deducible from) the vehicle attitude. A flight dynamics model without aerodynamic state represented can only be accurate on average, and may still deviate significantly at any particular moment. For low inertia UAVs, neglecting the time-varying dynamics can significantly limit overall model fidelity. The goal of this work is to improve flight dynamics fidelity in post-stall conditions by explicitly modeling the local aerodynamics. Using any of the many techniques available for Computational Fluid Dynamics (CFD), the aerodynamics around the flying UAV can be computed to arbitrary precision by numerically solving the discretized Navier-Stokes equations. However, these methods are not remotely intended for real-time model-predictive control, and typically take hours or days of computation for seconds of simulation. Instead, computationally-efficient modeling of unsteady aerodynamics relies on data-driven solutions or on solving simplified governing equations. Potential Flow models simplify the Navier-Stokes equations such that flows can be expressed as sums of solutions to LaPlace's equation: ∇^2ϕ=0 where ∇ϕ=𝐯; ϕ is the scalar field of flow potential, and 𝐯 is the vector field of flow velocity. Numerical potential flow models—such as the vortex particle method used in this work—take advantage of this to represent flows using a Lagrangian discretization, a collection of fluid elements, rather than discretizing the domain using a Cartesian grid. This representation requires far less computation than grid-based methods, and so provides a promising avenue for real-time control with unsteady aerodynamics. §.§ Vortex Particle model In this work, the aerodynamic state near the UAV is represented using vortex particles, Lagrangian elements of flow carrying vorticity. 
Each vortex particle induces flow velocity in concentric circles around itself and is moved by the local flow velocity. The sum of velocity induced by all vortices is the flow solution modeled by the vortex particle method. In our model, the influence of the wing is also represented by a collection of vortex particles. The wing's vortices are “bound” to the surface; they do not move with the flow as the wake vortex particles do. Fig. <ref> shows the two-dimensional aerodynamic state around a thin, flat wing at 45° incidence to the oncoming wind. The boundary condition imposed by the wing causes shear against the surface, which results in local vorticity in the air. At each simulated time step, the vorticity released from the wing into the air at each edge is captured as one new vortex particle. The existing particles convect with the local velocity, and in this case they roll up into larger coherent structures. Each vortex has a scalar strength, Γ, and a two-dimensional position, 𝐱. The velocity induced by a vortex particle on a target location is 𝐯_target = Γ_vortex/(2π r^2) [[ 0 1; -1 0 ]] Δ𝐱 where r = ||Δ𝐱|| and Δ𝐱 = 𝐱_target - 𝐱_vortex. In our model, the wake vortices (but not the bound vortices) use a modified form of this influence kernel, which represents the finite core of a real vortex in viscous fluid. This is accomplished by scaling (<ref>) such that 𝐯_target = Γ_vortex/(2π r^2) ( r/r_core)^2 /√( 1 + ( r/r_core)^4 ) [[ 0 1; -1 0 ]] Δ𝐱 where r_core is the specified core radius of the model <cit.>. The wing's surface is defined by bound vortices (say n_b of them) and n_b + 1 control points at which the surface boundary condition is enforced. In this model, n_b - 1 control points are centered between each pair of adjacent bound vortices and the remaining 2 are located at the leading and trailing edges of the wing. The boundary condition, “no through flow,” requires that there be zero relative velocity of fluid and surface normal to the surface. Δ𝐯·𝐮̂_normal = 0 where Δ𝐯 = 𝐯_surface - 𝐯_air and 𝐮̂_normal is the unit vector normal to the surface. The scalar strength value of each bound vortex is solved to satisfy this condition at each control point. The surface edges also shed vorticity into the fluid, creating one new wake vortex particle each at every time step. Together, the strengths of the bound vortices and new wake vortices make n_b + 2 free variables. The final criterion which closes the system of equations is Kelvin's theorem, which states that total circulation is conserved. d/dtΓ_total≡d/dt∑Γ_i = 0 where Γ_total is the system's total circulation and Γ_i is the strength of a particular vortex particle. Every vortex influences the velocity at each control point proportional to its strength, Γ, so the boundary conditions, together with Kelvin's theorem, can be solved as a system of n_b + 2 linear equations. This solution fixes the strengths of the new wake vortices and specifies the current strength of each bound vortex. The UAV dynamics are determined from the local aerodynamics by computing the pressure difference across the surface. Following <cit.>, the pressure difference at the ith bound vortex from the leading edge is Δ p_i = ρ[ Δ𝐯_i ·𝐮̂_i, tangentΓ_i/Δ l_i + d/dt∑_j=0^iΓ_j ] where ρ is the fluid density, 𝐮̂_i, tangent is the surface-tangent unit vector at i, and Γ_i and Δ l_i are the total vortex strength and length along the surface associated with the ith bound vortex. 
The term d/dt∑_j=0^iΓ_j represents the path integral of circulation rate-of-change from ambient flow to the ith bound vortex along the wake and surface, including the newest leading edge wake vortex at i=0. Integrating this pressure distribution across the surface produces the forces and moments on the UAV, which in turn are used to update the vehicle's velocity and pose for the next time step. The position of each wake vortex is simultaneously updated based on the local fluid velocity. Over many time steps of this process the number of wake vortices grows to an unmanageable quantity. Computational efficiency is maintained by identifying pairs of vortices which can be merged together with minimal error introduced. Specifically, we replace pairs of vortices with one vortex when doing so changes the induced velocity on the wing less than a threshold amount. The new vortex has strength equal to the sum of the replaced vortices' strengths, and is located at their strength-weighted-average position. The model discussed so far represents two dimensional unsteady aerodynamics around thin surfaces. To model the whole UAV, we used multiple parallel slices of the two dimensional representation and added quasi-steady contributions for non-lifting surfaces. This “two-and-a-half” dimensional model is limited to small out of plane velocity; generalization to large side-slip would require further model development. § PLANNING AND CONTROL We envision this vortex particle model used as part of a receding-horizon NMPC algorithm comprised of an outer loop of nominal trajectory optimization and an inner loop of locally-linear feedback similar to <cit.>. Due to the computation time requirements of the vortex particle model, our current work only explores a single execution of that outer loop, which permits pre-planning of the maneuver. This reduces the problem to off-line trajectory optimization. However, with additional model development, these results could generalize to a full receding-horizon NMPC implementation. §.§ Model-predictive control The vortex particle model augments the UAV's state space with a variable-length, arbitrarily-large characterization of local aerodynamics. This is not easily compatible with direct optimization techniques for maneuver planning which include the state variables as decision parameters. Instead, the perch maneuver was planned using a sampling-based approach which only uses inputs rather than states as decision parameters. We adapt the Model Predictive Path Integral (MPPI) approach of <cit.>, which relaxed restrictions on the dynamics model compared to previous MPPI approaches. We specify initial and target conditions for the UAV, X_0 and X_target, and initial conditions for the vortex particle model and control sequence (both of which begin with all zeros in this work's launch–glide–perch experiment presented in section <ref>). At each iteration, many perturbations to the nominal control sequence are generated by sampling from zero-mean normal distributions. The flight dynamics are simulated using the vortex particle model to observe the state trajectory resulting from each input sequence. Each trajectory is assigned a cost, which here is simply a scaled quadratic cost computed from the closest approach to the target. cost = min_over traj.( ||[ Δ x; Δ z; Δα ]||^2 + 0.2 ||[ Δẋ; Δż; Δα̇ ]||^2 ). During path planning, the out-of-plane state components, y, ϕ, and ψ, are constrained to zero by model symmetry. 
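A compact sketch of this sampling step, together with the exponential weighted-average update described next, is given below. The rollout function standing in for the vortex particle simulator, the temperature parameter, and the array dimensions are hypothetical placeholders rather than the exact implementation used in this work.

```python
import numpy as np

def mppi_update(nominal_controls, rollout_cost, num_samples=256, sigma=0.1, temperature=1.0):
    """One sampling-based planning iteration: perturb the nominal control sequence,
    score each perturbed sequence by the cost of its simulated trajectory, and
    return the exponentially weighted average of the candidates.
    rollout_cost(controls) is assumed to simulate the flight dynamics (here it
    would wrap the vortex particle model) and return the scalar trajectory cost."""
    horizon, n_controls = nominal_controls.shape
    perturbations = np.random.normal(0.0, sigma, size=(num_samples, horizon, n_controls))
    candidates = nominal_controls[None] + perturbations
    costs = np.array([rollout_cost(u) for u in candidates])
    weights = np.exp(-(costs - costs.min()) / temperature)   # lower cost -> larger weight
    weights /= weights.sum()
    return np.tensordot(weights, candidates, axes=1)         # new nominal control sequence

# Hypothetical usage with a stand-in cost; a real rollout would run the simulator.
dummy_cost = lambda u: float(np.sum(u ** 2))
u_nominal = np.zeros((100, 6))                               # horizon steps x control channels
for _ in range(10):                                          # repeat until converged
    u_nominal = mppi_update(u_nominal, dummy_cost)
```

With the stand-in quadratic cost the update simply stays near zero; with the simulated perching cost, successive iterations pull the nominal sequence toward low-cost trajectories.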
The nominal control sequence is replaced by an exponential weighted average of the perturbed controls, where the weights are negative cost. A render of the converging nominal trajectory and simulated states for perturbed controls is shown in Fig. <ref>. The combination of vortex particle model and sampling-based planner is computationally demanding. Our model is implemented such that all particle interactions ((<ref>) and (<ref>)) and all parallel samples for MPPI are computed as a single vectorized operation, optionally executed on a graphics card. Additionally, there are many adjustments in the model which balance performance and computational expense. Physical fidelity of the vortex particle method requires adequate temporal and spatial resolution and can be assisted by multi-step numerical integration. Convergence of the control sequence depends on an adequate number of sampled perturbations per iteration and an adequate number of total iterations. Reducing any of these parameters increases the computational speed, but could compromise model fidelity or control convergence. We were unable to find a test configuration where the combined model and planner produce meaningful control sequences in real time for receding horizon planning. Instead, maneuvers for the presented experiments had to be planned ahead of flight time using known initial conditions. On the laptop used for those experiments, planning a 1.5 maneuver took at least 5 of computation. (Keep in mind that the start of the control sequence is executed almost immediately, so the planner must return in far less time than the planning horizon.) Once real-time control was ruled out, the model resolution and planner samples and iterations were increased for the experiments discussed in Section <ref>; the final maneuver was planned in 1. §.§ Feedback control To generate our local linear feedback policy for trajectory tracking, we employ TVLQR. Because we cannot directly compute a gradient through the vortex particle model, we simulate the nominal state trajectory and the local partial derivative of dynamics with respect to each UAV state and control variable through finite difference at intervals along that trajectory. This procedure requires careful managing of the vortex particle model state, but is otherwise identical to TVLQR for quasi-steady aerodynamics models. These partial derivatives contribute to a backwards integral over the trajectory which produces linear feedback gains for each point computed (see <cit.> for details on our implementation). In flight, the executed commands are the combination of the nominal values from MPPI and the linear feedback from TVLQR. In Fig. <ref>, the morphing wings are actuated asymmetrically for a nominally-planar maneuver; this is the TVLQR response to our intentionally-perturbed initial conditions. § PERCH MANEUVER EXPERIMENTS We experimentally tested the impact on autonomous flight performance of the morphing wings and the vortex particle model by comparing achieved target distance for a particular post-stall maneuver, launch–glide–perch. Perching maneuvers require maintaining control while intentionally stalling a UAV's wings, and are a common choice of test case for experiments in nonlinear control <cit.>. In our experiment, the UAV is launched using a guide rail and elastic cable for repeatable initial conditions. Once the UAV clears the launcher, it has 3.5 to arrest its initial velocity of 7 and arrive at the perch target. 
In addition to specifying the perch location, the controller aims to arrive at the perch with precise orientation and velocity: 45 pitch, no roll or yaw, 0.5 velocity forward and downward, and no velocity sideways. There is no real perch mechanism at the target location; after the maneuver, the UAV falls into the arena net. This UAV was flown in a motion capture arena using autonomous, off-board control. Fig. <ref> shows the UAV performing this maneuver in our motion capture facility. §.§ Performance results The same launch–glide–perch maneuver was repeated for four configurations of the UAV: (1) fixed-wing sweep with conventional aerodynamic model, (2) fixed-wing sweep with vortex particle model, (3) morphing wing with conventional aerodynamic model, and (4) the primary case of morphing wing with vortex particle model. This permits investigation of the individual contributions to flight performance of the hardware and software modifications. The perch maneuver was repeated 10 times for each of these four configurations. Fig. <ref> shows the trajectory of the UAV during each of the tested perch maneuvers. Each line is colored by the configuration used for that sample. Qualitatively, all configurations seem able to slow down quickly and fall near the perch target with varying precision. For a quantitative comparison of perch maneuver performance, we compute the minimum target cost achieved on each test. The quadratic cost function used in maneuver planning, (<ref>), is also used for evaluation. The resulting minimum costs are shown in Table <ref>. The absolute value of the cost is not meaningful, so each value is normalized by the mean of the baseline case. Using the unsteady aerodynamic model without wing morphing did improve over the baseline performance slightly, and using both wing morphing and the unsteady model improved performance further. However, the maneuver performance was far worse when attempting to plan for the morphing wing using the quasi-steady aerodynamic model. § CONCLUSION In this paper, model-based control of agile UAVs was extended to a morphing-wing UAV and an explicit representation of local aerodynamics. Experimental comparison of performance in perching maneuvers suggests that the wing morphing degrees of freedom add to the UAV's maneuverability and that improvement of autonomous performance was enabled by the vortex particle model, an unsteady, first-principles-based computational aerodynamics model. The wing morphing capability did not improve performance in paths planned using a conventional, quasi-steady aerodynamics model. Our results support the notions that greater agility is achievable from fixed-wing or morphing-wing UAVs and that autonomous realization of this capability benefits from a dynamics model conditioned on unsteady aerodynamics. However, our vortex particle model is not capable of real-time planning given current computational resources. Future work will explore modifications to our model formulation to improve computational speed with the goal of enabling real-time planning. This would face a remaining challenge in state estimation for the vortex particle model as discussed in <cit.>, so another future research direction is the exploration of pressure sensing on the UAV wings to increase vortex particle state observability. Finally, we plan to investigate data-driven approaches for modeling unsteady effects. 
As explored in <cit.>, machine learning may provide an alternate means of producing high fidelity UAV dynamics models conditioned implicitly on the local aerodynamics.
http://arxiv.org/abs/2307.01639v1
20230704104702
Heuristic Algorithms for the Approximation of Mutual Coherence
[ "Gregor Betz", "Vera Chekan", "Tamara Mchedlidze" ]
cs.AI
[ "cs.AI", "cs.LG", "cs.LO", "cs.SI" ]
Heuristic Algorithms for the Approximation of Mutual Coherence. Gregor Betz (Karlsruhe Institute of Technology, Karlsruhe, Germany; gregor.betz@kit.edu), Vera Chekan (Karlsruhe Institute of Technology and Humboldt-Universität zu Berlin, Germany; vera.chekan@informatik.hu-berlin.de; ORCID 0000-0002-6165-1566), Tamara Mchedlidze (Karlsruhe Institute of Technology and Utrecht University, Utrecht, The Netherlands; t.mtsentlintze@uu.nl; ORCID 0000-0002-1545-5580). August 1, 2023. Mutual coherence is a measure of similarity between two opinions. Although the notion comes from philosophy, it is essential for a wide range of technologies, e.g., the Wahl-O-Mat system. In Germany, this system helps voters to find candidates that are the closest to their political preferences. The exact computation of mutual coherence is highly time-consuming due to the iteration over all subsets of an opinion. Moreover, for every subset, an instance of the SAT model counting problem has to be solved, which is known to be a hard problem in computer science. This work is the first study to accelerate this computation. We model the distribution of the so-called confirmation values as a mixture of three Gaussians and present efficient heuristics to estimate its model parameters. The mutual coherence is then approximated with the expected value of the distribution. Some of the presented algorithms are fully polynomial-time, others only require solving a small number of instances of the SAT model counting problem. The average squared error of our best algorithm lies below 0.0035, which is insignificant if the efficiency is taken into account. Furthermore, the approximation is accurate enough to be used in Wahl-O-Mat-like systems. § INTRODUCTION AND MOTIVATION A widely studied question in Bayesian epistemology is the internal coherence of an opinion, that is, how consistent the opinion is or to which degree its “parts” support each other (the interpretation depends on the used measure of internal coherence). This notion can be generalized to the mutual coherence of two opinions. The computational issues of internal and mutual coherence are the same, and in this paper, we focus on mutual coherence. Consider two pairs of opinions as an example <cit.>: * Opinion A: “You should not eat animal products.” * Opinion B: “Animals have a right to life. Killing them in order to eat them is morally wrong.” * Opinion A: “You should not eat animal products.” * Opinion B: “Everyone can decide for themselves what they eat.” Intuitively, the opinions in the first pair seem to support each other, whereas the opinions in the second pair seem to be contradictory. For this reason, a (meaningful) measure of mutual coherence would assign a higher value to the first pair of opinions. There is no clear answer to the question “When are two opinions coherent?”. Over the past twenty years, a multitude of coherence measures has been developed and studied. Most of the existing coherence measures are probabilistic, i.e., they assume that a prior probability distribution P over the set of statements is given. For example, the first well-known coherence measure was proposed by Shogenji in 1999: C_1(A,B) = P(A | B)/P(A) = P(A ∧ B)/(P(A) × P(B)) where A, B ⊆ S are the opinions <cit.>. This simple coherence measure has an obvious problem: once the opinions contain contradictory statements, the coherence value is equal to zero no matter how similar the remaining statements are. 
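For a concrete (and purely hypothetical) numerical illustration of this zero-collapse behavior, the snippet below evaluates C_1 on toy probabilities: as soon as the two opinions are jointly unsatisfiable, P(A ∧ B) = 0 and the measure vanishes, however large the overlap of the remaining statements may be.

```python
def shogenji(p_a, p_b, p_a_and_b):
    """Shogenji's measure C_1(A, B) = P(A and B) / (P(A) * P(B))."""
    return p_a_and_b / (p_a * p_b)

# Toy numbers, chosen only for illustration: two largely overlapping opinions ...
print(shogenji(p_a=0.4, p_b=0.5, p_a_and_b=0.35))   # 1.75 > 1: mutually supporting
# ... versus two opinions containing a single contradictory pair of statements:
print(shogenji(p_a=0.4, p_b=0.5, p_a_and_b=0.0))    # 0.0, regardless of the remaining agreement
```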
A wide overview of these measures can be found in <cit.>. Unfortunately, all of them have further disadvantages. It has even been proven that no “perfect” coherence measure (i.e., satisfying a set of the desired criteria) exists <cit.>. One particular disadvantage of the probabilistic coherence measures is the assumption that a probability distribution P over the set of statements is given as a part of the input. This risks to render the coherence measures utterly subjective. To avoid this problem, a measure of mutual coherence based on the #SAT problem was introduced by Betz <cit.>. The definition of this measure will be presented in the next section. Instead of assuming a probability distribution, this measure relies on the so-called argument maps. An argument map is a graph-like representation of arguments, where an argument consists of a set of premises and a conclusion. The main problem of this measure is that the exact computation is highly time-consuming. Although mutual coherence is a theoretical concept in philosophy, it finds its application in the real world. In Germany, one of the most prominent examples is “Wahl-O-Mat”. This system helps voters to find candidates that are close to their political preferences. For this, candidates and voters fill in the same survey answering whether they agree or disagree with statements in it (it is also possible to abstain). The list of answers is then a simplified representation of the individual's political position. To compute the similarity between two positions, a simple rule based on the Hamming-distance is used: for every statement, the same answer is scored with a positive value and distinct answers are scored negatively. The more points are scored, the more similar the positions are. Finally, a set of candidates whose positions are the most similar to the voter is suggested. Due to its simplicity, this metric has a significant disadvantage: it ignores possible inferential relations between single statements. With better coherence measures, a better quality of the suggestions can be achieved. Since the “Wahl-O-Mat” system is used by a multitude of voters, it has to perform highly efficiently. Especially, the mutual coherence needs to be calculated in a feasible time. The measure of mutual coherence that we study here is more sophisticated than the above-mentioned simple rule: using it in systems like “Wahl-O-Mat”, would improve their quality. But since the naive computation of this measure is very time-consuming, it can not be used in the system so far. And this motivates the goal of this project: we develop heuristics for the efficient approximation of this measure of mutual coherence. The paper proceeds as follows: In Section <ref>, we introduce required definitions and present the considered measure of mutual coherence. Next, in Section <ref>, we motivate the need for synthetic argument maps and present an algorithm to create them. Then, in Section <ref>, we present the main contribution of this project, i.e., the heuristics for the efficient approximation of mutual coherence. After that, in Section <ref>, the accuracy of the heuristics is demonstrated in experiments with synthetic data and finally, in Section <ref>, we summarize the results of the work. § PRELIMINARIES Here, we formally define the studied measure of mutual coherence and show that its naive computation is indeed highly time-consuming. First, we introduce the #SAT problem, also called the SAT model counting problem. 
The instance of the problem is a boolean formula or, equivalently, a set of clauses (a clause is a logical disjunction, e.g., a ∨ ¬b ∨ c). In the better-known SAT problem, the question is whether there exists a truth-value assignment under which the formula becomes true. The SAT model counting problem generalizes the question and asks how many truth-value assignments make the formula true. Both problems are known to be hard in computer science. To be more precise, SAT is 𝒩𝒫-complete and #SAT is #𝒫-complete. In other words, it is conjectured that these problems admit no polynomial-time algorithms. Nevertheless, there exist diverse heuristics that are in practice efficient for many instances. The state-of-the-art #SAT-solvers combine these heuristics and take into account both theoretical (e.g., unit propagation) and technical (e.g., cache-efficiency) details. In our project, for model counting, we use the ganak model counter <cit.>. This implementation won the “Model Counting Competition 2020” and it belongs to the state-of-the-art model counters. Let S be a set of sentences (corresponding to boolean literals in terms of SAT) so that the set is closed under negation, and let N := |S|/2. An argument a is a tuple (P_a, c_a) with P_a ⊆ S and c_a ∈ S, where P_a denotes the set of premises and c_a is the conclusion. Let I be a set of arguments. A pair (S, I) is called a simple structured argumentation framework (SSAF). SSAFs can be visualized with Argument Maps (AMs) that represent support (green) and attack (red) relations between statements (see Figure <ref>). Every line in an AM corresponds to an argument. For example, consider the leftmost green line in Figure <ref>. The premises of the argument are “Other animals eat meat.” and “It is okay for humans to do things if other animals do them.” The conclusion is “It is okay for humans to eat meat”. Arguments represent logical relations between statements. If a (consistent) speaker agrees with all premises of the argument, then they also agree with its conclusion. A red line denotes that agreeing with all premises requires disagreeing with the conclusion (e.g., the rightmost line in Figure <ref>). This representation is a hyper-graph, but it can easily be transformed into a (simple) graph by introducing a dummy vertex for every hyper-edge. This graph contains the whole logic of an SSAF, and instead of working with the set I of tuples (P_a, c_a), it is possible to utilize the graph structure of AMs. A position is a truth-value assignment to a set S_D ⊆ S: A: S_D →{True, False} In the following, we use the terms “opinion” and “position” interchangeably. For simplicity, we sometimes define the position by the set of statements assigned the True-value and write A ⊆ S. Position A is complete if its domain is S. We say that complete position A extends position B if positions A and B agree on the values assigned to the domain of B. Complete position A does not extend B otherwise, i.e., if the positions disagree on at least one statement in the domain of B. Complete position A is consistent if the following holds: * ∀ s ∈ S: A(s) ≠ A(¬ s) * ∀ a = (P_a, c_a) ∈ I: ( (∀ p ∈ P_a: A(p) = True) → A(c_a) = True) We denote the number of complete consistent positions extending (not extending) A with σ_A (σ_¬ A). Note that these values depend on the set of arguments I. Similarly, for two positions A and B, we denote the number of complete consistent positions extending both A and B with σ_A, B.
Here, the #SAT model counting problem appears: for example, σ_A is the number of truth-value assignments of S such that the arguments I and the statements of A are true. We write Y ⊨_I X (Y ⊨_I ¬ X) if every complete consistent position extending Y extends (does not extend) X. Thus, ⊨_I denotes logical implication in terms of SAT. The value DOJ(A| B) def= σ_A, B / σ_B is called the degree of justification of A by B. The Kemeny-Oppenheim confirmation measure is defined as follows: Conf(X, Y) def= (DOJ(Y| X) - DOJ(Y| ¬X)) / (DOJ(Y| X) + DOJ(Y| ¬X)) if Y ⊭_I X and Y ⊭_I ¬X; 1 if Y ⊨_I X; -1 if Y ⊨_I ¬X. Finally, we can define the mutual coherence between positions A and B as introduced by Betz et al. <cit.>: MutCoh(A, B) def= 1/(2·(2^k_A - 1)) ∑_∅≠ X ⊆ A Conf(X, B) + 1/(2·(2^k_B - 1)) ∑_∅≠ X ⊆ B Conf(X, A). In the formula, k_A and k_B denote the domain sizes of A and B, respectively. In the following, when writing about the mutual coherence we always refer to the measure MutCoh(·, ·). Consider the straightforward computation of the mutual coherence. It requires an exponential (in the size of the opinion) number of iterations. In every iteration, we solve a constant number of #SAT instances, which in turn requires exponential (in |S|) time in the worst case. As for the second component of the running time, using state-of-the-art model counters speeds it up. Since this part only depends on the model counter, we will not consider it further in this paper. However, the number of required iterations does not depend on the model counter. In this project, we focus on it and reduce the number of subsets taken into account and hence, the number of needed runs of the model counter (see Section <ref>). Finally, we observe that the formula for the mutual coherence consists of two symmetric summands. For this reason, we introduce the one-sided coherence: OneCoh(A, B) def= 1/(2^k_A - 1) ∑_∅≠ X ⊆ A Conf(X, B) which is the average confirmation value over the non-empty subsets of A. Then MutCoh(A, B) = 1/2 ( OneCoh(A, B) + OneCoh(B, A) ) During the project, we have found that it is meaningful to consider OneCoh(A, B) and OneCoh(B, A) separately. So in the following, we are only interested in the efficient approximation of OneCoh(·, ·). This then still yields an efficient approximation of MutCoh(·, ·). § SYNTHETIC ARGUMENT MAPS An important property of approximation heuristics is their scalability. In our case, this is the ability to stay accurate for growing instances. There are two parameters of interest: the size of the AM and the size of the opinion. In particular, we are interested in AMs of arbitrary size. During the project, we encountered the problem of missing test data. Although a huge number of AMs can be found in sources like <aifbd.org>, the absolute majority of them are small (i.e., consist of at most 25 statements). For such AMs, both problems (model counting and the computation of mutual coherence) can be solved in a feasible time and hence, there is no need for the approximation. Larger AMs exist and they are very important for the research, but there are fewer of them and they are difficult to find in the public domain. For this reason, we have developed an algorithm for the creation of synthetic AMs. Mutual coherence is not the only area where AMs appear, so the algorithm can also be used beyond our project. Here, we sketch the process; a pseudocode representation is provided in Appendix <ref>. The input of the algorithm (n, k, α, ψ, γ, d) is the set of parameters steering the size and the shape of the created AM.
The algorithm is randomized: run with the same input, it produces different AMs. First of all, n is the number of statements. The parameter α determines the number of arguments α· n: we observed that due to the tree-likeness of AMs, the number of arguments can be approximated as a linear function in the number of statements. The second parameter specifying the density of the AM is d. It is a probability function mapping an integer number k to the average fraction of arguments with exactly k premises in an AM. This number needs to be computed from an AM (or a set of AMs) which will then be mimicked by the algorithm. For example, we used: d = {(2, 0.19), (3, 0.23), (4, 0.32), (5, 0.26)} This distribution was calculated from the AM of Veggie Debate[<debatelab.philosophie.kit.edu/sm_talk-daprpc_89.php>]. The remaining parameters ψ, γ, and k control the creation of an AM. Parameter k is the number of so-called key statements which initialize the AM. They model the most important statements of the AM on top of which supporting and attacking arguments are built. The number k is typically small because debates are individuated with reference to central questions and key statements that are discussed. The debates documented in the Debater's Handbook, for example, mostly evolve around 2-3 key claims <cit.>. A level of a statement in the current AM is defined as follows. For a literal l, let Var(l) be the corresponding variable, i.e., Var(v) = Var(¬ v) = v. A key statement s has level 0: Level(s) = 0. Otherwise, for a statement s, we consider the set conclusions C(s) of all arguments in which s occurs as a premise: C(s) = { c | p_1 … p_k → c ∈ I', s ∈{Var(p_1), …, Var(p_k)}} where I' denotes the current set of arguments. Then the level of s is defined by: Level(s) = min_c ∈ C(s) Level(c) + 1 In other words, the level of a statement s is the length of the shortest path from s to some key statement minus the number of arguments on this path. Note that the level might change after an argument is added to the AM and the level is only defined for statements already added to the AM. The arguments are generated one by one according to the following scheme: * To ensure that the arguments are built on top of the central statements, we pick a conclusion from the statements that are already in the AM: initially, these are only the key statements. The probability to pick statement s is proportional to ψ ^ Level(s). After the conclusion is chosen, it is negated with a probability of 0.5 to create an attacking argument. * Next, the number of premises t is picked according to d. * After that, each of t premises is picked independently. A statement s is picked with probability proportional to γ ^ Arguments(s), where Arguments(s) denotes the number of arguments in which s occurs. Especially, the statements that have not been used in arguments yet obtain the largest probability. * Finally, we check if the AM stays satisfiable with the newly created argument, that is, there exists a complete consistent position with respect to the current AM. If so, the argument is added to the AM. Otherwise, the argument is dismissed and the process is repeated. To avoid infinite loops, a termination criterion is added: for example, restart the algorithm or return an error if there were too many unsuccessful attempts to create an argument. The pseudocode can be found in Appendix <ref>. 
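For orientation, here is a compact sketch of this generation loop. It is a simplified rendering of the steps above and not the authors' pseudocode from the appendix: statements are encoded as signed integers, premises are drawn only in positive polarity, the re-computation of levels after each insertion is omitted, and satisfiability is checked by brute force, which a SAT solver call would replace in practice.

import random
from itertools import product

def is_satisfiable(n, arguments):
    # Brute force over complete assignments; fine for a sketch, a SAT solver replaces this in practice.
    def val(world, lit):
        return world[abs(lit) - 1] if lit > 0 else not world[abs(lit) - 1]
    return any(all(not all(val(w, p) for p in prem) or val(w, c) for prem, c in arguments)
               for w in product([False, True], repeat=n))

def generate_am(n, k, alpha, psi, gamma, d, max_attempts=10000):
    # d maps a premise count to its probability, e.g. {2: 0.19, 3: 0.23, 4: 0.32, 5: 0.26}
    statements = list(range(1, n + 1))
    level = {s: 0 for s in random.sample(statements, k)}   # the key statements initialize the AM
    uses = {s: 0 for s in statements}                       # number of arguments using s as a premise
    arguments, attempts = [], 0
    while len(arguments) < int(alpha * n) and attempts < max_attempts:
        attempts += 1
        in_am = list(level)
        concl = random.choices(in_am, [psi ** level[s] for s in in_am])[0]
        if random.random() < 0.5:                           # attacking argument
            concl = -concl
        t = random.choices(list(d), list(d.values()))[0]
        prems = tuple(random.choices(statements, [gamma ** uses[s] for s in statements])[0]
                      for _ in range(t))
        if not is_satisfiable(n, arguments + [(prems, concl)]):
            continue                                        # dismiss the argument and retry
        arguments.append((prems, concl))
        for p in prems:
            uses[p] += 1
            level.setdefault(p, level[abs(concl)] + 1)
    return statements, arguments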
§ APPROXIMATION OF THE ONE-SIDED COHERENCE In this section, we present the heuristics for the approximation of the one-sided coherence: OneCoh(A, B) = 1/(2^k_A - 1) ∑_∅≠ X ⊆ A Conf(X, B). To solve the problem, we have taken a closer look at the definition and the interpretation of the confirmation measure Conf(·, ·). A confirmation value is a real number between -1 and 1. We recall the meaning of these extreme values. It holds: Conf(X, B) = -1 ⇔ B ⊨_I ¬X, or in other words: the union of B and X is logically contradictory (if we assume that the arguments I hold). A simple special case occurs when X contains a literal s such that ¬ s belongs to B. In this case, B and X are contradictory regardless of the set of arguments I. Similarly, it holds: Conf(X, B) = 1 ⇔ B ⊨_I X, that is, B logically implies X (if we again assume that the arguments I hold). Inter alia, this happens when X ⊆ B: regardless of the set of arguments I, the position B logically implies its subsets. This results in two types of subsets for which the confirmation values can be determined without running the model counter. This motivates the following lemma. Let A, B ⊂ S be positions. We set: Neg(A, B) def= { x ∈ A | ¬ x ∈ B} Com(A, B) def= A ∩ B Then: * For every X ⊆ A with X ∩ Neg(A,B) ≠ ∅: Conf(X, B) = -1. And it holds: | { X | X ⊆ A, X ∩ Neg(A,B) ≠ ∅}| = 2^|A| - 2^(|A| - |Neg(A,B)|) * For every ∅ ≠ X ⊆ Com(A, B): Conf(X, B) = 1. And it holds: | {X | ∅ ≠ X ⊆ Com(A, B)}| = 2^|Com(A, B)| - 1 We prove the claims separately. * Consider X ⊆ A with X ∩ Neg(A, B) ≠ ∅. There exists x ∈ X ∩ Neg(A, B). For an arbitrary complete consistent position P extending B, it holds P(x) = False since ¬ x ∈ B. Because of x ∈ X, the position P does not extend X. Thereby, B ⊨_I ¬X and hence, Conf(X, B) = -1. The subsets of A which are disjoint from Neg(A,B) are exactly the subsets of A ∖ Neg(A, B) and hence, there are 2^(|A| - |Neg(A, B)|) of them. * Consider ∅ ≠ X ⊆ Com(A, B). By the definition of Com(A, B), we have: X ⊆ B. So every complete consistent position extending B extends X too and hence, B ⊨_I X and Conf(X, B) = 1. There are 2^|Com(A, B)| - 1 non-empty subsets of Com(A, B). This lemma provides lower bounds on the number of subsets with confirmation values -1 and 1. To calculate these bounds, the subsets Com(A,B), Neg(A,B) ⊆ A are required. They can be computed in 𝒪(max{|A| log |A|, |B| log |B|}) (i.e., polynomial time) by first sorting the literals in the opinions A and B and then iterating over the two sorted lists. The lemma can be strengthened to the following (the proof is analogous and can be found in Appendix <ref>): Let A, B ⊂ S be positions. Let Cntr(A, B) def= { x ∈ A | B ⊨_I ¬{x}} and Impl(A, B) def= { x ∈ A | B ⊨_I {x}} Then: * For every X ⊆ A with X ∩ Cntr(A, B) ≠ ∅: Conf(X, B) = -1. And: | { X | X ⊆ A, X ∩ Cntr(A, B) ≠ ∅}| = 2^|A| - 2^(|A| - |Cntr(A,B)|) * For every ∅ ≠ X ⊆ Impl(A, B): Conf(X, B) = 1. And: | { X | ∅ ≠ X ⊆ Impl(A, B) }| = 2^|Impl(A,B)| - 1 Observe that Neg(A, B) ⊆ Cntr(A, B) and Com(A, B) ⊆ Impl(A, B). Thus, Lemma <ref> yields stronger lower bounds on the number of subsets of A with confirmation values -1 and 1. However, the computation of these bounds is not necessarily polynomial anymore. One way to determine the set Impl(A, B) is to iterate over the statements x ∈ A and check whether σ_B = σ_{B ∪ {x}} using a #SAT solver; this is exactly the case if x ∈ Impl(A,B). Another way is to apply a SAT solver to check if B and I logically imply x.
Similarly, to compute the set Cntr(A,B) we iterate over the statements x ∈ A and check whether σ_{B ∪ {x}} = 0 or, equivalently, if B and I logically imply ¬ x. Each of these approaches requires 𝒪(|A|) runs of a #SAT or a SAT solver, respectively. These theoretical lower bounds have inspired us to look at the real distribution of confirmation values. We have plotted it for different opinion pairs and different AMs and observed that in most cases, the distribution has up to three peaks and therefore can possibly be approximated by a mixture of three Gaussians (see Figure <ref>). Such a mixture is determined by the following mixture parameters: * Mean values μ = (μ_1, μ_2, μ_3) * Standard deviations σ = (σ_1, σ_2, σ_3) * Mixture parameters w = (w_1, w_2, w_3) with w_1, w_2, w_3 ≥ 0 and w_1 + w_2 + w_3 = 1 Then, the probability density function is given by 𝒩_3(w, μ, σ) def= w_1 𝒩(μ_1, σ_1^2) + w_2 𝒩(μ_2, σ_2^2) + w_3 𝒩(μ_3, σ_3^2), where 𝒩(μ_i, σ_i^2) denotes the normal distribution with the mean (i.e., expectation) μ_i and standard deviation σ_i. As a result, the expected (i.e., mean) value of the distribution is given by 𝔼(𝒩_3) = w_1 μ_1 + w_2 μ_2 + w_3 μ_3. We again recall the definition of the one-sided coherence that we want to approximate: OneCoh(A, B) = 1/(2^k_A - 1) ∑_∅≠ X ⊆ A Conf(X, B). So the one-sided coherence is the mean value of the confirmation values. This motivates the main approach we follow in our heuristics: model the distribution of confirmation values as a mixture of three Gaussians, estimate the mixture parameters w, μ, σ, and finally, compute the mean value of the distribution, which is then an approximation of the one-sided coherence. Note that σ_1, σ_2, σ_3 do not influence the mean value 𝔼(𝒩_3) and hence, it is not necessary to estimate them. §.§ The Heuristics §.§.§ The Estimation of Weights. Lemma <ref> provides lower bounds on the fraction of subsets with confirmation values -1 and 1, so we can fix the leftmost and the rightmost Gaussians as follows: * μ_1 = -1, w_1 = (2^|A| - 2^(|A| - |Neg(A, B)|)) / (2^|A| - 1) * μ_3 = 1, w_3 = (2^|Com(A, B)| - 1) / (2^|A| - 1) * w_2 = 1 - w_1 - w_3 We call it the simpler estimation of mixture weights or just simpler weights. Alternatively, the mixture weights w_1 and w_3 (and hence w_2 = 1 - w_1 - w_3) can be defined by replacing Neg(A, B) with Cntr(A, B) and Com(A, B) with Impl(A, B) (see Lemma <ref>). We call it the finer estimation of weights or just finer weights. We expect that the finer estimation results in better accuracy but longer running time. Based on the weights (finer or simpler), we have developed the following four heuristics. §.§.§ Direct Estimation. The simplest action is to set μ_2 = 0. The underlying assumption is that the subsets X of A which are neither contradictory to B nor implied by it are just logically independent of B. This corresponds to Conf(X, B) = 0. We also assume that the lower bounds from Lemma <ref> or <ref> yield a good approximation of the fraction of subsets with confirmation values -1 and 1. As soon as the weights are computed, this approach requires constant time to get the approximated value. For this reason, the direct estimation with simpler weights is a fully polynomial-time algorithm. The direct estimation with finer weights has a complexity of 𝒪(|A|) model counting operations.
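As an illustration, direct estimation with simpler weights boils down to a few set operations. In the sketch below, statements are encoded as signed integers (a negative sign denotes negation), which is an implementation choice of ours rather than of the paper.

def direct_estimate(A, B):
    A, B = set(A), set(B)
    neg = {x for x in A if -x in B}                       # Neg(A, B)
    com = A & B                                           # Com(A, B)
    n = len(A)
    w1 = (2**n - 2**(n - len(neg))) / (2**n - 1)          # fraction of subsets with Conf = -1 (lower bound)
    w3 = (2**len(com) - 1) / (2**n - 1)                   # fraction of subsets with Conf = +1 (lower bound)
    w2 = max(0.0, 1.0 - w1 - w3)
    return w1 * (-1.0) + w2 * 0.0 + w3 * 1.0              # mu_2 = 0 for the middle Gaussian

# Example: A = {1, 2, 3}, B = {-1, 2, 4} gives Neg = {1}, Com = {2}
print(direct_estimate({1, 2, 3}, {-1, 2, 4}))             # -3/7, i.e., about -0.43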
The remaining three approaches are sample-based: we sample β· |A| subsets of A, compute the corresponding confirmation values V = {v_1, …, v_β· |A|} using a #SAT solver, and use these values to estimate the mean value of the distribution 𝒩_3 in different ways. Since the computation of every confirmation value requires a run of a model counter, the number of samples is linear in the size of the opinion to hold the number of these runs small. §.§.§ Average. The simplest approach is just to compute the mean value of samples and return it, so the approximation is: mean(V) def=1/|V|∑_v ∈ V v. Note that this approach does not use the estimated weights and relies only on the set of samples V. For this reason, we expect that this approach will tend to require more samples to achieve the same accuracy compared with the approaches that additionally utilize the estimated weights. The complexity is 𝒪(β· |A|) = 𝒪(|V|) runs of the model counter. §.§.§ Average μ_2. This approach combines the previous two as follows. We act similarly to direct estimation but instead of μ_2 = 0, we set: μ_2 = mean(V). This way, we assume that all samples v ∈ V belong to the middle Gaussian. The complexity of this approach consists of the weights' estimation and of the average-approach. Note that the new average-μ_2-approach has the disadvantage that it might take certain subsets into account twice: first, in w_1 or w_3 and second, as a sample in V. This might lead to the under- or overestimation of the mean value of the distribution. Later, we will describe a filtering technique to avoid this problem. §.§.§ Fit μ_2. Another solution for this problem is to estimate the position of the second Gaussian (i.e., μ_2) by applying the Expectation-Maximization (EM) algorithm <cit.>. In statistics, the EM algorithm is an iterative method to estimate the model parameters of a distribution where the model depends on unobserved variables. In our case, the unobserved variable corresponds to the Gaussian to which a point belongs. In this way, the aforementioned subsets will not be taken into account twice: instead, the EM algorithm will “assign” them to the leftmost or the rightmost Gaussian, respectively. To be more precise, our adaptation of the EM algorithm runs as follows: Fix w_1, w_2, w_3 (simpler or finer weights), μ_1 = -1, and μ_3 = 1 so that they stay constant during the algorithm; The expectation step of the algorithm is the same as in the original EM algorithm; however, in the maximization step, we only change μ_2 according to the EM algorithm. We expect this algorithm to perform most accurately since it overcomes the disadvantages of the previous approaches. The running time is the running time of the average-approach plus the running time of the EM algorithm. The latter depends on the number of iterations needed until convergence (normally a small constant) and one iteration is linear in |V| (and hence in |A|). §.§ Filtering As we mentioned earlier, a disadvantage of the average-μ_2-approach is that it tends to take certain subsets into account twice. The fit-μ_2-approach solves this problem with the EM algorithm just assigning such samples to the leftmost or the rightmost Gaussian which are already fixed. This approach, in turn, has the disadvantage that such subsets are useless because they contain no new information. To solve both of these problems, we introduce the filtering-technique. 
In the filtered average-μ_2- and filtered fit-μ_2-approaches, instead of sampling from all subsets of A, we only sample subsets ∅ ≠ X ⊆ A such that (assuming we use simpler weights): X ∩ Neg(A,B) = ∅ and X ⊈ Com(A,B). This way, no subset will be taken into account twice. § RESULTS §.§ The Description of the Dataset In this section, we evaluate the accuracy of our heuristics. We have run the algorithms on synthetic argument maps. As we mentioned at the beginning, the real argument maps are mostly small, so that both the #SAT solving and the computation of the one-sided coherence can be carried out efficiently. For this reason, in this project, we were more interested in larger argument maps and so tested the approximation algorithms on synthetic data. The synthetic argument maps were created according to Section <ref> with the following parameters: * n ∈{50, 100, 150, 200} * γ = ψ = 0.5 * α∈{0.3, 0.5, 0.7} * k ∈{3, 5, 7, 10} We have tried out different combinations of these parameters in order to make the dataset as varied as possible. However, for some combinations (e.g., n = 200 and α∈{0.5, 0.7}) the running time of model counting became infeasible (especially if it had to be done for every subset of an opinion). For this reason, there are more argument maps with smaller n, and larger argument maps are created with typically smaller α. After that, for every argument map, we create a set of opinion pairs where opinions have sizes 5, 7, and 10. For every argument map and every opinion size, we tried to create several pairs of opinions. Again, we encountered the problem that the computation of the ground truth (i.e., the one-sided coherence) for some pairs of opinions was infeasible (i.e., the model counting took too long). For this reason, some combinations of parameters have been excluded (e.g., for argument maps consisting of 200 statements, there are only opinions of size 5). In total, we have created 63 argument maps and 1627 pairs of opinions. The plots have the following structure. On the x-axis, we have the exact value of the one-sided coherence, and on the y-axis, we have the result of a certain algorithm. Both axes have the range [-1, 1]. Thus, for an accurate approximation algorithm, the points fit the line x = y; the line is also shown for convenience. More details will be provided later. §.§ Direct Estimation Here we have tested direct estimation with simpler and finer weights (see Figure <ref>). Recall that this approach does not use any samples, so we did not expect high accuracy. First, we observe that the plots for simpler and finer weights look very similar. Indeed, it turns out that in our dataset, for less than 1% of the pairs of opinions we have either Com(A, B) ≠ Impl(A,B) or Neg(A, B) ≠ Cntr(A,B). For the remainder, both equalities hold, and hence, finer and simpler weights coincide. For this reason, in practice, we suggest replacing the time-consuming computation of finer weights with simpler weights, and we will not distinguish between simpler and finer weights in the following. We also observe that with increasing |Neg(A,B)|, the accuracy increases noticeably. Indeed, there is an explanation for it. Recall the definition of w_1: w_1 = (2^|A| - 2^(|A| - |Neg(A, B)|)) / (2^|A| - 1) ≈ 1 - 1/2^|Neg(A, B)| Thereby: * For |Neg(A, B)| = 1, we estimate the confirmation values for 1/2 of all subsets correctly. * For |Neg(A, B)| = 2, we estimate the confirmation values for 3/4 of all subsets correctly. * For |Neg(A, B)| = 3, we estimate the confirmation values for 7/8 of all subsets correctly.
* Etc. At this point, we want to recall one of the most prominent applications of mutual coherence: the “Wahl-O-Mat” system. In practice, there is normally a set of candidates such that any two of them certainly disagree on several statements. Thus, for any voter, her position will also necessarily differ from the positions of most candidates on several statements and hence, the direct estimation will yield an accurate result for most (voter, candidate)-pairs. We also emphasize that in such an application the relative order of the mutual coherence values is important and not their absolute values. Nevertheless, there will possibly still be a (small) set of candidates such that |Neg(voter, candidate)| is small and a more accurate approximation technique is required. For this small number of pairs, one of the sample-based approaches can be applied. §.§ Sample-based Approaches The results are demonstrated in Figure <ref>, where the color denotes β (which determines the number of used samples β· |A|). The plots confirm our expectations. First of all, for the same number of samples, the average-approach results in the lowest accuracy since it only takes samples into account and does not utilize the weights. Especially on the negative side of the plot (i.e., where the exact one-sided coherence is negative), the results of the fit-μ_2-approach deviate from the line x=y significantly less, since the confirmation values are estimated correctly for more subsets than just the samples. As we have also supposed in the previous section, the average-μ_2-approach tends to underestimate and overestimate the value on the negative and the positive side of the plot, respectively. This happens because the subsets with confirmation values -1 and 1 can be taken into account twice. This effect is most evident for the yellow points corresponding to the largest value of β. Finally, the results of the fit-μ_2-approach fit the desired line x = y and the error decreases with increasing β. §.§ Filtering In Figure <ref>, we present the plots for the filtered average-μ_2- and filtered fit-μ_2-approaches. For comparison, the original fit-μ_2-approach is plotted there too. We expected that filtering would lead to higher accuracy because all samples are “useful” and no subset is taken into account twice. Although it is not obvious from the plot, the results confirm the expectation. For more details, we refer to Appendix <ref>. Here is only a short summary: the filtered average-μ_2-approach leads to the highest accuracy, after that comes the filtered fit-μ_2-approach and then the original fit-μ_2-approach. A further advantage of the filtered average-μ_2-approach is that it does not require running the EM algorithm and hence, it is faster than the (filtered) fit-μ_2-approach. We have also tested robustness to make sure that the accuracy of the filtered average-μ_2-approach does not depend on parameters other than β. The plots can also be found in Appendix <ref>. §.§ Summary In Figure <ref>, we provide the results of the different approaches for β = 3. As we already stated, the filtered average-μ_2-approach leads to the highest accuracy and is the most efficient among the sample-based approaches, and hence, it is the unique leader and the main result of this paper. If only small computational power is available, the direct estimation can be applied instead. This approach becomes more accurate with increasing |Neg(A, B)|. In practice, two opinions differ on at least several statements and hence, a non-trivial approximation will be obtained.
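To make the recommended estimator concrete, the sketch below outlines the filtered average-μ_2 procedure with simpler weights. Statements are again encoded as signed integers, and confirmation(X, B) stands in for the Kemeny-Oppenheim value obtained from model counts (e.g., with ganak); both are illustrative assumptions rather than the exact implementation used in the paper.

import random

def filtered_average_mu2(A, B, confirmation, beta=3, rng=random):
    A, B = set(A), set(B)
    neg = {x for x in A if -x in B}                        # Neg(A, B)
    com = A & B                                            # Com(A, B)
    n = len(A)
    w1 = (2**n - 2**(n - len(neg))) / (2**n - 1)           # fraction of subsets with Conf = -1
    w3 = (2**len(com) - 1) / (2**n - 1)                    # fraction of subsets with Conf = +1
    w2 = max(0.0, 1.0 - w1 - w3)
    # Rejection-sample beta*|A| subsets X with X ∩ Neg = ∅ and X not a subset of Com.
    free, samples, attempts = list(A - neg), [], 0
    while len(samples) < beta * n and w2 > 0 and attempts < 1000 * beta * n:
        attempts += 1
        X = {x for x in free if rng.random() < 0.5}
        if X and not X <= com:
            samples.append(confirmation(X, B))
    mu2 = sum(samples) / len(samples) if samples else 0.0
    return w1 * (-1.0) + w2 * mu2 + w3 * 1.0               # estimated OneCoh(A, B)

MutCoh(A, B) is then approximated by averaging this estimate with the symmetric call filtered_average_mu2(B, A, confirmation).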
§ CONCLUSION In this paper, we dealt with the efficient approximation of one measure of mutual coherence. First, we have encountered the problem of missing test data and as a side result, we have developed an algorithm for the creation of synthetic argument maps. This algorithm gets several parameters as the input determining the size, the density, and the form of created AMs. It also accepts the distribution of the number of premises of argument: this way, we can create synthetic AMs similar to a given real AM. The process is randomized, so if we run it multiple times with the same input, we obtain multiple argument maps with similar properties. After that, we generated argument maps and used them to test the approximation. To solve the main problem of this work (i.e., the approximation of mutual coherence), we have studied the definition of the measure of mutual coherence. First of all, the mutual coherence can be split into two symmetric parts - we call each of them the one-sided coherence. To get an efficient approximation of the mutual coherence, it is enough to get such an approximation for the one-sided coherence. This value is defined as the average confirmation value over all subsets of an opinion. The definition of a confirmation value contains an instance of the #SAT problem. Thereby, for the naive computation, the exponential (in the size of the opinion) number of iterations is required and in every iteration, an instance of the SAT model counting problem needs to be solved. This, in turn, required exponential (in the size of the argument map) time in the worst-case. However, the state-of-the-art-#SAT-solver are more efficient in practice. Our heuristics reduce the number of iterations and hence, the number of runs of a model counter. The key observation was stated in Lemma <ref> was that it is possible to efficiently determine (in general non-trivial) lower bounds on the number of subsets with confirmation values -1 and 1. This already reduces the number of needed runs of a model counter. In Lemma <ref>, we have improved the lower bounds but their computation became more time-consuming. However, we have later observed that in practice, both lemmas provide the same bounds and hence, we suggest using the results from Lemma <ref>. Next, we have observed that the distribution of confirmation values of an opinion pair can be modeled as a mixture of three Gaussians. In our heuristics, we sample a linear (in the size of the opinion) number of subsets, apply a #SAT solver to compute the confirmation values, and then estimate the parameters of the distribution in different ways. Finally, we compute the mean value of the distribution and this is then the desired approximation. One algorithm (direct estimation) is fully polynomial-time and if the number of statements on which two opinions differ is non-zero, then we already obtain a good approximation. And the accuracy increases if this number grows. We have also mentioned that in the Wahl-O-Mat application, for most opinion pairs, this number is typically large enough to achieve high accuracy. For the remaining opinion pairs, one of the more time-consuming sample-based approaches can be used. We have shown that with a filtering technique, we obtain an accurate algorithm whose running time requires only a linear (instead of exponential for the naive computation) number of runs of the model counter. A possible direction for future research would be first to look for further subsets whose confirmation value can be computed efficiently. 
§ PROOF OF LEMMA <REF> Let A, B ⊂ S be positions. Let Cntr(A, B) def= { x ∈ A | B ⊨_I ¬{x}} and Impl(A, B) def= { x ∈ A | B ⊨_I {x}} Then: * For every X ⊆ A with X ∩ Cntr(A, B) ≠ ∅: Conf(X, B) = -1. And: | { X | X ⊆ A, X ∩ Cntr(A, B) ≠ ∅}| = 2^|A| - 2^(|A| - |Cntr(A,B)|) * For every ∅ ≠ X ⊆ Impl(A, B): Conf(X, B) = 1. And: | { X | ∅ ≠ X ⊆ Impl(A, B) }| = 2^|Impl(A,B)| - 1 We prove the claims of the lemma separately. * Consider X ⊆ A with X ∩ Cntr(A, B) ≠ ∅. There exists x ∈ X ∩ Cntr(A, B). For an arbitrary complete consistent position P extending B, it holds P(x) = False since B ⊨_I ¬{x}. Because of x ∈ X, the position P does not extend X. Thereby, B ⊨_I ¬X and hence, Conf(X, B) = -1. The subsets of A which are disjoint from Cntr(A,B) are exactly the subsets of A ∖ Cntr(A, B) and hence, there are 2^(|A| - |Cntr(A, B)|) of them. * Consider ∅ ≠ X ⊆ Impl(A, B) and some x ∈ X ⊆ Impl(A, B). By the definition of Impl(A, B), we have: B ⊨_I {x}, i.e., every complete consistent position extending B extends {x} too. Since this holds for every x ∈ X, every complete consistent position extending B extends X too. Thereby, B ⊨_I X and Conf(X, B) = 1. And there are 2^|Impl(A, B)| - 1 non-empty subsets of Impl(A, B). § DIRECT ESTIMATION WITH LINEAR SLOPE ON THE POSITIVE SIDE In Subsection <ref>, we have presented the accuracy of the direct estimation. We observe that although the approach is rather accurate on the negative side, a slope is missing on the positive side. For this reason, we have tried out the following. Instead of the proven lower bound w_3 = (2^|Com(A, B)| - 1) / (2^|A| - 1), we have used the linear relation: w_3' = |Com(A, B)| / |A|. However, this can lead to w_1 + w_3' > 1. Since we did not want to change w_1 too, for such pairs of opinions, we have not changed the value of w_3. The results of this adaptation can be found in Figure <ref>, on the right. This approach is more accurate than the original direct estimation, but we have no explanation for this heuristic, so we have not included it in the main part of the paper. § COMPARISON OF THE SAMPLE-BASED APPROACHES In Subsection <ref>, we claimed that the filtered average-μ_2-approach results in the best accuracy. However, the difference is not obvious from the plots provided there. For this reason, here we present the mean squared error of the approaches mentioned there depending on β.
β | fit-μ_2 | filtered fit-μ_2 | average-μ_2 | filtered average-μ_2
0.5 | 0.0135 | 0.0052 | 0.0559 | 0.0035
1 | 0.0050 | 0.0031 | 0.0419 | 0.0019
2 | 0.0023 | 0.0020 | 0.0412 | 0.0010
3 | 0.0016 | 0.0015 | 0.0425 | 0.0007
4 | 0.0013 | 0.0012 | 0.0433 | 0.0005
5 | 0.0011 | 0.0011 | 0.0439 | 0.0004
§ ROBUSTNESS ANALYSIS OF THE FILTERED μ_2-APPROACH In Figure <ref>, we have plotted the distribution of the squared errors of the filtered average-μ_2-approach depending on the size of the opinion (top), the size of the argument map (middle), and the parameter α (bottom) for different values of β. The plots show that the accuracy of the approach does not decrease with increasing α and n. However, with increasing size of the opinion, the error increases very slowly. § GENERATION OF ARGUMENT MAPS IN PSEUDOCODE
http://arxiv.org/abs/2307.01898v1
20230704194250
Generative Artificial Intelligence Consensus in a Trustless Network
[ "Edward Kim", "Isamu Isozaki", "Naomi Sirkin", "Michael Robson" ]
cs.DC
[ "cs.DC" ]
Generative Artificial Intelligence Consensus in a Trustless Network Edward Kim, Isamu Isozaki, Naomi Sirkin, Michael Robson ======================================================================= We performed a billion locality sensitive hash comparisons between artificially generated data samples to answer the critical question: can we verify the "correctness" of generative AI output in a non-deterministic, trustless, decentralized network? We generate millions of data samples from a variety of open source diffusion and large language models and describe the procedures and trade-offs between generating more versus less deterministic output in a heterogeneous, stochastic network. Further, we analyze the outputs to provide empirical evidence of different parameterizations of tolerance and error bounds for verification. Finally, given that we have generated an enormous amount of simulated data, we also release a new training dataset called ImageNet-Gen for use in augmenting existing training pipelines. For our results, we show that with a majority vote between three independent verifiers, we can verify generated images via perceptual hash consensus with over 99.89% probability, while the chance of an intra-class perceptual hash collision is less than 0.0267%. For large language models (LLMs), we are able to gain 100% consensus using greedy or n-way beam search decoding, demonstrated on several different LLMs. In the context of generative AI training, we pinpoint and minimize the major sources of stochasticity and present gossip and synchronization training techniques for verifiability. Thus, this work provides a practical, solid foundation for AI verification and consensus for the minimization of trust in a decentralized network. § INTRODUCTION Generative Artificial Intelligence (GenAI) represents one of the most impactful and consumer-pervasive advancements in artificial intelligence technology in recent years. This form of AI is designed to learn the distribution of the training data, and sample from the learned manifold to create content, e.g. text, images, music, or other complex signals. It marks a significant shift from traditional AI models that are primarily used to analyze and interpret data, as typically seen in discriminative supervised tasks such as classification or regression. The rise of generative AI has also been enabled by the increasing availability of data and computational power. In fact, AI is growing exponentially in every aspect: in usage and adoption, as well as in cost to train, model parameterization, data, and compute. For example, ChatGPT reached 1 million users in just 5 days, a feat that took 3.5 years for Netflix, and 2 years for Twitter. OpenAI's GPT-3 required over $12M USD in training costs, and the carbon footprint of training this model was equivalent to the output of 126 Danish homes for an entire year <cit.>. This does not even cover the cost of obtaining and labeling massive amounts of data. The cost, size, and compute necessary for GPT-4 are an order of magnitude larger, and intractable to train or deploy by anyone except for a handful of the largest industry players. However, advancements in model quantization, hardware, and cloud computing have made it possible to train and deploy increasingly complex models on consumer grade hardware. Thus, a significant effort is underway in the field to democratize artificial intelligence.
One of the main goals of the democratization of AI is to expand the benefits of AI beyond a small group of elite researchers and companies, and to ensure that everyone has access to the tools and resources they need to take advantage of AI. While open source software is a core component, there is also a need for "open source" hardware. This concept is not new and has been explored in decentralized cloud networks. For example, BOINC <cit.>, which stands for Berkeley Open Infrastructure for Network Computing, is an open-source software platform for distributed computing. It allows volunteers to donate unused processing power from their personal computers to scientific research projects. The system is designed to manage and utilize the computing resources of thousands of volunteers across the globe, effectively creating a massive, distributed supercomputer. This allows researchers to conduct large-scale computations without the need for a dedicated or centralized infrastructure. Some well-known projects include SETI@home <cit.>, which searches for extraterrestrial intelligence, and Folding@home <cit.>, which simulates protein folding for disease research. In recent years, a different kind of decentralized network emerged. These blockchain networks were incentivized so that the operators of the network could be rewarded for providing decentralized compute to secure a shared ledger. Due to the algorithms and economics of the Ethereum network, it quickly became the largest decentralized GPU compute cloud in the world, surpassing any other decentralized cloud entity by orders of magnitude. "Mining" was the process of finding a solution to a simple, yet computationally demanding cryptographic problem. In general, this is called Proof of Work (PoW), a consensus mechanism used in blockchains to verify and add new blocks to the chain. The computational power of the other miners is used to validate the transactions and maintain the integrity of the blockchain. This idea of solving hard problems whose solutions are easy to verify is not too dissimilar to the relationship between training a neural network and the inference process. As a step toward this goal, we investigated how to utilize the power of blockchain networks for other purposes, e.g. inference and training of a generative machine learning algorithm. However, a decentralized cloud infrastructure must utilize trustless computing principles that do not normally apply when dealing with a known entity. In a decentralized ecosystem, you do not know who you are interacting with, and the danger of malicious or adversarial actors increases by orders of magnitude. To address these issues, the contributions of this work are the following. We investigate and present techniques to scale up the computational capabilities of a decentralized blockchain network beyond the validation of transactional blocks. We present algorithms that can execute ML tasks and monitor other nodes for fraudulent output and, in our results, provide empirical evidence of different parameterizations of tolerance and error bounds for verification. Given that we have generated an enormous amount of simulated data, we also release a new training dataset called ImageNet-Gen for use in augmenting existing training pipelines.
In the context of generative AI training, we pinpoint and minimize the major sources of stochasticity and present gossip and synchronization training techniques for verifiability. In essence, this work demonstrates that we can verify generative AI work in both image and language generation with minimal overhead and extremely high precision in a decentralized, trustless machine learning network. § BACKGROUND §.§ Background in Generative AI Algorithmically, deep learning currently dominates nearly all applications of artificial intelligence and machine learning, and has shown incredible success in the past several years. These improvements can be attributed to multiple factors, where two major contributors were access to large amounts of data, and large amounts of compute power. Today, training and running these models requires an enormous amount of compute, usually accelerated in cloud infrastructure using high-powered GPU hardware. Since the introduction of Generative Adversarial Networks (GANs) <cit.>, generative AI has made remarkable strides. Today, it's used in a wide range of applications, from creating realistic imagery and generating art to synthesizing high-quality speech and writing. Stable Diffusion <cit.>, is a deep learning, text-to-image model primarily used to generate detailed images conditioned on text descriptions. It uses a latent diffusion model, which involves training the model to remove successive applications of Gaussian noise on training images, functioning as a sequence of denoising autoencoders. The model consists of three parts: a variational autoencoder (VAE) <cit.>, a U-Net <cit.>, and text encoder <cit.>. The VAE compresses an image from pixel space to a smaller dimensional latent space, capturing a more semantic meaning of the image. Gaussian noise is then iteratively applied to this compressed latent representation during forward diffusion. The U-Net block, composed of a ResNet <cit.> backbone, denoises the output from forward diffusion to obtain a latent representation. The VAE decoder then generates the final image by converting this representation back into pixel space. This process can be conditioned on a string of text, an image, or another modality. Large language models (LLMs) are another class of generative AI that have been trained on large textual datasets. These models have shifted the focus of natural language processing research away from training specialized supervised models for specific tasks. Despite being trained on simple tasks such as predicting the next word in a sentence, LLMs with sufficient training and parameter counts capture much of the syntax and semantics of human language and demonstrate considerable general knowledge within their training corpus. §.§ Background in Decentralized Verification and Consensus The field of machine learning verification is in its infancy. In the traditional definition, verification involves making a compelling argument that the system will not misbehave under a broad range of circumstances <cit.>. This involves the need to consider unusual inputs crafted by an adversary, not just naturally occurring inputs as testing alone is insufficient to provide security guarantees <cit.>. However, we are dealing with a nuanced type of verification of correctness. In the context of a decentralized network, a consensus model is a mechanism that ensures all participants in a distributed network agree on the content of a shared database. 
It is the protocol by which the nodes in the network agree on a single version of the truth, despite the presence of faulty nodes or those with malicious intent. The consensus model is the core idea to minimize the need for trust in a blockchain system, where consensus ensures that every transaction is validated according to a set of agreed-upon rules. Unlike traditional centralized systems where a single authority validates transactions, in a blockchain, multiple nodes participate in the validation process <cit.>. This decentralization enhances security and transparency but requires the following properties to be effective <cit.>. (1) Agreement - all honest nodes must agree on the same value. (2) Validity - if all honest nodes propose the same value, they must decide on that value. (3) Termination - every honest node must eventually reach a decision. (4) Integrity - a node decides on a value at most once. In other words, once a node has made a decision, it cannot change it. And (5) Fault Tolerance - the consensus model should be able to function correctly even if some nodes fail or act maliciously. Consensus is not the only way to verify in a trustless system; you can also use a cryptographic proof. A SNARK (Succinct Non-Interactive ARguments of Knowledge) is a cryptographic primitive that allows one party, called the prover, to convince another party, called the verifier, that a given statement is true, without the verifier needing to perform the actual computation. Cryptographic techniques like Zero-Knowledge Proofs (ZKPs) and verifiable computing enable a party to prove that a computation was performed correctly without revealing the details of the computation itself. These methods provide strong guarantees of correctness while preserving privacy. Despite these benefits, cryptographic methods are extremely computationally expensive, often times adding 10,000x more work on the prover. While existing SNARKs exist and have been demonstrated to work on smaller neural network models <cit.>, their proving times are unusable for the large diffusion and language models being deployed today. Thus, this work focuses on the consensus mechanisms rather than cryptographic proofs to verify generative machine learning tasks. §.§ Why Gaining Consensus in Generative AI is Hard As stated previously, in order to have an effective consensus model, and ultimately verify the correctness of machine learning tasks in a distributed, decentralized system, honest nodes must come to agreement and must agree on the same value. However, in the realm of machine learning and deep learning, the parallel nature of the computations can introduce a slight non-determinism. This is because floating-point operations are not exactly associative, and when executed in parallel, the order in which they are executed can vary. This can cause tiny differences in the results, which may accumulate over time. Additionally, some GPUs and hardware accelerators have built-in randomness. For example, the NVIDIA Tensor Cores use probabilistic rounding <cit.>, which can make results slightly different even if everything else is kept constant. Software-wise, machine learning frameworks often introduce non-determinism. For example, PyTorch, like many other deep learning frameworks, involves operations that can yield different results across multiple executions, even when using identical seeds <cit.>, see Figure <ref>. 
This non-deterministic behavior can be attributed to factors such as the use of multi-threading, which can lead to race conditions, or specific hardware and software configurations that introduce variability. While you can limit the sources of non-deterministic behavior or use deterministic algorithms instead of non-deterministic ones where available, this often comes at the cost of performance. As a concrete example, CUDA convolution operations, which use the cuDNN library, can be a source of non-determinism. This is due to the benchmarking feature of cuDNN, which can select different algorithms for convolution operations based on the size parameters <cit.>. Disabling this feature can lead to more deterministic but potentially slower performance. § METHODOLOGY Assuming a decentralized network of machine learning nodes tasked to perform inference or training work on generative AI, how do we know the machine learning node did what it was supposed to do? Recall in the typical deterministic consensus setting, multiple nodes can re-run the instructions and check the results. This checking mechanism is typically performed by computing a cryptographic hash of the output or state. If the instructions were all run correctly, then any validator should have identical state hashes. Any minor change in the state (like altering a single bit) leads to a substantial change in the output hash, i.e. each of the output bits changes with a 50% probability. This effect, avalanche effect, can be seen in common hash functions such as the SHA-1 or MD5 hash function. If a single bit is modified, the resulting hash sum becomes entirely different. This makes it extremely difficult to predict the output of the hash function based on a given input, which is a key aspect of its security. §.§ Locality-Sensitive Hashing In our case, we employ different algorithms that exhibit the Locality-sensitive hashing (LSH) property that probabilistically groups similar input items into the same “buckets”. Unlike traditional hashing techniques that aim to minimize hash collisions, LSH intentionally maximizes them. This technique can also be viewed as a method for reducing the dimensionality of high-dimensional data, where high-dimensional input items are transformed into lower-dimensional versions while maintaining the relative distances between items. We utilize the following set of perceptual hashes (a hash string that approximates the visual characteristics of an image), that are commonly used in image hashing, Average Hash (aHash): This is a type of perceptual hash that works by resizing the image to a small, fixed resolution, converting it to grayscale, calculating the mean pixel value, and then generating a hash based on whether each pixel is above or below the mean. Perceptual Hash (pHash): This is a more complex type of perceptual hash that involves the Discrete Cosine Transform (DCT). The image is resized and converted to grayscale, the DCT is applied, the top-left portion of the DCT matrix (which represents the lowest frequencies) is retained, the mean value is calculated excluding the first element, and a hash is generated based on whether each value is above or below the mean. Difference Hash (dHash): This is another type of perceptual hash that works by comparing the relative gradients of the pixel values. The image is resized and converted to grayscale, each pixel is compared to its neighbor, and a hash is generated based on whether each pixel is greater than or less than its neighbor. 
Color Hash (cHash): This involves generating a hash based on the colors in an image. It involves resizing the image to a small, fixed resolution and generating a hash based on the quantized color values of each pixel. These types of perceptual hashes are used in search by image, e.g. Google images searching <cit.>, or things like identifying songs with the same fingerprint (Shazam) <cit.>. For our purposes, the perceptual hashes work well, but there are some slight variations in the hash as shown by Figure <ref>. §.§ Generative Image and Language Models Our image generation experiments are centered around stable diffusion models and fine-tuned variants. Stable diffusion supports the ability to generate new images from scratch through the use of a text prompt describing elements to be included or omitted from the output. The architecture of the model includes a variational autoencoder (VAE), a U-Net, and text encoder. Gaussian noise is iteratively applied to the compressed latent representation during forward diffusion. The U-Net denoises the output from forward diffusion backwards to obtain a latent representation. Finally, the VAE decoder generates the final image by converting the representation back into pixel space. Popular fine-tuning methods have been employed to personalize the diffusion models, including fine-tuning all of the weights, or low rank and textual fine-tuning methods. §.§.§ Low-Rank Adapation The LoRA, or Low-Rank Adaptation <cit.>, proposes a new method for adapting large-scale pre-trained language models to specific tasks or domains. Full fine-tuning (retraining all model parameters) becomes less feasible as the size of models continue to expand, i.e. fine-tuning the 175 billion parameters in GPT-3 is prohibitively expensive. Low-Rank Adaptation (LoRA) freezes the pre-trained model weights and introduces trainable rank decomposition matrices into each layer of the Transformer architecture. This approach significantly reduces the number of trainable parameters for downstream tasks. LoRA can reduce the number of trainable parameters by orders of magnitude. In the context of image generation, LoRAs have been used to guide the diffusion process towards particular concepts or visual representations. §.§.§ Textual Inversion Textual inversions <cit.> are an alternative approach to personalizing text-to-image generation. Using only a small number of images (typically 3-5 images) of a user-provided concept, such as an object or a style, inversions learn to represent visual concepts through new “words” in the embedding space of a frozen text-to-image model. These words can be composed into natural language sentences, guiding personalized creation. Interestingly, a single word embedding is sufficient for capturing unique and varied concepts. §.§ Large Language Model Generation Text generation is crucial for many NLP tasks, including open-ended text generation, summarization, translation, and more. The text generation process, known as decoding, can be customized to improve the quality of the generated output and reduce repetition. However, in our case, our goal is to produce quality output that can be verified by other nodes. The generation configuration that yields deterministic results uses a simple decoding strategy called greedy search, which picks the token with the highest probability as the next token. 
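As an illustration of the kind of configuration meant here, a deterministic greedy-decoding call with the Hugging Face transformers library might look like the following; the checkpoint and prompt are placeholders patterned after the models and template used in our experiments, not the precise configuration.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6b"   # placeholder checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "### Human: Please write a description about tabby cat ### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy search: no sampling and a single beam, so every verifier running this call
# should reproduce the same token sequence (modulo rare floating-point ties).
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False, num_beams=1)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))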
For a more globally “aware” generation strategy, we can also utilize beam-search decoding, which keeps several hypotheses at each time step and eventually chooses the hypothesis that has the overall highest probability for the entire sequence. We minimize the stochasticity of the output by turning off strategies such as multinomial sampling, which randomly selects the next token based on the probability distribution over the entire vocabulary, and beam-search multinomial sampling. § EXPERIMENTS AND RESULTS For our experiments, we set up a heterogeneous, decentralized GPU network consisting of (8x) 3080ti GPUs running on 1x PCI-e risers, (4x) 3070ti GPUs running on GPU risers, (4x) 3060ti GPUs, and two other machines each running a single 3090 connected directly to the motherboard via PCI-e 16x slots. There are an additional (4x) A40 Nvidia GPUs connected in an NVLink configuration. In total, the tasks described below were performed on a mix of 22 GPUs. Unless specified in the experiment, the GPUs were selected to perform a task at random. Each machine was running in an Ubuntu 22 environment with CUDA 12 and torch 2.0. We structure our results as follows: (1) We first compute the independent likelihoods of determinism in several image and language models. (2) We then use these likelihoods to derive an algorithm to verify machine learning inference tasks on a decentralized network. (3) Next, we describe how to perform verification in the training setting, and lastly (4) share the derived generated dataset for reproducibility and to benefit the community at large. §.§ Consensus Likelihoods of Image and Language Generation In our first experiment, we generate images from different variants of stable diffusion <cit.>, including the base v1.5 model, a fine-tuned version of stable diffusion, LoRA or low-rank adaptations <cit.>, and textual inversions <cit.>. We use a template “A photo of {}”, where the class variable inserted is a class from the ILSVRC challenge <cit.>. We generate 7 images with identical prompts and seeds, for a total of approximately 7000 images per image model. The results of our generation can be seen in Table <ref>. We compute four different types of perceptual hashes and compare the results. Within each class (a group of 7 images), every image is hashed and compared with the others. The mode hash is selected as the “correct” one, and the number of outliers is computed across all thousand categories and reported as the outliers. We also record the number of outliers when allowing for a tolerance of 1 (Ot<=1) or 2 (Ot<=2) hash discrepancies (hamming distance of 1 or 2). Additionally, we report the average hamming distance (aDist) of the outliers and the time to compute a single hash on a 512x512 image. While the color hash technique yields the best accuracy results, the average hash demonstrates nearly the same performance, yet with a speed-up of approximately 3.5x. The consensus percentages are computed from the number of outliers over the total number of images hashed. This can be used as the likelihood that an image generated with the same prompt and seed will generate the same perceptual hash. For language generation, we sample several open-source LLMs and run the same prompt over four different machines and GPUs. The five LLMs tested were the Wizard Vicuna 13B <cit.> in 4-bit quantization mode, the Vicuna 7B <cit.> in 8-bit quantization, Red Pajama 7B <cit.> in 8-bit quantization, the Red Pajama 3B <cit.>, and GPT-J 6B <cit.> in 8-bit quantization.
We utilized the “instruct” version of the LLM and provided a prompt, “### Human: Please write a description about {name} ### Assistant:”, where the name comes from the 1000 ImageNet classes. A total of 4000 sentence generations were created across different decoding techniques and beam searches, as shown in Table <ref>. Even with the GPU non-determinism, we saw no stochasticity in the LLM outputs when greedy or n-beam methods were used to decode. When explicitly specifying multinomial sampling, where the model selects the next token based on the probability distribution over the entire vocabulary, we do observe the expected non-deterministic behavior. As a final note, although we did not observe any mismatched tokens between GPUs, we believe that there is a non-zero chance of a token mismatch in the greedy and n-beam case. Floating-point error drift could, in the rare edge case where two candidate tokens have nearly identical probabilities, flip the generated next token; however, this would be an extremely rare occurrence. §.§ Fraud Proof and Tolerance in a Decentralized Network In the context of Ethereum optimistic rollups, “fraud proofs” allow for the verification of transaction states. They operate on the principle of optimistic validation, where all blocks are assumed to represent valid states until proven otherwise. A verifier can challenge a transaction state and run a validation over a subset of the transactions. If the fraud proof detects an issue, the transactions in the batch are eliminated, and the batch is returned to a previously verified valid state. If no fraud proofs are produced during a dispute period, the state change is presumed to be correct. Given the independent likelihoods of determinism computed from the previous experiment, we can now derive a machine learning fraud detection algorithm for a decentralized network. We provide posterior probabilities of detecting incorrect or fraudulent behavior in the case where we assume the majority of nodes in the network are honest (Figure <ref>(a)), and the case where we require a super-majority (Figure <ref>(b)). We use P(X=k) = \binom{n}{k} p^k (1-p)^{n-k}, where the binomial distribution gives the probability of getting exactly k successes (defined as generating the correct perceptual hash) over n independent verifiers. P(X=k) is the probability of getting exactly k verifications, where p is the probability of generating a correct output (derived above). Verification of Correctness - Type I Error - For a majority vote and a tolerance of 2, we can achieve 99.843%, 99.988%, and 99.999% verification with 3, 5, and 7 independent verifiers. For a super-majority (greater than 2/3) and a tolerance of 2, we can achieve 99.692%, 99.960%, and 99.999% verification accuracy with 4, 7, and 10 independent verifiers; these probabilities demonstrate strong verification with minimal redundant work. Another type of error is the accidental or malicious generation of a perceptual hash without performing the task, or guessing a hash based upon the given prompt. To assess the risk of this error, we generate over 1.3M images using stable diffusion. Our simulation mimics a scenario where a malicious actor sees the prompt and tries to guess a perceptual hash. Thus, these guess images are generated using the same prompt as the one provided (a photo of {}), but with a different seed.
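The collision check itself reduces to an all-to-all Hamming comparison of the perceptual hashes within each class; a minimal sketch is shown below (function and variable names are illustrative, and the hashes are assumed to be boolean bit arrays such as those produced by the earlier average-hash sketch).

```python
import numpy as np
from itertools import combinations

def count_collisions(hashes, tolerance=2):
    """All-to-all comparison of perceptual hashes within one class.

    Two hashes "collide" when their Hamming distance is <= tolerance.
    Returns the number of collisions and the number of pairwise comparisons.
    """
    collisions = 0
    comparisons = 0
    for a, b in combinations(hashes, 2):
        comparisons += 1
        if np.count_nonzero(a != b) <= tolerance:
            collisions += 1
    return collisions, comparisons

# With 1300 images per class, each class contributes 1300 * 1299 / 2 = 844,350
# pairwise comparisons; over 1000 classes this is roughly 0.84 billion comparisons,
# matching the order of magnitude reported below.
```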
Verification of Correctness - Type II Error - For each category, we generated 1300 images (a total of 1.3M) and performed an all-to-all average-hash comparison to count collisions. The total number of hash comparisons here is approximately 0.85 billion. The probability of an intra-class perceptual hash collision is less than 0.0267% when allowing for a hamming distance tolerance of <=2. This indicates that there is a near-zero chance that an adversary would be able to guess the perceptual hash, even with information about the given prompt. §.§ Verification of Generative Training We now turn our attention to the case of generative AI training. In particular, we focus on the textual inversion fine-tuning case; this is a likely scenario in a distributed, heterogeneous GPU network. Textual Inversion is a technique for capturing novel concepts from a small number of example images. It learns new “words” in the text encoder's embedding space, which are used within text prompts for personalized image generation. Importantly, the weights of the U-Net and VAE are frozen; thus, the only trainable parameters are those of the 768-dimensional word embedding. Similar to the inference case, we would need the ability to perform a fraud proof over training epochs. Thus, we need to identify the major sources of controllable stochasticity and minimize them to allow for replication. Figure <ref> identifies the six controllable sources of randomness that we minimize: horizontal flips of the data, random choice of the template, training data shuffling, Gaussian noise for the latents, random choice of timestep, and the noise scheduler. We remove these sources of stochasticity and train several dozen textual inversions randomly sampled from the huggingface concepts library [https://huggingface.co/sd-concepts-library]. For each concept, we present 5-10 exemplar images, and train the model with the following hyperparameters: learning rate is 5e-04, maximum train steps is 2000, batch size is 4, gradient accumulation is 1, and checkpoints are generated every 50 steps. The model is trained in mixed 16-bit precision. Each textual inversion training is performed six times. The first three are deterministic runs, in which we minimize all controllable sources of stochasticity. The next three runs are ablations over the possible sources of randomness. For run 4, we allow horizontal flips in the training process, for run 5 we allow the input dataset to be shuffled, and for run 6, we do not set the seed for training and noise. We take the six runs, perform PCA over the 768-dimensional space, and project the checkpoint embeddings onto principal components 1 and 2. The plots of the training checkpoints of all six runs can be seen in Figure <ref>. [Figure: Training a textual inversion with resyncing of checkpoints every 300 iterations on the <kitchenrobot> inversion. (1) Three training instances (Sync Deter 1, 2, 3) are introduced and synchronized to DeterR1 during training. (2) At the start, there is some variation within the training, but within the error bounds of all three deterministic runs. (3) By the end of the 2000 training iterations, the synchronized trainings and Deterministic R1 are nearly identical. The graph shows four overlapping vector projections from DeterR1 and Sync Deter 1, 2, and 3.] We observe that minimizing the stochasticity leads to nearly deterministic training.
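A minimal sketch of how these controllable sources of randomness can be pinned in a PyTorch training script is shown below; the flag names and the use of a dedicated torch.Generator are illustrative assumptions, not the exact implementation of our training harness.

```python
import random
import numpy as np
import torch

def make_deterministic(seed: int = 0):
    # Fix every library-level RNG that the training loop touches.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Prefer deterministic kernels (e.g., disable cuDNN autotuning of conv algorithms).
    torch.backends.cudnn.benchmark = False
    torch.use_deterministic_algorithms(True, warn_only=True)
    # A dedicated generator for latent noise and timestep sampling, so that
    # every verifier draws the identical noise/timestep sequence.
    return torch.Generator(device="cpu").manual_seed(seed)

gen = make_deterministic(seed=1234)

# The six controllable sources of randomness from the text, pinned for replication:
use_horizontal_flip = False          # (1) no random flips of the training images
template = "a photo of a {}"         # (2) fixed prompt template instead of a random choice
shuffle_training_data = False        # (3) DataLoader(..., shuffle=False)
# (4) latent noise and (5) timestep drawn from the shared generator, e.g.:
#   noise = torch.randn(latents.shape, generator=gen)
#   timestep = torch.randint(0, num_train_timesteps, (batch_size,), generator=gen)
# (6) the noise scheduler is constructed with fixed, explicit parameters.
```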
There are cases in Figure <ref>(b) and Figure <ref>(e) where the random error begins to propagate and cause drift in the final textual embeddings. To control for this, we demonstrate a gossip training procedure in Figure <ref>, in which the checkpoints of the deterministic nodes are synchronized every few checkpoints (here, every 6 checkpoints, or 300 steps). Given the gossip training mechanism, the final embedding weights of the verifiers follow the trajectory of the trainer and provide much tighter error bounds. Finally, we provide qualitative outputs of trained textual inversions in Figure <ref>. As expected, the (a), (b), and (c) outputs are perceptually close, given that their models had controlled sources of stochasticity, while (d) introduced random flips, (e) introduced data shuffling, (f) allowed random seeds, and (g) allowed all randomness parameters. §.§ ImageNet GenSD v1.5 As an additional contribution of this work, we developed and published a new ImageNet dataset of generated images using Stable Diffusion v1.5. This dataset was a result of the check for Type II errors, where we compute the intra-class collisions to provide assurances of the minimization of accidental or malicious errors. The dataset mimics the size and class structure of the original ImageNet database, i.e. it contains approximately 1300 images per class over 1000 classes for a total of 1.3 million images. Each category is generated with the text prompt “A photo of {}” using seeds ranging from 1 to 1300. Images are 512x512 in resolution. While this dataset can be used to replicate our results, we also note that synthetic datasets such as these have been used in the past to improve image classification. For example, <cit.> showed that the use of 1.2M synthetic images to augment the training set provided the most benefit (more synthetic data actually hurts performance), and gives a 1-2% overall accuracy improvement in the Top-1 match. The dataset is now available for download at the following huggingface link [https://huggingface.co/datasets/ek826/imagenet-gen-sd1.5]. A random sample of the generated images can be seen in Figure <ref>. § CONCLUSION In conclusion, we demonstrate that we are able to verify the correctness of machine learning tasks in a distributed, decentralized network. We tackle a particularly challenging ML task: verifying both the inference and the training of generative AI. We present the likelihoods of an honest node producing an output that can be verified by other independent nodes in the network, and in the process we generated millions of data samples and performed billions of hash comparisons. We provided empirical evidence of different parameterizations of tolerance and error bounds for verification as the groundwork for a fraud proof in a blockchain network. Our results show that, with minimal overhead and extremely high precision, we can verify generative AI work in both image and language generation, and a malicious actor has a near-zero chance of exploiting the algorithm. The study also identified and minimized the major sources of stochasticity in generative AI training and presented gossip and synchronization training techniques for verifiability. In addition to this, we released our data for verification and to act as a new training dataset, ImageNet-Gen, which can be used to augment existing training pipelines. In summary, we provide a robust and practical foundation for AI verification and consensus, significantly reducing the need for trust in a decentralized network.
§ ACKNOWLEDGEMENTS Thank you to Slawomir Johansen for the development work and for the discussions with Johnny La, Zachary DeStefano, Kevin Choi, and 0xSMB.
http://arxiv.org/abs/2307.02044v1
20230705055849
The distribution of Ridgeless least squares interpolators
[ "Qiyang Han", "Xiaocong Xu" ]
math.ST
[ "math.ST", "cs.IT", "math.IT", "stat.TH" ]
http://arxiv.org/abs/2307.03126v1
20230706165220
Context-Aware Configuration and Management of WiFi Direct Groups for Real Opportunistic Networks
[ "Valerio Arnaboldi", "Mattia Giovanni Campana", "Franca Delmastro" ]
cs.NI
[ "cs.NI", "cs.LG" ]
Context-Aware Configuration and Management of WiFi Direct Groups for Real Opportunistic Networks Valerio Arnaboldi, Mattia G. Campana, and Franca Delmastro IIT-CNR, Via G.Moruzzi, 1 56121, Pisa, ITALY {v.arnaboldi, m.campana, f.delmastro}@iit.cnr.it Submitted July 6, 2023 Wi-Fi Direct is a promising technology for the support of device-to-device communications (D2D) on commercial mobile devices. However, the standard as-is is not sufficient to support the real deployment of networking solutions entirely based on D2D, such as opportunistic networks. In fact, WiFi Direct presents some characteristics that could limit the autonomous creation of D2D connections among users' personal devices. Specifically, the standard explicitly requires the user's authorization to establish a connection between two or more devices, and it provides limited support for inter-group communication. In some cases, this might lead to the creation of isolated groups of nodes which cannot communicate with each other. In this paper, we propose a novel middleware-layer protocol for the efficient configuration and management of WiFi Direct groups (WiFi Direct Group Manager, WFD-GM) to enable autonomous connections and inter-group communication. This enables opportunistic networks in real conditions (e.g., variable mobility and network size). WFD-GM defines a context function that takes into account heterogeneous parameters for the creation of the best group configuration in a specific time window, including an index of nodes' stability and power levels. We evaluate the protocol's performance by simulating three reference scenarios including different mobility models, geographical areas and numbers of nodes. Simulations are also supported by experimental results related to the evaluation of the involved context parameters in a real testbed. We compare WFD-GM with the state-of-the-art solutions and we show that it performs significantly better than a Baseline approach in scenarios with medium/low mobility, and is comparable with it in the case of high mobility, without introducing additional overhead. Wi-Fi Direct, Opportunistic Networks, D2D, Power consumption, Context-Awareness § INTRODUCTION The number of active mobile devices recently surpassed the world population, according to GSMA Intelligence[https://www.gsmaintelligence.com/]. People often carry multiple devices in their pockets, each equipped with several wireless interfaces. This generates a number of opportunities for the users to create wireless communications. Currently, wireless interfaces are mainly used to access the Internet through fixed infrastructures (WiFi access points or cellular base stations). Despite this, many interfaces also support direct communication between devices (device-to-device communication, or D2D). The support for D2D on commercial mobile devices such as smartphones, tablets and laptops has attracted the interest of researchers in the field of mobile and pervasive computing <cit.>.
In fact, it can foster the creation of distributed services that run on mobile devices and rely on the network formed by multiple direct connections between them to coordinate operations and to disseminate user-generated contents, without the need for a fixed infrastructure. In this context, since devices follow people movements, the structure of the network is usually unstable, and nodes (or groups of nodes) might end up being temporarily isolated from the rest of the network. To enable networking functions in these conditions (e.g., routing and content dissemination), several protocols have been proposed in the literature <cit.> based on the store-carry-forward paradigm and dealing with variable delays during message propagation. These protocols paved the way for the definition of new context- and social-aware distributed services and applications based on D2D, such as Mobile Social Networks (MSN) <cit.>, and novel mobile-based recommendation systems for content dissemination <cit.> <cit.>. However, to effectively deploy opportunistic networks in a real environment, through available commercial mobile devices, we must face with technical constraints introduced by the available communication standards like Bluetooth, WiFi, WiFi Direct (hereinafter WFD), NFC, and their implementation on mobile operating systems. NFC interfaces have a very limited range (< 20cm), which results in the need for the users to put the devices in physical contact to create a connection between them, and this is not reasonable for opportunistic networks. On the other hand, 802.11 standard originally provided an “ad hoc” mode, in which devices could communicate directly with each other in a peer-to-peer manner, but this mode has been explicitly removed by Android (unless the device is rooted) and iOS. Commercial devices typically support the tethering mode, through which a device can act as a hotspot Access Point (AP) in order to share its Internet access. In this case, each node has a fixed role as AP or client, and two nodes with the same role cannot communicate. In addition, it has been demonstrated that the AP mode heavily consumes device's energy <cit.> <cit.> <cit.> since IEEE 802.11 standard does not include any power saving mechanism for the AP (assuming it as a continuously powered device). D2D communications are also available through Bluetooth and WFD, which both introduce power saving techniques. However, they require the explicit authorization by the user for each connection establishment. Specifically, Bluetooth pairing requires the selection of a pin, while WFD asks for the acceptance of a pop-up notification during the connection phase. These features limit the creation of autonomous connections and the deployment of opportunistic networks. Moreover, both standards do not support communication among groups, even if these are in proximity. This prevents the content dissemination in the network, maintaining the groups isolated. In this paper, we propose a novel middleware-layer protocol for the efficient configuration and management of WFD groups (WFD Group Manager, WFD-GM) to enable the creation of opportunistic networks in real conditions. WFD-GM exploits the main features of WFD standard to discover devices in proximity, and then it exchanges context information among nodes in order to compute (in a distributed manner) a context function defined to identify the best group configuration in a specific time window. 
It is designed for Android commercial devices, since it represents one of the most diffused mobile operating system and the most open to third-party development. Specifically, WFD-GM combines two mechanisms of WFD standard. As a first step, it uses the Service Discovery function designed to support zero-configuration networking protocols (e.g., UPnP[https://openconnectivity.org/resources/specifications/upnp/specifications] and Bonjour[https://support.apple.com/bonjour]) on top of Wi-Fi connections. In this procedure, nodes are able to exchange the SSID and key of the group they belong to. This allows devices to avoid the manual user authorization for D2D connections and they can autonomously connect to each other. Even though this procedure overcomes the security level introduced by the mobile operating system, it operates at a first authentication level. We will show in the next section how it is possible to maintain additional security levels in opportunistic networks even implementing a simple key exchange procedure during the service discovery, as presented in <cit.>. WFD-GM includes also the definition of a context function that takes into account heterogeneous features of the devices (e.g., battery level, list of neighbors) in order to identify the best group configuration. In fact, group configuration and establishment in WFD requires the identification of a node that assumes the role of `Group Owner' (GO), mainly acting as AP for the group, and the others acting as clients. A node can become GO in an autonomous way (i.e., directly creating its own group) or by a negotiation phase between two devices in proximity. However, selecting and creating the best group configuration is not always sufficient to guarantee an optimized network coverage and content dissemination. In fact, by using only this initial procedure, nodes reside in the same group until they are in proximity and/or they have not consumed their resources. In this case, the network might be formed of several isolated groups. For this reason, we introduced an additional procedure in WFD-GM for selected nodes that are in the communication range of two or more separated groups. Specifically, it can force the disconnection of a client from the original group and its subsequent connection to another group in proximity, making that node a traveler between the two groups and contributing to disseminate contents in the network. On the other hand, a GO can decide to merge its group with others in proximity through a specific procedure. The rest of the paper is structured as follows. Section <ref> describes the main Wi-Fi Direct operations. Section <ref> provides an overview of the existing research work in this field. Section <ref> presents the details of WFD-GM, and in Section <ref> we present the evaluation metrics and the experimental results obtained in three realistic scenarios. Finally, in Section <ref> we draw our conclusions and present directions for future works. § WI-FI DIRECT WFD is based on the definition of P2P groups, in which one device (called Group Owner or simply GO) implements the functionalities of a IEEE 802.11 AP and the others act as clients. In addition, WFD implements power saving services running on the GO in favor of its clients and the GO is in charge of running a DHCP to assign IP addresses to the clients to enable the communication <cit.>. The clients of a P2P group can be both P2P-enabled or legacy devices (i.e., not supporting WFD). 
In the latter case, clients cannot exploit the enhanced features of WFD, but they may join the P2P group by connecting to the GO as they typically do with a traditional AP. WFD allows devices to establish a P2P group through three different procedures: i) Standard, in case two or more nodes discover each other and there is a negotiation phase for the GO election; ii) Autonomous, when a device autonomously decides to create a group and becomes the GO, announcing itself through beacon messages; and iii) Persistent, in case the devices use stored configuration parameters of a previous group to re-establish the same group and speed up the process. Each of the three procedures exploits the main functionalities of WFD. Specifically, they mainly rely on (i) Peer (optionally Service) Discovery, (ii) GO Negotiation, and (iii) WPS Provisioning. In the following, we briefly describe the main characteristics of these features, highlighting their advantages and drawbacks in supporting a real opportunistic network. Peer Discovery In order to create a communication group, two P2P devices must first discover each other. The Peer Discovery phase usually starts with a traditional 802.11 Wi-Fi scan, through which the devices are able to find existent P2P groups and traditional WLAN networks. After this scan, the following discovery algorithm is executed. First, a P2P device randomly selects one of the so called Social channels (i.e., channel 1, 6, and 11 in the 2.4 GHz band) as its own Listen Channel, i.e., the channel on which it will “listen” for discovery messages coming from other devices. The chosen Listen Channel remains the same until the Peer Discovery is completed. Then, the device continuously switches between two operative states: search and listen. When it is in the former state, the device sends Probe Request messages to each of the Social channels; instead, when it is in the latter one, the device listens for Probe Requests in its Listen Channel in order to respond with Probe Response messages. Finally, two devices discover each other when they are on the same channel, but in different discovery state. Convergence of two devices on the same channel is assisted by randomizing the time spend in each state. Typically, this time is randomly distributed between 100 ms and 300 ms <cit.>, but the actual amount of time is implementation dependent. GO Negotiation Once two devices have discovered each other, they proceed with the Standard group formation, where the GO Negotiation procedure begins. This phase implements a three-way handshake used to agree on which device shall become GO, and the channel to be used for the communication. During the negotiation, nodes exchange a GO Intent (GI): an integer value (from 1 to 15) with which a device expresses its willingness to act as GO. The device which sends the higher intent becomes the owner of the group. In order to prevent conflicts during the GO election (e.g., if two devices send the same GO Intent), a Tie breaker bit is randomly set to 0 or 1 every time a GO Negotiation Request is sent. The device with the Tie breaker bit set to 1 will be elected as GO. Generally, the GI value is not related to the actual suitability of a node to act as GO. In Android, upper-layer applications can specify a GI, otherwise the WFD framework simply sets it with a random value[https://developer.android.com/guide/topics/connectivity/wifip2p.html]. Service Discovery The Service Discovery is a WFD optional feature and it represents an extension of the Peer Discovery. 
In fact, it adds a message exchange phase among nodes in proximity by exploiting the Generic Advertisement Service (GAS) protocol defined in IEEE 802.11 <cit.>. GAS is a link layer query/response protocol that allows two non-associated 802.11 devices to exchange queries coming from a higher layer protocol (e.g., Bonjour or UPnP). When a requester discovers another peer, it transmits one or more GAS Initial Request frames, and the target responds with one or more GAS Initial Response frames if it exposes some services. This procedure can be performed both as a complete discovery procedure, collecting additional information for the GO selection, and after the group formation to periodically check devices in proximity and to dynamically manage groups. According to <cit.>, Peer Discovery and GO Negotiation phases require several seconds, especially in the Standard procedure, introducing thus a not negligible delay in the group formation. For this reason, in WFD-GM we decided to avoid the GO Negotiation phase, and we exploit the Service Discovery procedure to exchange context information among devices related to single nodes' characteristics (e.g., available computational resources, or the battery status). This information is then used by each node to evaluate its suitability to become GO of the group as the result of a context function. In fact, one of the main targets of our protocol is to select the best GO in the surroundings in order to establish a stable and long lasting communication group, in addition to speed up the group formation process. Then, once the group is created, WFD-GM performs a periodic Service Discovery procedure and dynamically evaluates the group configuration, depending on the information shared by surrounding devices, activating, in case they are needed, traveling or merge operations. In addition, in order to make the group formation as much autonomous as possible, WFD-GM also avoids the explicit user's authorization for D2D communication. To this aim, it acts on the WPS Provisioning phase, as described below. WPS Provisioning The main purpose of this phase is to establish a secure connection between the GO and the group members, after the explicit user's authorization (through a PIN confirmation or an Accept button). WFD implements thus the Wi-Fi Simple Configuration (WPS) <cit.> protocol, by requiring that the GO generates and issue the network credentials to its clients. WPS uses WPA-2 with a randomly generated Pre-Shared Key (PSK) as security measure to protect the connections, and the Advanced Encryption Standard (AES)-CCMP in order to encrypt the transmissions. In this case, the user's authorization mainly focuses on the authentication process of the connecting device, often ignoring the reason of the connection request. If we consider a middleware framework designed to support the establishment of opportunistic networks, and the users willing to participate to the network through their mobile devices, we can also envision that the framework could obtain a general user authorization during the installation phase and autonomously manage the devices' connection while maintaining the data encryption mechanism. Then, additional security measures could be defined at the upper layers to implement secure routing mechanisms, trust management and cooperation protocols, and application/user specific privacy protections. 
This approach has been largely studied in the literature and presented in <cit.>, highlighting the different levels of security we can implement in an opportunistic network while supporting the autonomous generation and management of groups of nodes. Following this approach, WFD-GM exploits the Service Discovery procedure, running after the group creation, to exchange the encrypted network credentials of the GO with the nodes' in proximity. In this way, those nodes can autonomously join the WFD group as legacy clients, without any user intervention to authorize the connection. Then, upper-layer security mechanisms (which are currently not provided by WFD-GM) can be implemented to guarantee an efficient access control and trust operations among nodes running the same middleware framework and WFD-GM protocol. § RELATED WORK In the last few years, WiFi Direct has generated a lot of interest in the opportunistic networking research community. However, most of the works in the literature focused on the experimental evaluation of basic standard features (with a limited number of nodes), trying to overcome WFD limitations through hacks and/or by rooting the devices. We can also divide related work depending on their main optimization target: (i) selection of the best GO, (ii) autonomous group formation (bypassing the user's authorization), and (iii) inter-group communication. §.§ Selection of the best GO As previously described, GO selection represents one of the most important phases of the entire protocol since a “good” GO can be able to guarantee a communication path among the highest number of nodes in proximity, and to improve the entire group performance. WD2 <cit.> is an algorithm aimed at automatically selecting a GO based on the Received Signal Strength Indication (RSSI) measurements. In this case, each device collects the RSSI reading from nearby devices, and a GO Intent (GI) value is calculated based on such collected measurements. The devices then exchange their GI values during a modified discovery phase. The device that exposes the highest GI value creates the group. WD2 has been validated on simple network topologies composed by a maximum of five Android devices, and it effectively speeds up the standard Android group formation. However, it requires a modified implementation of WFD native framework, which limits the applicability of the algorithm in real scenarios. Other researchers propose more advanced strategies for the selection of the best GO candidate. For Menegato et al. <cit.>, the device who act as GO should change dynamically, and the choice of a new GO should be based on the residual energy of the candidates. In <cit.>, the authors proposed three different approaches to choose the GO: i) the device with the higest ID in the surroundings, ii) the peer that has the shortest average distance from the other nodes, iii) the node with less mobility with respect to its neighbors. However, considering only a feature at a time could be not sufficient to manage the complex dynamic that typically govern a real mobile scenario. For instance, the device that discovers the highest number of neighbors might also be the one with the lowest battery level. Selecting it as the GO would lead to a more extended group, but probably characterized by a very limited duration. WFD-GM leverages a combination of several features to evaluate the suitability of a node to act as GO in a specific context, as described in Section <ref>. 
§.§ Autonomous group formation Once a GO is selected, the other peers must connect to it in order to start the communication. As described in Section <ref>, the WPS Provisioning phase might represent a limitation to the use of WFD in mobile scenarios. In the literature, researchers proposed different approaches in order to allow users' personal devices to autonomously form WFD groups (i.e., without asking for the explicit user's authorization). Wong et al <cit.> have been probably the first to tackle this problem. They exploited WFD ability to support legacy devices and the Service Discovery in order to avoid user intervention in the formation of groups. The device that elects itself as GO, sends the security credentials of the group to the other peers through a Service Discovery Response message. In this way, the peers in proximity can connect to it as legacy clients without the need of the user's intervention. This solution exploits the same approach that we adopted in WFD-GM to avoid the explicit user's authorization but it does not take into account the other two fundamental features: (i) the best GO selection and (ii) the communication among isolated groups. In addition, it does not take into account security issues derived from previous operations. The solution proposed in <cit.> exploits the same approach and includes a simple criterium to elect the GO: the best candidate is the node with the highest battery level among those in proximity. In addition, exploiting the ability of WFD to support legacy devices, this solution also enables inter-group communication, but it requests a customization of the native WFD framework. §.§ Inter-group communications On current WFD implementations, especially on Android OS, each group is characterized by the same IP subnet. Thus, even if different groups could be interconnected, due to the presence of some nodes in both communication ranges, this is not possible <cit.>. Some recent work attempt to bypass the IP subnet constraint in different ways. Specifically, Casetti et al. <cit.> proposed a solution that allows a GO to manage a group and, at the same time, to connect itself as a legacy client to a second group. The system exploits a combination of unicast and broadcast messages in order to transmit data among different groups, introducing however a non-negligible overhead in the overall communication. A recent work <cit.> proposes an algorithm which exploits the Service Discovery mechanism in order to allow devices to negotiate distinct IP subnets before the establishment of the groups. Once the GOs agree on the IP subnets, each of them creates its own group and uses its proposed IP subnet. However, the solution is based on a customization of the Android WFD framework implementation to force the replacement of the default fixed IP subnet with the negotiated one. This limits the applicability of the solution on a broader set of devices, and consequently does not allow large-scale deployment of opportunistic networks. §.§ Other approaches Other solutions, such as <cit.> and <cit.>, embed application messages directly into Service Discovery frames. This kind of approach does not require any infrastructure, connections, or groups formation for data exchange, relying only on the service discovery announcements and requests to propagate messages between peers. 
Even though this approach could represent a valid solution to exchange small amount of data between devices (e.g., alerts or advertisements), it has a very limited bandwidth, which might not be sufficient for many real world situations. § WIFI DIRECT GROUP MANAGER WFD-GM combines all the operations sketched in Section <ref>, implemented through Android SDK version 14-25. The protocol, as detailed in Procedures <ref>, <ref>, and <ref>, runs on each single device. In order to minimize the time required for a group formation and to optimize the credentials exchange, WFD-GM starts on each node by creating a WFD group in which the local device autonomously becomes GO (initially without any associated client). In Android, this operation is performed through the API, which also automatically generates the SSID and the group credentials (i.e., the WPA2-PSK key). This information is then included in the Service Discovery frames to allow autonomous connections. After this operation, five parallel procedures start and keep running until the termination of the protocol. The first procedure is the , which performs a continuous WFD Service Discovery and maintains an updated list (L_N) of the devices in proximity. The messages exchanged during the Service Discovery include, in addition to the group's credentials, an index of the local node suitability to become/remain GO of a larger group. We define s(ln) (called Suitability index) as a function of the following set of context features: i) r_ln, the amount of available resources of the local device (e.g., battery level, free CPU, free memory), ii) pp_ln, the current number of peers discovered in proximity, iii) c_ln, the capacity of the node (i.e., the number of incoming connections that the device can still accept), and iv) st_ln, the stability index, which provide a measure of the ability of the node to create a long lasting WFD group (i.e., a group that will not be rapidly destroyed due to the local node's mobility). More formally: s(ln) = ω_1 · r_ln + ω_2 · pp_ln + ω_3 · c_ln + ω_4 · st_ln, where the weights ω_1,⋯,4 govern the relative importance of each feature in the overall computation of s(ln). The stability index st_ln evaluates both the mobility of the local node and how much its surrounding environment changes over time. Currently, we consider it as a function of the nodes in proximity (L_N), but more complex approaches can be taken into account (e.g., a function of the geographical locations visited by the node in the past). The procedure is in charge to update st_ln every T_st seconds as follows. Every time L_N changes, it calculates the difference between the current list of neighbors and the one of the previous time window, then computing the Jaccard index of the two lists. Then, it updates a running average J̅ of the Jaccard indices calculated since the last update of st_ln. Finally, the stability index is updated with the following formula: st_ln = st'_ln·ω_st^1 + J̅·ω_st^2, where st'_ln is the stability index calculated in the previous time window of T_st seconds, and the weights ω_st^1 and ω_st^2 govern the relative importance of the past stability index in the current computation. The procedure is in charge of updating the information included in the Service Discovery, and to this aim it uses the last updated stability index st_ln. In order to manage the group dynamically, reflecting nodes' mobility, the protocol defines two asynchronous procedures running concurrently to the previous ones. 
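Before detailing those procedures, the following is a minimal sketch of how the suitability index s(ln) and the Jaccard-based stability index st_ln defined above could be computed; the per-feature normalization is an illustrative assumption (the text does not fix a particular scaling), while the weights shown are those later used in the simulations.

```python
def jaccard(current_neighbors, previous_neighbors):
    """Jaccard index of two neighbor lists (sets of node identifiers)."""
    a, b = set(current_neighbors), set(previous_neighbors)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def stability_index(prev_st, mean_jaccard, w_st1=0.4, w_st2=0.6):
    # st_ln = st'_ln * omega_st^1 + J_bar * omega_st^2
    return prev_st * w_st1 + mean_jaccard * w_st2

def suitability_index(r_ln, pp_ln, c_ln, st_ln,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      max_peers=20, max_capacity=15):
    # s(ln) = w1*r_ln + w2*pp_ln + w3*c_ln + w4*st_ln.
    # The normalization of peers and capacity to [0, 1] below is an assumption
    # for illustration.
    pp = min(pp_ln / max_peers, 1.0)
    c = min(c_ln / max_capacity, 1.0)
    w1, w2, w3, w4 = weights
    return w1 * r_ln + w2 * pp + w3 * c + w4 * st_ln

# Example: 80% battery, 6 peers in proximity, room for 4 more clients,
# and a fairly stable neighborhood (previous st = 0.7, mean Jaccard = 0.9).
st = stability_index(0.7, 0.9)
print(suitability_index(0.8, 6, 4, st))
```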
The procedure constantly listens for incoming connection requests from other devices and for clients' disconnections from the local group, maintaining an updated list of the group members (G_M). This events are only managed by GO nodes and, after each event, they broadcast a message to all their clients containing the updated G_M list, allowing them to maintain an updated view of the group. Note that WFD does not provide this feature natively. The second procedure is the , which is in charge of receiving and processing incoming control messages depending on the local node status. After launching these procedures, the protocol executes its main loop in which, every T_D seconds, it checks the status of the local variables and the current role of the local node to choose and execute the appropriate action. Specifically, if the local node is a GO, it can be in one of the following status: GO1:has no associated client (G_M=∅), but the list of peers in proximity is not empty (L_N ≠∅). The node must decide whether to remain GO or to connect as a client to another peer, using the procedure. This compares the s(ln) of the local node with those received by the others. If the local node has the highest s(ln), it remains GO and waits for incoming connections, otherwise it connects to the best GO as a client. GO2:has some connected client (G_M≠∅) but the amount of resources consumed to manage the current group is beyond a predefined threshold res_th. The node sends a message to its clients in order to alert them that it is destroying the group for limited resources. Then, it disbands the group and comes back to the inital status GO1. GO3:has discovered other GOs in proximity (GO_N), with or without associated clients. It executes the procedure (Procedure <ref> and Fig. <ref>), aimed at evaluating the advantages of merging its local group with the others in proximity in order to form a larger group. The procedure firstly selects the best GO in proximity from the GO_N list, based on the suitability index s_ln. If the best GO for the merge (go_best) is not the local node, this asks to its client if go_best is in their respective proximity ranges and waits for the responses. Then, if the majority of the clients respond positively, the local node sends a message to its clients (to notify the merge decision), disbands the group and connects to go_best. Otherwise, it maintains its current status role of GO. If the local node is a client, it can be only in the C1 status and executes the procedure (Procedure <ref> and Fig. <ref>). Specifically, with probability p_T, inversely proportional to the group cardinality |G_M|, it disconnects from its current group, and places the GO in a blacklist (L_B) for a fixed amount of time (T_Btravel) to avoid considering it as potential GO during the subsequent GoElection procedure. Finally, the local node returns to the GO1 status in order to choose which group to connect among those in the GO_N list. A node assuming the role of client performs then additional actions depending on the type of message it receives from its current GO. Specifically, it can receive the following messages: GROUP_BYE:means that the GO is disbanding the current group and a new one will be formed. Therefore, it places the GO in L_B for T_B seconds, and it comes back to its initial GO status (without clients), GO1. VISIBILITY_REQ(go_best):means that the current GO is evaluating a possibile merge operation and it selected the new best GO (go_best). 
The local node must verify its proximity to go_best and appropriately reply to the current GO. MERGE_WARNING:means that the GO decided to disband its group in favour of a new GO, indicated as go_best in the previous message. The local node places the GO in L_B for T_B seconds and it sends a connection request to the go_best if in proximity, otherwise it comes back to its initial GO1 status in order to execute the procedure. WFD-GM is thus able to autonomously create connections between devices in proximity and to manage the creation of optimal groups (with respect to the s(ln) value of the GOs). In addition, it allows inter-group communication through groups' merge operations and travelling clients. § EXPERIMENTAL EVALUATION To evaluate WFD-GM performances and compare it with some reference solution, we decided to implement also a Baseline protocol. It just implements the group's creation by using a simple rule to select the GO among nodes' in proximity: it chooses the one with the highest MAC address. Baseline executes the GO selection at the beginning of the protocol and the GO maintains its role until the end of its resources or in case it moves out of the connectivity range of all the group's members. It basically exploits WFD Service Discovery to exchange the group's credential to enable autonomous connections among nodes, and it does not implement any additional strategy for the group management (e.g., merge operations or traveling nodes). Therefore, Baseline is comparable with the state-of-the-art solutions presented in Section <ref>. We compare WFD-GM and Baseline in three simulation scenarios representing three real world use cases involving a variable number of nodes characterized by different mobility models. In addition, since WFD-GM is characterized by a context function to evaluate the suitability of a node to assume or maintain the role of GO (Eq. <ref>), we need an estimation of the parameters to be included in the simulation set up. To this aim, we conducted a set of real experiments aimed at evaluating the resource consumption on real commercial devices related to the execution of main WFD operations (i.e., Service Discovery, Message exchange) and the capacity of a node in terms of maximum number of acceptable connections. Therefore, in this section we present, firstly, the experimental results related to power consumption and groups' configuration. Then, we describe the characteristics of the simulation scenarios, the metrics used to compare the performance of Baseline and WFD-GM, and the achieved results. §.§ Context parameters estimation The main purpose of WFD-GM is to allow mobile devices to autonomously create an opportunistic network and to dynamically manage its configuration according to nodes' mobility and with a fairly usage of the nodes' resources. Clearly, the most critical resource for a smartphone is represented by the battery consumption, which also represents a critical factor for the user experience while running a mobile app or framework on the personal mobile device. We decided thus to model the power consumption on simulated nodes with respect to the main WFD operations implemented in our protocol in order to provide also an evaluation of the overhead introduced by WFD-GM on real devices. To this aim, we carried out an experimental evaluation of the time required to entirely consume the battery of some commercial devices while assuming the role of GO or client in different configurations. 
We performed a series of empirical experiments with different smartphones (LG Nexus 5 and Motorola Nexus 6) equipped with different versions of Android (6.0.1 and 7.1.2). In the first set of experiments, we considered a node assuming the role of GO without any associated client and we evaluated the power consumption while simply maintaining the GO status, and running a Service Discovery procedure every 2 minutes (the default time duration set in the Android P2P Framework[see the com.android.server.wifi.p2p.WifiP2pServiceImpl.java class in the Android P2P Framework source codes]). We repeated this experiment with an incremental number of peers in proximity (up to 10 devices). We observed that there is no relevant difference between the two operations in terms of battery consumption. The overall measured cost is the same as maintaining the Wi-Fi interface active without performing any network connection or data transfer. In both experiments, the battery depletion is strictly linear, with a fall of approximately 20% of the battery capacity every 5 hours. We used this data to update the battery level of each node during the simulation in case it performs the WFD Service Discovery procedure and it is not connected to any other peer. Then, we performed a set of experiments to estimate the battery consumption of each node involved in a WFD group with a variable number of members (from 1 to 4 connected clients). The limit of 4 clients per group reflects the capacity of the commercial devices we used in our experiments (i.e., LG Nexus 5 and Motorola Nexus 6). In fact, we experienced that, for both models, when this limit is reached, the DHCP module running on the GO is not able to assign additional IP addresses to new clients and their connection request fails. The number of supported incoming connections strictly depends on the manufacturer's implementation, and it cannot be changed by the applications on not-rooted devices. In fact, we experienced different group cardinality with other commercial devices (e.g., up to 10 clients for a HTC Nexus 5X or Xiaomi Mi5 running as GO). Thus, to reproduce a realistic scenario, in which users are equipped with heterogeneous mobile devices, we assigned to each node in the simulation a capacity parameter randomly chosen between 4 and 15 (i.e., the maximum number of acceptable incoming connections). To estimate the battery consumption in a group configuration, we deployed a simple application in which devices create the group and each member constantly sends data to the others with a frequency of one message every 100 milliseconds. Even if the transmission frequency can be quite high for a mobile application use case, it lets us to model the battery depletion in a worst case scenario. Figures <ref> and <ref> respectively show the discharge curve of the battery on each single node. We can note that the curves follow a linear trend, but the GO generally discharges faster then the clients. This is due to the fact that, in a WFD group, the GO is also in charge of enabling communication between clients, forwarding all the messages exchanged by the members of its group. To exploit these results in our simulation scenarios, we used a linear regression model to predict the battery consumption on a single node involved in larger groups. Figures <ref> and <ref> show the predicted discharging curves for both the GO and the clients in groups with up to 20 members. 
Formally, the battery level at a certain time is given by the following linear function: b_l(t,n) = t · (p_1 · n - p_2) + 1, where t is the time in hour and n is the number of clients in the group. The values of p_1 and p_2 that we found by fitting real battery consumption data differ by the role played by the device in the group: p_1 = -0.006802 and p_2 = - 0.03356 if it is the GO, otherwise p_1 = -0.003365 and p_2 = - 0.04075 if it is a client. §.§ Simulated use case scenarios We implemented three use case scenarios by using the ONE simulator <cit.>. Specifically, we envisioned three different application environments involving different numbers of users, with different mobility patterns and with a different geographical distribution: a concert, a convention venue and a working day in an European metropolitan city (Helsinki). In the Concert scenario, we replicated a medium-size concert with an audience of 1000 seated people, arranged in a 20 x 25 grid in an area of 500 m^2. We assumed that people are seated for the entire duration of the concert (i.e., 3 hours), without interruptions or users' movements during the exhibition (i.e., a static scenario). In the convention scenario (namely, ComiCon), users are characterized by a moderate mobility. In this case the simulation lasts for 4 hours and the simulated geographical area is modelled as a grid of 4000mt x 2000mt. In such space, we distributed a total of 575 points of interest (POIs - e.g., exhibitors stands, toilets, eateries) in order to replicate the characteristics of a big convention of comics and games (e.g., the New York Comic Con[http://www.newyorkcomiccon.com]), and we simulated 2000 users moving following the ShortestPathMapBasedMovement model implemented in ONE. This represents a map based movement model (i.e., the grid in this case) that uses Dijkstra's algorithm to find the shortest path between two random POIs. The simulation nodes are characterized by a speed in the range of [ 0, 1.5 ] m/s and each of them remains in a given POI for t_w seconds, where t_w is randomly drawn from [ 600, 3600 ] seconds. These parameters allowed us to model possible queues and crowds around the stands, characterizing the users with a moderate mobility, which is very common in a convention scenario. Finally, we simulated an urban scenario (called Helsinki), in which users are characterized by a high degree of mobility. We used the Working Day Movement model <cit.> implemented in ONE in order to simulate a typical working day of 4000 people. The mobility model uses several highly customized mobility sub-models that define nodes' behavior during different daily activities in the Helsinki city center, such as staying at home, working, and evening activities with friends. To simulate the movements between home and work, and between work and possible meeting points for evening activities, the model defines three additional mobility sub-models that are combined for each single node: car travel mobility, public transportation mobility, and walking mobility. For a complete description of the mobility model, the reader can refer to <cit.>. In this kind of scenarios, stable groups may be rare and the structure of the network configuration continuously evolves over time. We expect that ComiCon and Concert scenarios highlight the advantages of using WFD-GM protocol since the limited nodes' mobility could create network's partitions, limiting the content dissemination among nodes of different groups. 
In addition, these crowded scenarios can benefit from the autonomous generation and management of opportunistic networks, reducing the load of infrastructured wireless networks (characterized by limited capacity of APs or limited bandwidth). On the other hand, in an urban scenario with high mobility, we expect that WFD-GM performs similarly to Baseline, while not introducing additional overhead. In the following section we present the evaluation metrics and we discuss the experimental results. §.§ Evaluation metrics and results discussion Baseline and WFD-GM share the same parameter T_D, which defines how frequently the two protocols take a decision, according to the status of the local node (Procedure <ref>). Therefore, we simulated each scenario for different values of T_D (i.e., 5, 30, and 60 s), considering that the default value for a Service Discovery duration is 120s. In addition, we used the following values for the WFD-GM parameters: ω_1,…,4 = 0.25 are the weights of the Suitability index (Eq. <ref>), ω_st^1 = 0.4 and ω_st^2 = 0.6 are used to compute the stability index (Eq. <ref>), res_th = 0.1 is the resource threshold, and T_B = T_B_travel = 60s are the blacklist times. During each simulation, both Baseline and WFD-GM create a network of multi-hop paths among the nodes. This network can be represented as a graph, called Connectivity Graph, in which two nodes are directly connected (1 hop) if they have participated in the same WFD group during the simulation. Formally, the Connectivity Graph CG = (V,E) is an undirected and weighted graph, where V is the set of nodes, and each edge e_a,b∈ E between two nodes a and b is labeled with a weight representing the total connection time for two nodes (i.e., the sum of all the connection times between them). In order to evaluate the CG generated by the two protocols, we performed two different analysis. Firstly, we measured how fast a set of information can be disseminated within a given network configuration, assuming that nodes implement an epidemic forwarding algorithm. When a simulation starts, each node generates a message which contains the identification number of its creator. Every time a node joins a group, it sends all the messages contained in its own cache to each member of the group. Each node keeps in its own cache a copy of the received messages for the entire simulation. Every 30 minutes (simulation time) we measured the mean percentage of the messages contained in the nodes' caches. Then, we performed a complex network analysis on CG at the end of the simulation. Specifically, we evaluated the connection probability of each pair of nodes based on their total connection time, including also multi-hop paths, as an additional measure of the network connectivity. Then, we evaluated the partitioning degree of CG in terms of number of connected components in the graph, and the percentage of nodes in the largest one (Tab. <ref>). Both these measures highlight the degree of connectivity of the network depending on the nodes' mobility and the actions performed by the two protocols. As a final measure, we compared the amount of nodes' resources used by the two protocols.In Tab. 
<ref>, we report the mean value (μ), the median (𝐱̃), and the variance (σ^2) of the nodes' battery level at the end of each simulation scenario, assuming T_D = 30s (we obtained similar results for the other values of T_D). Since the Helsinki simulation is much longer than the other two scenarios (i.e., 24 hours), all the nodes discharged their batteries before the end of the simulations. Therefore, for this scenario we report the statistics of the (normalized) time needed to fully deplete the battery. We start by discussing the two opposite scenarios in terms of nodes' mobility and geographical distribution: Concert and Helsinki. In the first one, as detailed in Tab. <ref>, Baseline generated a highly fragmented CG with more than 90 connected components, the largest one containing only 2% of the nodes. As we can note from Fig. <ref>, message dissemination is very limited with the Baseline protocol due to the lack of nodes' mobility, which represents the only possibility of network reconfiguration for this protocol. In fact, in this case, groups are static until the GOs' resources are exhausted (which happens after the end of the simulation) and they are characterized only by 1-hop connections. As shown in Fig. <ref>, only 1% of the nodes are connected for the entire duration of the simulation, while all the others are not connected at all. Instead, with WFD-GM all the messages are disseminated in the network within the first 30 minutes of the simulation, mainly thanks to merge and travelling operations (Fig. <ref>). All the nodes have a non-null connection probability, even though their paths are characterized by a limited duration and, finally, the protocol provides a fully connected network over time (i.e., just one connected component with 100% of the nodes). On the other hand, in the Helsinki scenario, Baseline and WFD-GM show similar performance. Specifically, in terms of message dissemination (Fig. <ref>), the curves of the two protocols mostly overlap, reflecting the impact of the high nodes' mobility on the network reconfiguration and nodes' connectivity. In fact, in the first two hours, the percentage of messages exchanged by nodes grows rapidly because the mobility model assumes that each node encounters several others on the way to the offices. Then, message dissemination slows down and plateaus at approximately 80% for about 8 hours. After the working hours, the curves rise again because most of the nodes move towards some meeting point (e.g., shopping centers, restaurants, pubs), and this mobility supports the creation of new paths and connections with new nodes, thus exchanging new messages. Then, the curves become stable around 95%. This can be due to the fact that part of the nodes return to their home positions, where few new connections are established. Looking at Tab. <ref>, we can note that WFD-GM generates a less partitioned network than Baseline, even though both protocols create a large connected component including 99% of the nodes, and Fig. <ref> shows a high connectivity probability for most of the nodes in both solutions. However, WFD-GM performs better than Baseline when considering T_D = 60s. ComiCon represents an intermediate scenario between the other two, both in terms of mobility and geographical area, highlighting significant advantages of WFD-GM with respect to Baseline in all the measures. In terms of message dissemination (Fig.
<ref>) it behaves as in the static scenario, thanks to merge and travelling operations and, even if Baseline performs better than in Concert scenario, it is not able to compete with WFD-GM. Then, Fig. <ref> shows a total connectivity of the network for all T_D parameters, much higher than Baseline, and this is also confirmed by the number of connected components in Tab. <ref>.Therefore, we can summarize that WFD-GM improves the network connectivity and the message dissemination with respect to Baseline in scenarios characterized by medium and low mobility, and it performs similarly to Baseline in scenarios characterized by high mobility, since this represents a natural condition for WFD network reconfiguration. However, as shown in Tab. <ref>, it does not introduce additional overhead in terms of power consumption. In fact, it saves about 6% of the devices' battery in ComiCon scenario, and it consumes about the same resources of Baseline in Helsinki scenario. § CONCLUSIONS AND FUTURE WORK In this work we present WFD-GM, a novel middleware-layer protocol for the autonomous configuration and management of Wi-Fi Direct groups in opportunistic networks, relying on commercial mobile devices. WFD-GM is able to identify the best group configuration based on a context function that takes into account heterogeneous features of the devices (e.g., battery level, memory and CPU usage) and the current peers in proximity. In addition, it enables inter-group communication by implementing groups' merge operations and enabling client nodes to “travel” between groups in proximity. In this way, they can contribute to messages dissemination relying on the classical store-carry-and-forward paradigm. We validated WFD-GM through simulations of three realistic scenarios: a concert, a big convention, and a working day in an European city. We compared WFD-GM performances with a Baseline solution implementing a simplistic rule for the GO selection and not supporting additional network reconfiguration procedures. In addition, we performed a set of real experiments to estimate the context parameters involved in the simulations (i.e., power consumption on commercial devices for WFD operations and maximum number of incoming connections acceptable by a device acting as GO). We showed that WFD-GM improves the network connectivity reducing the number of partitions and supporting higher connectivity probabilities for each pair of nodes in all the scenarios. In addition, it improves messages dissemination with respect to Baseline in scenarios characterized by medium and low mobility, with a limited resource consumption. It performs similarly to Baseline in scenarios characterized by high mobility, maintaining always a limited resource consumption.We are currently implementing a prototype of WFD-GM on Android devices to extend the protocol evaluation in real environments and, concurrently, we are investigating learning methods to allow WFD-GM to self-adapt its parameters to environmental changes and heterogeneous devices' characteristics. IEEEtran
http://arxiv.org/abs/2307.00495v1
20230702065652
STG4Traffic: A Survey and Benchmark of Spatial-Temporal Graph Neural Networks for Traffic Prediction
[ "Xunlian Luo", "Chunjiang Zhu", "Detian Zhang", "Qing Li" ]
cs.LG
[ "cs.LG", "cs.AI" ]
1]Xunlian Luo xlluo@stu.suda.edu.cn 2]Chunjiang Zhu chunjiang.zhu@uncg.edu 1]Detian Zhang a detian@suda.edu.cn 3]Qing Li qing-prof.li@polyu.edu.hk [1] Institute of Artificial Intelligence, Department of Computer Science and Technology, Soochow University, Su Zhou, China. [2]Department of Computer Science, UNC Greensboro, NC, USA. [3]Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China. [a]Corresponding author. Traffic prediction has been an active research topic in the domain of spatial-temporal data mining. Accurate real-time traffic prediction is essential to improve the safety, stability, and versatility of smart city systems, i.e., traffic control and optimal routing. The complex and highly dynamic spatial-temporal dependencies make effective predictions still face many challenges. Recent studies have shown that spatial-temporal graph neural networks exhibit great potential applied to traffic prediction, which combines sequential models with graph convolutional networks to jointly model temporal and spatial correlations. However, a survey study of graph learning, spatial-temporal graph models for traffic, as well as a fair comparison of baseline models are pending and unavoidable issues. In this paper, we first provide a systematic review of graph learning strategies and commonly used graph convolution algorithms. Then we conduct a comprehensive analysis of the strengths and weaknesses of recently proposed spatial-temporal graph network models. Furthermore, we build a study called STG4Traffic using the deep learning framework PyTorch to establish a standardized and scalable benchmark on two types of traffic datasets. We can evaluate their performance by personalizing the model settings with uniform metrics. Finally, we point out some problems in the current study and discuss future directions. Source codes are available at https://github.com/trainingl/STG4Traffic. Traffic prediction Urban computing Graph neural networks Spatial-temporal data mining Benchmark § INTRODUCTION With the rapid development of the Internet of Things (IoT) and urban computing, the massive deployment of sensors provides a reliable source of data for intelligent transportation systems <cit.>. To alleviate the pressure of the growing population and vehicles in urbanization, research on data-driven traffic systems has become a hot topic in academia and industry. Traffic prediction, as a fundamental-level task of Intelligent Transportation System (ITS), supports a large number of upper-layer applications on the traffic scene, such as congestion warning, route planning and location services <cit.>. Traffic forecasting is achieved through the statistics, analysis and summary of historical traffic data to realize the judgment of future flow trends <cit.>. In traffic management and control systems, accurate traffic forecasting can help city managers perceive the health of the traffic road network in real-time, adopt timely solutions to optimize the traffic flow, and thus improve road traffic efficiency. In addition, online maps (e.g., Google Maps, Baidu Maps) can improve the quality of urban services by planning routes in advance for travelers and shortening travel time. As shown in Fig. <ref>, traffic variation exhibits intricate and multifaceted patterns of spatial and temporal interdependence. 
In terms of time, traffic volumes (such as, flow, speed, and demand) are affected by the living routine of urban residents and show significant periodicity, e.g., weekdays morning and evening peaks and weekend/holiday aggregated traffic flow <cit.>. The traffic at an observation point is closely related to the traffic state in the periods before and after, showing certain closeness and trend. In addition to the temporal properties, the intuitive traffic volume changes are also reflected in the information transmission between nodes in the traffic network. Unlike temporal correlations, the potential spatial relationships are diverse <cit.>, as illustrated in Fig. <ref>, where node 7 and node 10 are connected on the same road with essentially the same traffic patterns. Both node 7 and node 1 belong to the residential area and have significant semantic similarities despite their physical distance. Node 7 and node 4 have the same location function (Same POI, i.e., School, Bank), and even though they are not directly connected, they have the similar spatial pattern <cit.>. These complex and changing spatial-temporal properties make accurate traffic forecasting still challenging. Extensive research has been devoted to address the challenge of modeling spatial-temporal data. The earliest statistical models (e.g., VAR <cit.>, ARIMA <cit.>) are widely used for time series forecasting because of their simplicity and interpretability. However, these designs with restricted parameters are difficult to accomplish complex pattern recognition and the data cannot satisfy the assumption of stationary. Although machine learning methods (e.g., SVR <cit.>, FNN <cit.>) are often good at non-linear representation, the performance of the models heavily depends on feature engineering and expert experience. Data-driven deep learning techniques, especially temporal convolution <cit.>, recurrent neural network (and its variants LSTM <cit.>, GRU <cit.>) and Transformer <cit.>, have made breakthroughs in sequence tasks. However, they treat traffic data as independent signal streams <cit.>, ignoring or barely exploiting the spatial dependence information. One attempt is to divide the spatial region into same-size grids. ST-ResNet <cit.> implicitly represents the correlations between variables in fixed-size convolution kernels by using deep convolutional networks. However, due to the irregularity of roads, topological information inside the traffic network is inevitably missing using grid modeling. Inspired by graph neural networks modeling topology graphs, STGCN <cit.> first proposed to stack gated temporal convolution with graph convolution into Spatial Temporal Blocks to achieve spatial-temporal prediction. This practice demonstrates that embedding prior knowledge of the traffic graph is beneficial to greatly improve the predictive performance of the model. In later model design, extensive research efforts have integrated graph neural networks into sequential models (CNNs, RNNs, etc) to jointly model the potential temporal and spatial dependencies of traffic data <cit.>, and have achieved state-of-the-art performance. Some models such as DCRNN <cit.> integrates diffusion convolution into GRU to propose a multi-step prediction architecture that can capture bidirectional random walking graph signals. MTGNN <cit.> designed a structure that combines adaptive graph learning with dilation convolution to capture spatial-temporal correlation. 
GMAN <cit.> use spatial-temporal attention fusion to expand the perceptual domain of information and reduce the loss of long-term prediction. Over the years, although a considerable number of spatial-temporal graph neural network models have been proposed for traffic prediction, the existing literature lacks comprehensive surveys specifically focusing on graph learning and graph computing. While some studies <cit.> attempt to comprehensively and meticulously organize all the datasets and methods for traffic tasks, they are too broad and fall short in providing precise insight into the core issues and methods of spatial-temporal prediction modeling. Unfortunately, the absence of a fair and standardized benchmark is a significant drawback in the field. Existing benchmarks such as <cit.> suffer from either irregular experimental settings and limited scalability or exhibit inconsistent results compared to the original papers. Furthermore, the evaluation of these diverse models remains confusing and lacks proper organization. To address the above problems, a focused, well-understood, and inslightful survey will be of significance to the development of traffic prediction. In this paper, we highlight graph structure designs and graph computation methods used in spatial modeling, followed by a concrete survey of spatial-temporal graph neural networks. Then we propose a standardized and easily extensible benchmark to evaluate the performance of the different models. Lastly, we conclude with a prospective analysis of the difficulties and challenges in this study, and possible solutions to resolve them. In summary, the main contributions of our paper are provided as follows: * Firstly, we provide a comprehensive overview of graph structure design and graph computation methods employed in spatial-temporal graph modeling. This aspect has received less attention in previous surveys, making our discussion particularly valuable. * Secondly, we conduct an in-depth survey of spatial-temporal graph neural networks used for traffic prediction. We categorize these methods into three groups based on their temporal characteristics: CNN-Based, RNN-Based, and Attention-Based. Furthermore, we analyze the technical details and limitations of each specific model. * Thirdly, we introduce a benchmark named STG4Traffic, which facilitates a comprehensive evaluation of approximately 18 models on traffic speed datasets (METR-LA, PEMS-BAY) and traffic flow datasets (PEMSD4, PEMSD8). Our benchmark yields results that closely align with those reported in the original papers. It not only offers a common data access interface but also provides a unified model training pipeline for future studies in model design. * Lastly, we outline the challenges encountered in traffic modeling from the perspective of data quality, research perspectives, and migration methods. We aim to provide feasible approaches to overcoming the difficulties faced in this field. § PROBLEM STATEMENT The traffic network can be abstracted as a graph 𝒢 = (V, E, A). V is the set of N=|V| nodes, which represent different observation locations (e.g., traffic sensors, roadway monitoring stations) distributed in the road network. E is the set of edges and A ∈ℝ^N × N denotes the adjacency matrix depicting the relations between nodes, where each element represents the quantification of proximity from a certain insight, such as road connectivity, distance proximity, POI similarity, etc. 
The traffic data observed on 𝒢 at time t is denoted as the graph signal X_t ∈ℝ^N× D, and the signal of the i-th node is denoted as X_t^(i)∈ℝ^D, where D denotes the feature dimension of the node (e.g., traffic flow, speed, and density). Similarly, we can represent the signals of all nodes on 𝒢 over a time length T as a 3D feature tensor X ∈ℝ^T × N × D. Finally, the traffic prediction problem can be formalized as the mapping f: [X_t-P+1, ⋯, X_t-1, X_t; 𝒢] → [X_t+1, ⋯, X_t+Q]. This formula indicates that, given P time lengths of historical observations and the graph 𝒢, we predict the future traffic status for Q time lengths. The task aims to learn a non-linear function f(·) based on gradient descent of the error. The mathematical form of the optimization objective ℒ is defined as follows: Θ^* = min_Θ ℒ(f(A, X_t-P+1:t; Θ), X_t+1:t+Q), where Θ denotes the parameters to be optimized in the function. § GRAPH LEARNING AND COMPUTING Although graph neural networks (GNNs) have the advantage of aggregating node neighborhood contexts to generate spatial representations <cit.>, the performance of the task is closely related to the quality of the input graph structure and the computational method used for graph convolution. As shown in Fig. <ref>, the spatial relationships among nodes in traffic networks are complex and diverse. The complex spatial dependencies behind the traffic system cannot be explored by a single graph design and a simple equation <cit.>. Therefore, this section focuses on the following two issues: * Q1: How to design a reasonable graph structure? And how to mine underlying spatial relationships from the time-series data itself without prior knowledge? * Q2: How to perform efficient convolution computation on the existing graphs? §.§ Graph Structure Learning (Q1) Message passing in GNNs is based on local similarity <cit.>, where closer nodes exhibit more similar traffic patterns. Most of the spatial-temporal graphs used for traffic prediction use road connection distances or absolute distances of physical coordinates to calculate the weights of edges <cit.>. The former is typically a directed graph that reflects the objective distribution of the actual network, while the latter is an undirected graph that only measures the spatial distance between pairs of nodes. Distance-based Graph. The matrix A_D of the distance graph is defined using a thresholded Gaussian kernel <cit.> as follows: A_D, ij = exp(-d_i,j^2/σ^2) if i ≠ j and exp(-d_i,j^2/σ^2) ≥ϵ, and A_D, ij = 0 otherwise, where d_i,j is the measured distance between v_i and v_j. The threshold ϵ and the variance σ^2 are used to control the sparsity and distribution of the matrix A_D. Connectivity Graph, also called the Binary Graph <cit.>. Similarly, A_C is mathematically defined as: A_C, ij = 1 if v_i connects to v_j, and A_C, ij = 0 otherwise. Semantic Graph. We observe that certain nodes are geographically distant but tend to have the same or similar patterns of traffic variation (they may lie in the same type of area, such as a residential or commercial region). This suggests that node pairs also have significant semantic correlations. Generally, the Dynamic Time Warping (DTW) algorithm <cit.> is used to calculate the similarity in temporal patterns of historical observations. The semantic similarity matrix A_SE can be calculated according to the following equation: A_SE, ij = 1 if DTW(X^(i), X^(j)) ≥ϵ, and A_SE, ij = 0 otherwise, where X^(i) is the historical observation data of the i-th node. Functionality Graph. The POIs distribution surrounding nodes determines the usage of the district.
Studies <cit.> revealed that this composite spatial dependence and heterogeneity largely influence the trend of traffic. In practice, we characterize the region functionality by the category and number of nearby POIs <cit.>, and the formula that defines the functional proximity between node pairs is A_F, ij=sim(P_v_i, P_v_j) ∈ [0, 1]. Cosine similarity <cit.> is a typical method used to calculate the functional similarity matrix A_F with the following equation: A_F, ij= P_v_i· P_v_j^T/||P_v_i|||P_v_j|| where P_v_i is a vector encoding of POIs for node v_i and its dimensions label the number of categories of POIs. P_v_i[j] is calculated in some way to represent the density of POI categories j around node v_i. Distribution Graph. Some metrics (e.g., Pearson correlation coefficient) can be used to describe the differences in traffic trends between nodes. However, their results are susceptible to the effects of series length. When the series length is small, it is susceptible to noise interference, and when the length is set too large, the variability of the trends is reduced. In contrast, from a macroscopic perspective, it is possible to simultanously combat data noise and effectively measure the overall proximity of nodes by comparing the feature distribution of nodes. KL divergence <cit.> and JS divergence <cit.> are often used to evaluate the similarity of two probability distributions. Let P_i and P_j denote the observed values of two nodes. The distribution matrix A_J based on JS divergence (JSD) <cit.> can be formulated as follows: JSD(P_i||P_j)= 1/2KL(P_i||P_j) + 1/2KL(P_j||P_i) where KL(P_i||P_j) can be expressed as: KL(P_i||P_j)= ∑_x ∈ XP_i(x)logP_i(x)/P_j(x) The range of JSD is [0, 1], and smaller values indicate greater distribution similarity. Thus, we define A_J, ij=1-JSD(P_i||P_j). The aforementioned methods of constructing graphs either encode adjacency matrices using prior knowledge or construct similarity matrices based on statistical analysis. They significantly enhance the spatial-temporal awareness ability of the model in auxiliary space modeling, compensating for the information bias introduced by an individual graph. However, the connectivity relations of pre-defined graphs are often missing and biased. On the one hand, they rely on additional data sources and experience, and on the other hand, it is difficult to depict a spatial dependence panorama. This leads to an inability to extend to general spatial-temporal graph tasks. The Adaptive Graph is based on parameter representations of node embeddings <cit.> that are continuously updated during the training phase to reduce model errors. It identifies biases caused by human definitions and captures hidden spatial dependencies. The adoption of adaptive graphs has made remarkable progress in traffic prediction. Here we organize the frequently used constructive equations in STGNN models <cit.> into Table <ref>. These efforts allow the study of graph computation without having to rely on priori knowledge. Meanwhile, a continuous signal sampling-based graph learning and optimization strategy has been proposed <cit.>, as illustrated in Fig. <ref>. It first extracts spatial embedding for each node from the historical observation sequence or initializes embedding parameters directly. Then it computes a pairwise similarity matrix Θ using the dot product on the spatial embeddings. 
Finally, it uses the Gumbel softmax trick <cit.> to reparameterize the distribution of the probability graph and remove the noise information contained in redundant small values. The Sampled Graph is formulated as follows: A_ij = σ((log(θ_ij/(1-θ_ij)) + (g^1_ij - g^2_ij))/s), where Θ is a probability matrix and θ_ij∈Θ represents the probability of retaining the edge between v_i and v_j. Here g^1_ij, g^2_ij∼ Gumbel(0, 1), and s is a temperature hyperparameter. The sampled graph in the downstream task continuously adapts to the training data to optimize its structure parameters and learn a similarity matrix that minimizes the training error in an end-to-end way <cit.>. Additionally, a regularization term for the graph error is added to prevent the learned graph from deviating from the prior graph. Research on graph learning goes beyond this. Graphs, either learned or pre-defined, can be used as additional information to help models better extract spatial representations. But currently no golden measure of learned graph quality exists, other than prediction accuracy. §.§ Graph Computation Method (Q2) The essence of GNNs is to aggregate the features of the target node itself and its neighbors to generate high-level hidden representations <cit.>. The spectral method <cit.> uses Chebyshev polynomial approximate filters to achieve feature extraction of the graph signal. The formula is as follows: Θ⋆_𝒢 X = Θ(L) X = Θ(U Λ U^T) X = U Θ(Λ) U^T X, where the Graph Fourier basis U ∈ℝ^N × N is the matrix of eigenvectors of the normalized graph Laplacian L = I_N - D^-1/2AD^-1/2 = U Λ U^T. To balance performance and complexity, in practice, GCN (Graph Convolutional Network) <cit.> is most commonly used as the first-order approximation of ChebNet. Given the graph signal matrix X and the adjacency matrix A, the graph convolutional network can be simplified to the following equation: Z = (I_N + D^-1/2AD^-1/2)XW + b, where I_N is the identity matrix and D = diag(∑^N_j=0A_i,j) is the degree matrix. Spatial domain graph convolution is widely used in spatial-temporal graph networks to capture the spatial dependence of undirected graphs. Diffusion Graph Convolution. On the one hand, traffic dissemination is directional, while on the other hand, the impact of traffic may come from more distant nodes, making a simple GCN inadequate for complex scenarios. As a comparison, diffusion convolution on directed graphs can capture information up to k-order bi-directional neighbors, expanding the model's spatial receptive field. The equation <cit.> is expressed as follows: g_θ⋆_𝒢 X = ∑^K-1_k=0(θ_k,1(D_O^-1A)^k + θ_k,2(D_I^-1A^T)^k)X, where D_O and D_I are the out-degree and in-degree matrices, respectively, and θ_k,1, θ_k,2 are the learnable parameters. Multi-Hop Graph Convolution. A basic fact is that cascading GNNs leads to the over-smoothing of signals <cit.>, i.e., all nodes converge to the same value. Therefore, the graph convolutional network is typically set to two layers, but shallow networks are unable to capture the rich and deeper spatial features <cit.>. The residual connection can mitigate this issue. The computation of a uni-directional multi-hop graph convolution <cit.> can be expressed as Eq. (<ref>) and Eq. (<ref>): H^k+1 = ϕ(D^-1(A+I)H^kW), where H^0 = X, and H^k+1 = β H^0 + (1-β)H^k+1, where ϕ is an activation function and β is a hyperparameter that controls the proportion of the original state of the root node that is preserved.
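A minimal PyTorch sketch of this multi-hop propagation with an initial-state residual is given below. It is an illustration, not the released code of any particular model: the input projection (so that H^0 and the hidden states share a dimension) and the uniform averaging over hops are simplifying assumptions; the weighted and attention-based aggregation schemes are discussed next.

```python
import torch
import torch.nn as nn

class MultiHopGCN(nn.Module):
    """Sketch of H^{k+1} = phi(D^{-1}(A+I) H^k W) with residual beta*H^0 + (1-beta)*H^{k+1}."""
    def __init__(self, in_dim: int, hidden_dim: int, hops: int = 3, beta: float = 0.05):
        super().__init__()
        self.hops, self.beta = hops, beta
        self.input_proj = nn.Linear(in_dim, hidden_dim)  # assumption: project X so H^0 matches hidden size
        self.weights = nn.ModuleList([nn.Linear(hidden_dim, hidden_dim) for _ in range(hops)])

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features; adj: (N, N) adjacency matrix A.
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)   # A + I
        a_hat = a_hat / a_hat.sum(dim=1, keepdim=True)            # D^-1 (A + I)
        h0 = self.input_proj(x)                                   # H^0
        h, outputs = h0, [h0]
        for k in range(self.hops):
            h = torch.relu(a_hat @ self.weights[k](h))            # phi(D^-1 (A+I) H^k W)
            h = self.beta * h0 + (1.0 - self.beta) * h            # retain a fraction of the root state
            outputs.append(h)
        # Uniform mean over hops stands in for a learned weighting alpha^(k).
        return torch.stack(outputs, dim=0).mean(dim=0)

if __name__ == "__main__":
    x = torch.randn(207, 2)                                       # e.g., 207 sensors, 2 input features
    adj = (torch.rand(207, 207) > 0.9).float()
    print(MultiHopGCN(in_dim=2, hidden_dim=32)(x, adj).shape)     # torch.Size([207, 32])
```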
The messages from multi-hop nodes are aggregated as the output of the hidden layer using linear weighting or attention aggregation <cit.>, in addition to pooling approaches such as Max and Avg. Formally, H_out = ∑^K-1_k=0α^(k)H^k. The multi-hop graph convolutional network uses multiple layers of convolutions to effectively extract features of the layered local substructures of nodes. Graph Attention Network. The graph attention <cit.>, which dynamically calculates edge weights between nodes based on feature similarity, is more suitable for real-time changing traffic scenarios. And compared to GCN, GAT is more flexible. Let the feature vectors of v_i and v_j be h_i and h_j ∈ℝ^D , and 𝒩_i be the set of neighbors to v_i. The equation of the graph attention is as follows: e_ij=a(Wh_i, Wh_j), j ∈𝒩_i, α_ij=softmax(e_ij)=exp(LeakyReLU(e_ij))/∑_k ∈𝒩_iexp(LeakyReLU(e_ik)), where W ∈ℝ^D × F is a learnable linear matrix, and a: ℝ^F ×ℝ^F →ℝ maps the combined parameter matrix into a scalar. a_ij denotes the normalized attention score. To obtain more abundant representations, the K-head attention performs multiple transformations of independent subspaces before concatenating them to obtain the calculated result: h_i'=||^K_k=0σ(∑_j ∈ N_iα_ijW^kh_j). Formally, we formulate the above process in a unified equation: H^l=(Ã⊙ M)H^l-1W, where à = A + I_N, ⊙ is the element-wise product, M ∈ℝ^N × N denotes the dynamic attention matrix. H^l ∈ℝ^N × F is the l-th head graph attention layer output, and when l = 0, H^0=X. The exploration of deeper spatial features continues to be an ongoing area of research. Currently, the above graph computation methods have been widely adopted in modeling spatial dependencies for STGNNs. § SPATIAL-TEMPORAL GRAPH NEURAL NETWORKS Spatial-temporal graph neural networks (STGNNs) have gained popularity as a deep learning approach that integrates graph convolutional layers into sequence models. This methodology effectively captures the spatial and temporal characteristics of traffic signals. By considering the entire road network and modeling spatial information, STGNNs surpass the limitations of analyzing independent data streams separately and the prediction accuracy of the model is significantly improved. Through the rapid development in the past five years, a large amount of work has been accumulated to apply spatial-temporal graphs to traffic prediction. According to the modeling strategy of the temporal axis, it can be divided into three categories <cit.>, namely CNN-Based, RNN-Based, and Attention-Based. The representative STGNNs architectures are shown in Fig. <ref>. The CNN-Based methods (STGCN, Graph WaveNet, MTGNN, etc.) employ 1D CNNs (TCNs) in tandem with graph convolutional layers to construct ST-Blocks and then learn asynchronous spatial-temporal patterns through the cascading ST-Blocks. 1D CNNs capture more long-range temporal features by stacking convolutional layers or adding dilation factors, thus enjoying the advantages of good computational efficiency and gradient stabilization. However, the implicit temporal connections represented by fixed convolution kernels deprive them of some flexibility and, more importantly, fail to capture the synchronized information in spatial-temporal signals. In contrast, RNNs (e.g., LSTM, GRU) are powerful in modeling sequence dependencies. Many RNN-Based methods (DCRNN, MRA-BGCN, AGCRN, etc.) 
extend the fully-connected operation in RNNs using GCNs so that they use graph convolution to capture local spatial dependencies at both input-to-state and state transitions. This design approach associates each time step with graph convolution, enabling the learning of spatial-temporal signals that undergo synchronous changes. However, it suffers from gradient instability and being very time-consuming in training and inference. In particular, RNNs-based encoder-decoders are difficult to compute in parallel and are sensitive to error propagation in multi-step prediction. The forgetten gating mechanism in RNNs also prevents them from capturing long-term time dependence. In recent studies, “GRU-GCN" is one of the most used spatial-temporal graph modeling frameworks. To overcome the challenges of high run-time and error accumulation, several works have proposed “curriculum learning" to optimize the training phase of the model. Curriculum learning <cit.> argues that it is not necessary to calculate the error and backpropagation for all time steps early in the training but to gradually increase the prediction length of the model as the number of iterations increases, i.e., in a progressive convergence way. This strategy for the encoder-decoder architecture substantially reduces the training time consumption and alleviates the pressure in terms of efficiency and resource occupation. The Attention-Based methods (e.g., GMAN, ASTGCN, DSTAGNN <cit.>) take a more flexible manner and calculate the global attention of signals in time and space, aggregated in parallel or in series. Temporal and spatial attentions (including Transformer) can effectively improve the global perception of the model and capture the dynamically evolving spatial-temporal correlation. It is a compromise between the first two methods in terms of computational efficiency. But it is a fact that attention-based approaches tend to be less effective in short-term and local spatial-temporal modeling. Just like the No-Free-Lunch theorem <cit.> in machine learning, using STGNNs to model the spatial-temporal correlation of traffic scenes requires selecting appropriate components according to specific problems and conditions. Here we summarize some representative research works, as shown in Table <ref>. Some of these methods had reached the state-of-the-art in prediction tasks. STGCN <cit.> first combines gated temporal convolution with ChebNet operator for spatial-temporal prediction and achieves better performance than traditional time series models in all metrics. DCRNN <cit.> extends the diffusion graph convolution to a recurrent neural network of encoder-decoder to solve the directional problem of asymmetric traffic graph propagation. Graph WaveNet <cit.> innovatively proposes an adaptive graph dissipation of biases caused by human-defined spatial relations, and WaveNet based on causal convolution is used to learn temporal relations. Inspired by ST-ResNet <cit.>, ASTGCN <cit.> proposes a set of temporal components called Clonesss, Period and Trend. Structurally, it utilizes a skip connection to connect the spatial-temporal attention layer to the convolutional layer to form model branchs, and finally fuses the three components together for prediction. These early studies of spatio-temporal graph neural networks significantly outperform traditional statistical methods and machine learning models in terms of predictive performance. The complexity of traffic scenarios has also given rise to some novel research perspectives. 
STSGCN <cit.> argues that spatial-temporal dependence often affects traffic volumes not individually but synergistically. It proposes a unique local spatial-temporal graph for capturing spatial-temporal heterogeneity and models synchronous spatial-temporal relationships through multiple graph convolutional layers. GMAN <cit.> proposes gated spatial-temporal attention fusion to capture dynamic nonlinear spatial-temporal correlations and establish the temporal connections between historical and future time steps based on transforming attention. STFGNN <cit.> combines a novel fusion operation to learn hidden dependencies from spatial and temporal graphs, and handles long sequences by stacking fusion graphs and gated convolutional modules. However, the accuracy of these methods heavily relies on pre-defined graph designs, and the number of layers of graph convolution is superficial. Adaptive graph convolutional recurrent network (AGCRN) <cit.> designs two types of adaptive modules based on parameter decomposition. Firstly, the node adaptation module decomposes the shared weights and biases to generate node-specific parameters to capture node-specific patterns. Secondly, the data-adaptive graph generation module automatically infers the interdependencies between different traffic series. STGNN <cit.> first combines the improved GAT and GRU, and then captures different scales of temporal patterns by concatenating a Transformer neural network architecture. This approach can effectively learn the dependencies among spatio-temporal data and provides a design framework that can be readily adapted. ASTGNN <cit.> adopts a Transformer-like spatio-temporal encoding-decoding architecture. It first extends the computation of query, key, and value matrices using temporal convolutions, proposes a trend-aware attention layer, and then replaces the feedforward network layer with GAT. The method has been proven effective on multiple traffic datasets, but the complex parameter training brings heavy system strain. DGCRN <cit.> found the fact that the connectivity of nodes is not immutable, but dynamically evolves with time periods. It proposes using a hyper-network to generate a dynamic adjacency matrix before each step of the RNN to accommodate the dynamic changes in the road network. Additionally, the generated dynamic matrix is merged with the original road network matrix to capture more spatial information. DMSTGCN <cit.> designs a dynamic graph constructor and dynamic graph convolution method to propagate node hidden states based on dynamic spatial relationships. It also provides a multi-aspect fusion module to merge auxiliary hidden states and primary hidden states in both time and space. They have made a lot of efforts in graph design and graph computing. Spatio-Temporal Multi-Graph Convolution Network (ST-MGCN) <cit.> and Temporal Multi-Graph Convolutional Network (T-MGCN) <cit.> devise multiple attribute graphs to assist in enhanced spatial modeling and mine spatial information from multiple insights. But this means that more expert knowledge is required. STGODE <cit.> adopts ODE to handle multipe layer GCN over-smoothing problem by expressing residual-connected GCNs as continuous GCNs. Then it adopts the dual branching of TCN and CGCN to solve the spatial-temporal prediction problem. Similarly, STG-NCDE <cit.> employs neural control differential equations to tackle the knowledge of temporal and spatial dimensions separately. 
The neural differential equation restores the temporal continuity that is lost due to interval sampling. Unlike previous work, STEP <cit.> is motivated by unsupervised learning and proposes a time-series pre-training strategy to enhance the graph structural design of STGNNs with promising results. According to our survey, the studies of spatial-temporal graphs focus on designing graph structures and combining spatial-temporal components. Meanwhile, other related techniques (unsupervised pre-training, generative adversarial networks, graph contrastive learning, reinforcement learning, etc.) are widely applied to traffic prediction <cit.>, greatly expanding the technical basis of spatial-temporal data mining. § BENCHMARK AND EVALUATION In this section, we first provide an overview of the datasets and models, the experimental setup, and the evaluation results used in the benchmark. Then we analyze the performance and efficiency of some models through the visualization of charts. Finally, we provide a brief introduction to the benchmark interface and its extended usage. §.§ Benchmark Implementation In view of the heterogeneity of the traffic data, we selected two kinds of datasets, traffic speed (METR-LA, PEMS-BAY) <cit.> and traffic flow (PEMSD4, PEMSD8) <cit.>, for building the benchmark. They are both sampled at 5-minute intervals, and the detailed statistical information of the data is shown in Table <ref>. At the same time, we selected some representative spatial-temporal graph models on these datasets for comparative studies. Our experiments are conducted on a GPU server with eight GeForce GTX 1080Ti graphics cards, using the unified deep learning framework PyTorch 1.8.0. The raw data are standardized using Z-Score normalization <cit.>. To maintain consistency with previous studies, we divide the speed data into training, validation, and test sets in the ratio of 7:1:2, and the flow data in the ratio of 6:2:2. Training stops early if the validation error has converged (i.e., shows no improvement for 15-20 consecutive epochs) or after at most 100 epochs, and the best model on the validation data is saved <cit.>. In the multi-step prediction task, we set both P and Q of the problem definition (<ref>) to 12. For the specific settings, including the optimizer, learning rate, loss function, and model parameters, we follow the original papers on the one hand and conduct several parameter tuning efforts to obtain better experimental results on the other hand. In our experiments, we evaluate the model results using the Mask-Based Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE) as metrics, where the zero values (i.e., noisy data) are ignored <cit.>. Their defining equations are as follows: MAE = 1/M∑^M_i=1|Y_i - Ŷ_i|, RMSE = √(1/M∑^M_i=1(Y_i - Ŷ_i)^2), MAPE = 100%/M∑^M_i=1|(Y_i - Ŷ_i)/Y_i|, where M is the number of values to predict, Ŷ_i is the prediction result, and Y_i is the ground truth. The smaller their values, the better the performance of the method. Table <ref> reports the multi-step prediction results of STGNNs for 15 minutes, 30 minutes, and 1 hour on the traffic speed datasets. Table <ref> records the average performance of the models across all time steps on the traffic flow datasets. It can be noticed that all our reproduced results are very close to the results reported in the original papers.
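The mask-based metrics defined above can be implemented in a few lines; the sketch below is an illustration (the function name, null value, and dummy data shapes are assumptions, not the benchmark's actual code).

```python
import numpy as np

def masked_metrics(y_pred: np.ndarray, y_true: np.ndarray, null_val: float = 0.0):
    """Mask-based MAE / RMSE / MAPE: entries whose ground truth equals null_val are ignored."""
    mask = ~np.isclose(y_true, null_val)
    y_pred, y_true = y_pred[mask], y_true[mask]
    err = y_pred - y_true
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    mape = 100.0 * np.abs(err / y_true).mean()
    return mae, rmse, mape

if __name__ == "__main__":
    # Dummy data shaped (samples, horizon Q = 12, nodes N = 207).
    rng = np.random.default_rng(0)
    y_true = rng.uniform(20, 70, size=(32, 12, 207))
    y_true[rng.random(y_true.shape) < 0.05] = 0.0      # simulate missing/zero readings
    y_pred = y_true + rng.normal(0, 3, size=y_true.shape)
    print(masked_metrics(y_pred, y_true))
```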
We can draw the following conclusions from the abundant data information: (1) In both METR-LA and PEMS-BAY, DGCRN achieves state-of-the-art performance in almost all horizons except a few metrics. The overall prediction accuracy of GWNET and MTGNN are similar. Both of them use an adaptive graph learning strategy, which is superior to some graph methods based on distance representation. We can also observe that GMAN performs worse at short-term horizons and better at longer times (Horizon 12), suggesting that spatial-temporal attention helps to improve long-range predictions. (2) DAAGCN <cit.> is a dynamic spatial-temporal recurrent network combined with GAN, which achieves the best performance on both PEMSD4 and PEMSD8. MTGNN still performs well on traffic flow data, indicating its robustness and versatility across both speed and flow datasets. (3) In our experiments, we find that AGCRN not only achieves higher prediction accuracy compared to ASTGCN, STSGCN, and STGODE, but also occupies very few system resources in training stage. RGSL is an improvement on AGCRN, which proposed to fuse explicit and implicit graphs to effectively model node pair dependencies. STG-NCDE employs nonlinear differential equations to describe the continuous dynamic evolution of node features in both time and space. Despite its superior performance compared to the majority of methods, the complex differential operators increase both the training and testing time of the model. To observe more intuitively the differences in the predictions of the different methods at each prediction time step, we also plot the Fig. <ref>. In which, we can clearly analyze the difference in the errors of these methods in the short and long term prediction. §.§ Case Study We plot the time-series curve of ground truth versus model prediction for a random selection of road network sensors from the four datasets for a given day as in Fig. <ref>. For better presentation, a limited number of models are selected for comparison across the datasets, rather than all models listed in Table <ref>. We observe that: (1) Node 36 in METR-LA has a sudden decrease in the speed of vehicle traffic during the 15-18h segment, which indicates that the highway is jammed. Node 53 in PEMS-BAY shows successive traffic congestion during the hours of 10-12 and 18-21. (2) All GMAN, GWNET, and DGCRN have the capability of capturing stable temporal patterns in the data and can fit the traffic trends in non-congested periods better. This demonstrates the excellent performance of STGNNs in capturing temporal and spatial dependencies. (3) Under some traffic congestion conditions, they learn the valley and peak trends. The fitting effect of DGCRN is more prominent compared to GMAN and GWNET, because it seem to be more sensitive to complex traffic state changes. However, it is worth noting that their forecasts all have some measure of lag. Similarly, we selected DAAGCN, AGCRN and STG-NCDE for comparison in PEMSD4 and PEMSD8. The state of the flow data to change over time is not as smooth as the speed, and there are many bumps on the surface of the curve, which makes traffic flow prediction more challenging. Node 85 in PEMSD8 is the peak of traffic during the 5-8h time frame. DAAGCN is capable of learning the shifting spatial-temporal patterns, while AGCRN and STG-NCDE clearly deviate from the ground truth of traffic. In addition, in PEMSD4, Node 111 shows a sharp decrease in traffic between 11-12h, which implies a blocked road. 
Although all of them capture this state of jumping, DAAGCN's pattern-matching ability is more robust due to its prediction curve is closer to the trend of the actual value. Our aim is to pursue a model with low complexity and high generalization ability. It is essential to study the resource consumption and efficiency of STGNNs. The bar chart in Fig. <ref> demonstrates the number of parameters for different models on different datasets versus the training time per epoch. The larger the dataset, the more training time and system resources the model will take. Comparing on the same dataset, we identify that the training time of the model is not strictly correlated with the number of parameters, but also with the model architecture and the operators. For instance, GWNET and DCRNN have comparable model parameters, but the encoding and decoding in DCRNN require significantly more training time. Although DGCRN has the same architecture as DCRNN, the model efficiency is significantly improved by using a curriculum learning strategy. Generally, methods using graph learning (e.g., AGCRN, RGSL) and STSGCN require more training time in large datasets with more nodes. Because it means that these adaptive graph learning methods need to initialize node embeddings of larger size to implicitly model spatial relationships. Additionally, we observe from the Fig. <ref> that STG-NCDE has fewer model parameters in PEMSD4 compared to the PEMSD8 but exhibits comparable training times on both datasets. This indicates that the model's parameter quantity and training time are also dependent on specific hyperparameters. §.§ Usability And Practicality With the growing number of spatial-temporal traffic prediction models, we develop a training pipeline called STG4Traffic to provide a convenient and standardized, scalable project architecture for subsequent research. It is organized as presented in Fig. <ref>. below. STG4Traffic contains two sub-tasks for traffic speed and traffic flow prediction. These two subtasks provide a common standardized data interface and method interface for model design: DATA directory stores raw traffic data resources and pre-processed data files; LIB is a toolkit that designs callable data loading methods, evaluation methods, and some basic methods; LOG stores the project run logs and saves the final training model; MODEL is a model design file, which realizes the decoupling of model, data and training process. Researchers can extend the data and perform self-defined model designs according to their demands. Taking the DCRNN design as an example, we first complete the model definition in MODEL and then create the DCRNN directory in the subtask directory, where we create: DCRNN_Config.py, DCRNN_Utils.py [optional], DCRNN_Trainer.py, DCRNN_Main.py, and DATANAME_DCRNN.conf [multiple]. Among them, DATANAME_DCRNN.conf is the configuration of model parameters and experimental setup, which can vary based on the dataset being used.; DCRNN_Config.py is responsible for reading the preset configuration file; DCRNN_Utils.py defines additional methods that are not available in the public interface; DCRNN_Trainer.py serves as the trainer for the model, handling the training, validation, and testing processes; DCRNN_Main.py acts as the entry point for the project, managing tasks such as data loading, model and parameter setting, and more. To enhance reusability, we have developed 18 models within the STG4Traffic framework. The source code of this open-source project is available on GitHub for access. 
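To make the shared-interface idea concrete, the snippet below sketches the kind of utility a LIB-style module typically exposes: a Z-score scaler matching the standardization described in the experimental setup. The class and method names are assumptions for illustration and do not reproduce the actual STG4Traffic code.

```python
import numpy as np

class StandardScaler:
    """Z-score normalization: fit on the training split only, reuse for val/test,
    and invert predictions before computing the masked metrics."""
    def __init__(self):
        self.mean, self.std = 0.0, 1.0

    def fit(self, x: np.ndarray) -> "StandardScaler":
        self.mean, self.std = float(x.mean()), float(x.std())
        return self

    def transform(self, x: np.ndarray) -> np.ndarray:
        return (x - self.mean) / self.std

    def inverse_transform(self, x: np.ndarray) -> np.ndarray:
        return x * self.std + self.mean

if __name__ == "__main__":
    data = np.random.rand(100, 12, 207) * 70           # (samples, steps, nodes)
    n_train = int(0.7 * len(data))                      # 7:1:2 split for the speed data
    scaler = StandardScaler().fit(data[:n_train])
    restored = scaler.inverse_transform(scaler.transform(data))
    assert np.allclose(restored, data)
```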
§ DISCUSSIONS ON FUTURE DIRECTIONS Over the years, significant advancements have been achieved in traffic forecasting using STGNNs. However, there are still several challenges that require further attention and resolution. In this section, we highlight some future research directions for addressing the challenges. Data Quality: Data collected from sensors often suffer from noise or contain a certain percentage of missing data (zero-value padding during preprocessing) <cit.>. While we address this issue carefully during test evaluation, the training process is highly susceptible to being influenced by these anomaly samples. Additionally, the narrow time frame of data sampling, represented by parameters P=Q=12, poses challenges in capturing the periodicity of long sequences on one hand, and makes learning similarity graphs of node pairs unreliable on the other <cit.>. Moreover, the datasets collected in the study are limited, lacking the introduction of enriched external information or metadata such as weather, events, dates, etc. <cit.> and the fusion of these heterogeneous data presents a persisting challenge. Some studies have explored effective approaches to addressing data quality concerns in traffic forecasting. For instance, graph contrastive learning methods <cit.> have shown promise in combating training noise and capturing robust spatial-temporal features. Time Series Pre-training techniques <cit.> have also been found effective for long time series data with periodicity. In the ST-MetaNet model <cit.>, meta-knowledge is leveraged to parameterize the model, while considering heterogeneous information in the road network to model spatial-temporal dependencies and achieve improved feedback. These studies provide valuable insights and serve as strong references for addressing concerns related to data quality. Research Perspectives: Research on traffic forecasting mainly revolves around the modeling of temporal and spatial correlations. (1) Temporal Heterogeneity: Existing approaches that model temporal correlation for all nodes often adopt a shared parameter space, ignoring the temporal heterogeneity among different nodes. Spatial-Temporal Self-Supervised Learning (ST-SSL) <cit.> proposes a novel paradigm that enhances the representation of traffic patterns to capture both spatial and temporal heterogeneity by augmenting traffic-level and topology-level data. (2) Dynamic Graph: Although adaptive graphs can compensate for the knowledge bias caused by pre-defined graphs, they can only represent fixed node relations after training and cannot be dynamically adjusted with the data characteristics. Some work as DGCRN <cit.> and D2STGCN <cit.>, among others, take into account the basic fact that the location dependence of a road network changes dynamically with time. They propose a time-driven dynamic connectivity graph that infers instantaneous connection patterns based on the current traffic state. Although some related work exists, the design of dynamic graphs remains a significant technical challenge. (3) Long-Range Dependence: The attention mechanism is flexible in global spatio-temporal modeling, especially capturing long-range temporal dependencies. The calculation of attention scores in existing methods only considers the proximity of discrete values, neglecting the trend similarity of the time series segment <cit.>. ASTGNN <cit.> introduces trend-aware attention to better analyze long-term patterns of sequences. 
Furthermore, sequence pattern decomposition may also be a potential solution in the future <cit.>. Long-term dependence information is inferred from three structural components of time series trends, periods and residuals to improve prediction performance. Migration Methods: In the realm of STGNNs, the integration of various methods from different fields has shown potential in enhancing model performance. Generative adversarial networks <cit.> combat training generators (STGNNs) and discriminators by comparing the proximity of the predicted information to make the predictions converge to the ground truth as much as possible. Knowledge graphs <cit.> have also emerged as a valuable tool, establishing relationships among different transportation entities and concepts. They enable the consolidation and sharing of knowledge from diverse fields, facilitating comprehensive analysis and prediction of the entire traffic system. Recent studies such as AutoSTG <cit.> and AutoCTS <cit.> propose the use of automated machine learning to streamline the construction and optimization of traffic prediction models. These approaches efficiently build STGNNs that meet performance requirements while emphasizing accuracy. Additionally, transfer learning <cit.> has demonstrated its usefulness by leveraging models trained in other cities or regions with similar traffic patterns to initialize new models. This approach accelerates the training process and alleviates challenges arising from limited data availability. § CONCLUSION In this paper, we first present a systematic survey of graph design and graph computation techniques for traffic prediction. Then we provide a detailed introduction to the key modeling components, technical details, and well-established methods of spatial-temporal graph neural networks. In order to establish a standardized benchmark, we introduce STG4Traffic, providing a thorough overview of the performance and efficiency of various methods within this framework. Finally, we conduct an analysis of the challenges encountered in this study and summarize potential solutions for future investigations. We hope that this research make a positive and impactful contribution to the field of spatial-temporal prediction. § ACKNOWLEDGE Chunjiang Zhu is supported by UNCG Start-up Funds and Faculty First Award. Detian Zhang is partially supported by the Collaborative Innovation Center of Novel Software Technology and Industrialization, the Priority Academic Program Development of Jiangsu Higher Education Institutions. elsarticle-num
http://arxiv.org/abs/2307.01426v1
20230704013441
DeepfakeBench: A Comprehensive Benchmark of Deepfake Detection
[ "Zhiyuan Yan", "Yong Zhang", "Xinhang Yuan", "Siwei Lyu", "Baoyuan Wu" ]
cs.CV
[ "cs.CV" ]
A critical yet frequently overlooked challenge in the field of deepfake detection is the lack of a standardized, unified, comprehensive benchmark. This issue leads to unfair performance comparisons and potentially misleading results. Specifically, there is a lack of uniformity in data processing pipelines, resulting in inconsistent data inputs for detection models. Additionally, there are noticeable differences in experimental settings, and evaluation strategies and metrics lack standardization. To fill this gap, we present the first comprehensive benchmark for deepfake detection, called DeepfakeBench, which offers three key contributions: 1) a unified data management system to ensure consistent input across all detectors, 2) an integrated framework for state-of-the-art methods implementation, and 3) standardized evaluation metrics and protocols to promote transparency and reproducibility. Featuring an extensible, modular-based codebase, DeepfakeBench contains 15 state-of-the-art detection methods, 9 deepfake datasets, a series of deepfake detection evaluation protocols and analysis tools, as well as comprehensive evaluations. Moreover, we provide new insights based on extensive analysis of these evaluations from various perspectives (e.g., data augmentations, backbones). We hope that our efforts could facilitate future research and foster innovation in this increasingly critical domain. All codes, evaluations, and analyses of our benchmark are publicly available at <https://github.com/SCLBD/DeepfakeBench>. § INTRODUCTION Deepfake, widely recognized for its facial manipulation, has gained prominence as a technology capable of fabricating videos through the seamless superimposition of images. The surging popularity of deepfake technology in recent years can be attributed to its diverse applications, extending from entertainment and marketing to more complex usages. However, the proliferation of deepfake is not without risks. The same tools that enable creativity and innovation can be manipulated for malicious intent, undermining privacy, promoting misinformation, or eroding trust in digital media. Responding to the risks posed by deepfake contents, numerous deepfake detection methods <cit.> have been developed to distinguish deepfake contents from real contents, which are generally categorized into three types: naive detector, spatial-based detector, and frequency-based detector. Despite rapid advancements in deepfake detection technologies, a significant challenge remains due to the lack of a standardized, unified, and comprehensive benchmark for a fair comparison among different detectors. This issue causes three major obstacles to the development of the deepfake detection field. First, there is a remarkable inconsistency in the training configurations and evaluation standards utilized in the field. This discrepancy inevitably leads to divergent outcomes, making a fair comparison difficult. Second, the source codes of many methods are not publicly released, which could be detrimental to the reproducibility and comparability of their reported results. Third, we find that the detection performance can be significantly influenced by several seemingly inconspicuous factors, e.g., the number of selected frames in a video.
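As a simple illustration of one such factor, the helper below uniformly samples a fixed number of frames from a video with OpenCV; it is an example of the kind of preprocessing choice a benchmark must standardize, not DeepfakeBench's actual code, and the function name and default frame count are assumptions.

```python
import cv2
import numpy as np

def sample_frames(video_path: str, num_frames: int = 32) -> list:
    """Uniformly sample `num_frames` frames (RGB) from a video file."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, max(total - 1, 0), num_frames).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames

# Example usage (hypothetical path): frames = sample_frames("example.mp4", num_frames=32)
```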
Since the settings of these factors are not uniform and their impacts are not thoroughly studied in most existing works, the reported results and corresponding claims may be biased or misleading. To bridge this gap, we build the first comprehensive benchmark, called DeepfakeBench, offering a unified platform for deepfake detection. Our main contributions are threefold. 1) An extensible modular-based codebase: Our codebase consists of three main modules. The data processing module provides a unified data management system to guarantee consistency across all detection inputs, thereby alleviating time-consuming data processing and evaluation. The training module provides a modular framework to implement state-of-the-art detection algorithms, facilitating direct comparisons among different detection algorithms. The evaluation and analysis module provides several widely adopted evaluation metrics and rich analysis tools to facilitate further evaluations and analysis. 2) Comprehensive evaluations: We evaluate 15 state-of-the-art detectors with 9 deepfake datasets under a wide range of evaluation settings, providing a holistic performance evaluation of each detector. Moreover, we establish a unified evaluation protocol that enhances the transparency and reproducibility of performance evaluation. 3) Extensive analysis and new insights: We provide extensive analysis from various perspectives, not only analyzing the effects of existing algorithms but also uncovering new insights to inspire new technologies. In summary, we believe DeepfakeBench could constitute a substantial step towards calibrating the current progress in the deepfake detection field and promoting more innovative explorations in the future. § RELATED WORK Deepfake Generation Deepfake technology, which generally centers on the artificial manipulation of facial imagery, has made considerable strides from its rudimentary roots. Starting in 2017, learning-based manipulation techniques have made significant advancements, with two prominent methods gaining considerable attention: Face-Swapping and Face-Reenactment. 1) Face-swapping constitutes a significant category of deepfake generation. These techniques typically involve either autoencoder-based or GAN-based manipulations. The former utilizes a distinct autoencoder for each face pair to be swapped, resulting in manipulated videos or images. Notable instances of this approach include UADFV <cit.>, Deepfakes <cit.>, CelebDF <cit.>, and DeeperForensics-1.0 <cit.>. ForgeryNet <cit.>, which relies on DeepFakes, is another exemplification of an autoencoder-based face-swapping technique. On the other hand, GAN-based manipulations involve adversarial training, with the goal of forging the source face or specific facial attributes. 2) Face-reenactment is characterized by graphics-based manipulation techniques that modify source faces by imitating the actions or expressions of a different face. NeuralTextures <cit.> and Face2Face <cit.>, utilized in FaceForensics++, stand out as standard face-reenactment methods. Deepfake Detection Current deepfake detection can be broadly divided into three categories: naive detector, spatial detector, and frequency detector. 1) Naive detector employs CNNs to directly distinguish deepfake content from authentic data. Numerous CNN-based binary classifiers have been proposed, e.g., MesoNet <cit.> and Xception <cit.>.
2) Spatial detector delves deeper into specific representation such as forgery region location <cit.>, capsule network <cit.>, disentanglement learning <cit.>, image reconstruction <cit.>, erasing technology <cit.>, . Besides, some other methods specifically focus on the detection of blending artifacts <cit.>, generating forged images during training in a self-supervised manner to boost detector generalization. 3) Frequency detector addresses this limitation by focusing on the frequency domain for forgery detection <cit.>. SPSL <cit.> and SRM <cit.> are other examples of frequency detectors that utilize phase spectrum analysis and high-frequency noises, respectively. Qian et al. <cit.> propose the use of learnable filters for adaptive mining of frequency forgery clues using frequency-aware image decomposition. Related Deepfake Surveys and Benchmarks The growing implications of deepfake technology have sparked extensive research, resulting in the establishment of several surveys and dataset benchmarks in the field. 1) Surveys provide a detailed examination of various facets of deepfake technology. For instance, Westerlund et al. <cit.> present a thorough analysis of deepfake, emphasizing its legal and ethical dimensions. Tolosana et al. <cit.> furnish a comprehensive review of face manipulation techniques, including deepfake methods, along with approaches to detect such manipulations. 2) Benchmarks in this field have emerged as essential tools to provide realistic forgery datasets. For instance, FaceForensics++ (FF++) <cit.> serves as a prominent benchmark, offering high-quality manipulated videos and a variety of forgery types. The Deepfake Detection Challenge Dataset (DFDC) <cit.> introduces a diverse range of actors across different scenarios. While these benchmarking methodologies have made significant contributions, they specifically focus on their own datasets, without offering a standardized way to handle data across different datasets, which may lead to inconsistencies and obstacles to fair comparisons. Also, the lack of a unified framework in some benchmarks could lead to variations in training strategies, settings, and augmentations, which may result in discrepancies in the outcomes. Furthermore, the provision of comprehensive analytical tools is not always prominent, which might restrict the depth of analysis on the potential impacts of different factors. DeepfakeBench, on the other hand, presents a concise but comprehensive benchmark. Its contributions are threefold: introducing a unified data management system for consistency, offering an integrated framework for implementing advanced methods, and analyzing the related factors with a series of analysis tools. § OUR BENCHMARK §.§ Datasets and Detectors Datasets Our benchmark currently incorporates a collection of 9 widely recognized and extensively used datasets in the realm of deepfake detection: FaceForensics++ (FF++) <cit.>, CelebDF-v1 (CDFv1) <cit.>, CelebDF-v2 (CDFv2) <cit.>, DeepFakeDetection (DFD) <cit.>, DeepFake Detection Challenge Preview (DFDC-P) <cit.>, DeepFake Detection Challenge (DFDC) <cit.>, UADFV <cit.>, FaceShifter (Fsh) <cit.>, and DeeperForensics-1.0 (DF-1.0) <cit.>. Notably, FF++ contains 4 types of manipulation methods: Deepfakes (FF-DF) <cit.>, Face2Face (FF-F2F) <cit.>, FaceSwap (FF-FS) <cit.>, NeuralTextures (FF-NT) <cit.>. There are three versions of FF++ in terms of compression level, , raw, lightly compressed (c23), and heavily compressed (c40). 
The detailed descriptions of each dataset can be seen in the supplementary. Typically, FF++ is employed for model training, while the rest are frequently used as testing data. However, our benchmark allows users to select their combinations of training and testing data, thus encouraging custom experimentation. It is notable that, although these datasets have been widely used in the community, they are not usually provided in a readily accessible and combined format. It often requires a substantial investment of time and effort in data sourcing, pre-processing (, frame extraction, face cropping, and face alignment), and organization of the raw datasets, which are often organized in diverse structures. This considerable data preparation overhead often diverts researchers' attention away from the core tasks like methodology design and experimental evaluations. To tackle this challenge, our benchmark offers a collection of well-processed and systematically organized datasets, allowing researchers to devote more time to the core tasks. Additionally, our benchmark enriches some datasets (, FF++ <cit.> and DFD <cit.>), by including mask data (, the forgery region) that is aligned with the respective facial images in these datasets. It could facilitate more comprehensive deepfake detection studies. In summary, our benchmark provides a unified, user-friendly, and diversified data resource for the deepfake detection community. It eliminates the cumbersome task of data preparation and allows researchers to concentrate more on innovating effective deepfake detection methods. Detectors Our benchmark has implemented a total of 15 established deepfake detection algorithms, as detailed in Tab. <ref>. The selection of these algorithms is guided by three criteria. First, we prioritize methods that hold a classic status (, Xception), or those considered advanced, typically published in recent top-tier conferences or journals in computer vision or machine learning. Second, our benchmark classifies detectors into three categories: naive detectors, spatial detectors, and frequency detectors. Our primary emphasis is on image forgery detection, hence, temporal-based detectors have not yet been incorporated. Moreover, we have refrained from including traditional detectors (, Headpose <cit.>) due to their limited scalability to large-scale datasets, making them less suitable for our benchmark's objectives. Third, we aim to include methods that are straightforward to implement and reproduce. We notice that several existing methods involve a series of steps, some of which are reliant on third-party algorithms or heuristic strategies. These methods usually have numerous hyper-parameters and are fraught with uncertainty, making their implementation and reproduction challenging. Therefore, these methods without open-source codes are intentionally excluded from our benchmark. In summary, we provide a diverse set of detectors that are user-friendly and easily reproducible, such that the researchers' burden of extensive reimplementation could be alleviated. §.§ Codebase We have built an extensible modular-based codebase as the basis of DeepfakeBench. As shown in Fig. <ref>, it consists of three core modules, including Data Processing Module, Training Module, and Evaluation and Analysis Module. Data Processing Module The Data Processing Module includes two pivotal sub-modules that automate the data processing sequence, namely the Data Preprocessing and Data Arrangement sub-modules. 
1) Data Preprocessing sub-module presents a streamlined solution. First, users are provided with a YAML configuration file, enabling them to tailor the preprocessing steps to their specific requirements. Second, we furnish a unified preprocessing script, which includes frame extraction, face cropping, face alignment, mask cropping, and landmark generation. 2) Data Arrangement sub-module further augments the convenience of data management. This sub-module comprises a suite of JSON files for each dataset. Users can execute a rearrangement script to create a unified JSON file for each dataset. This unified file provides access to the corresponding training, testing, and validation sets, along with other information such as the frames, landmarks, masks, etc., related to each dataset. Training Module The Training Module currently accommodates 15 detectors across three categories: naive detector, spatial detector, and frequency detector, all of which are shown in Tab. <ref>. 1) Naive Detector leverages various CNN architectures to directly detect forgeries without relying on additional manually designed features. 2) Spatial Detector builds upon the CNN backbones used in the Naive Detector and further explores manually designed algorithms to detect deepfakes. 3) Frequency Detector focuses on utilizing information from the frequency domain and extracting frequency artifacts for detection. Each detector implemented in our benchmark is managed in a streamlined and efficient way, with a YAML config file created for each one. This allows users to easily set their desired parameters, e.g., batch size, learning rate, etc. These detectors are trained on a unified trainer that records the metrics and losses during the training and evaluation process. Thus, the training and evaluation processes, logging, and visualization are handled automatically, eliminating the need for manual specification. Evaluation & Analysis Module For evaluation, we employ 4 widely used evaluation metrics: accuracy (ACC), the area under the ROC curve (AUC), average precision (AP), and equal error rate (EER). Besides, it is notable that there is an inconsistency in the usage of these evaluation metrics in the community: some are applied at the frame level, while others are applied at the video level, leading to unfair comparisons. Our benchmark currently adopts frame-level evaluation to build a fair basis for comparison among detectors. In addition to the values of these metrics, we also provide several visualizations to facilitate performance comparisons, e.g., the ROC-AUC curve, radar chart, and histogram. For analysis, we provide various visualization tools to gain deeper insights into the detectors' performance. For example, Grad-CAM <cit.> is used to highlight the potential forgery regions detected by the models, providing interpretability and assisting in understanding the underlying reasoning for the model's predictions. To explore the learned features and representations, we employ t-SNE visualization <cit.>. Furthermore, we offer custom visualizations tailored to specific detectors. For example, for Face X-ray <cit.>, we provide visualizations of the detection boundary of the face, as described in its original paper (see the top-right corner of Fig. <ref>). § EVALUATIONS AND ANALYSIS §.§ Experimental Setup In the data processing stage, face detection, face cropping, and alignment are performed using DLIB <cit.>. The aligned faces are resized to 256× 256 for both training and testing.
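To make the preprocessing step above concrete, the following is a minimal sketch of per-frame face cropping and alignment with DLIB and OpenCV. It is an illustration only: the function name extract_aligned_faces, the uniform 32-frame sampling, and the use of dlib's 68-landmark model are our own assumptions, not DeepfakeBench's actual preprocessing script.

```python
import cv2
import dlib
import numpy as np

# Assumed model file; the 68-landmark predictor must be downloaded separately from dlib.net.
PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)

def extract_aligned_faces(video_path, num_frames=32, size=256):
    """Sample frames uniformly from a video and return aligned size x size face crops."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, max(total - 1, 0), num_frames).astype(int)
    faces = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if not ok:
            continue
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        detections = detector(rgb, 1)          # upsample once to catch small faces
        if len(detections) == 0:
            continue
        shape = predictor(rgb, detections[0])  # 68 facial landmarks of the first detected face
        faces.append(dlib.get_face_chip(rgb, shape, size=size))  # rotation-normalized crop
    cap.release()
    return faces
```

The resulting crops (together with landmarks and, where available, masks) are the kind of per-frame data that the unified JSON files of the Data Arrangement sub-module then index.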
In the training module, we employ the Adam optimization algorithm with a learning rate of 0.0002. The batch size is fixed at 32 for all experiments. We sample 32 frames for each video for training and testing. We primarily leverage pre-trained backbones from ImageNet if feasible. Otherwise, we resort to initializing the remaining weights using a normal distribution. We also apply widely used data augmentations, i.e., image compression, horizontal flip, rotation, Gaussian blur, and random brightness contrast. In terms of evaluation, we compute the average value of the top-3 metrics (, average top-3 AUC) as our evaluation metric. We report other metrics (, AP, EER, and ACC) in the supplementary. Further details of experimental settings can be seen in the supplementary. §.§ Evaluations In this section, we focus on performing two types of evaluations: (1) within-domain and cross-domain evaluation, and (2) cross-manipulation evaluation. The purpose of the within-domain evaluation is to assess the performance of the model within the same dataset, while cross-domain evaluation involves testing the model on different datasets. We also perform cross-manipulation evaluation to evaluate the model's performance on different forgeries under the same dataset. Within-Domain and Cross-Domain Evaluations In this evaluation, we specifically train the model using FF++ (c23) as the default training dataset. Subsequently, we evaluate the model on a total of 14 different testing datasets, with 6 datasets for within-domain evaluation and 8 datasets for cross-domain evaluation. Tab. <ref> provides an extensive evaluation of various detectors, divided into Naive, Spatial, and Frequency types, based on both within-domain and cross-domain tests. Regarding the results in Tab. <ref>, we observe that, for the within-domain evaluations, a majority of the detectors performed commendably, evidenced by high within-domain AUC. Remarkably, detectors such as UCF, Xception, EfficientB4, and F3Net registered significant average scores, specifically 95.37%, 94.50%, 93.89%, and 94.49% respectively. Furthermore, an unexpected revelation comes from the performance of Naive Detectors. Astonishingly, Naive Detectors (, Xception and EfficientB4), which essentially rely on a straightforward CNN classifier, register high AUC values that are comparable to more sophisticated algorithms. This could potentially suggest that the performance leap from advanced state-of-the-art methods to Naive Detectors might not be as substantial as perceived, particularly in consistent settings (, pre-training or data augmentation). In other words, the performance gap could be a product of these additional factors rather than the intrinsic superiority of the method. To delve deeper into this phenomenon, we will investigate the impact of data augmentation, backbone architecture, pre-training, and the number of training frames in the following section (see Sec. <ref>). Cross-Manipulation Evaluations We also conduct a cross-manipulation evaluation to assess the model's performance on various manipulation forgeries within the FF++ dataset. Fig. <ref> compares the cross-manipulation detection performance of 10 detectors. Upon examining the figure, it becomes evident that the issue of generalization is prominent. While detectors such as CORE, EfficientB4, SPSL, SRM, and Xception exhibit excellent performance on the FF-DF test data when trained on FF-DF, their performance significantly deteriorates when faced with FF-FS forgeries. 
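As a brief aside on the evaluation protocol behind these tables, the frame-level metrics are standard and can be computed directly from per-frame scores. The sketch below is our own minimal version using scikit-learn, not the benchmark's exact evaluation code.

```python
import numpy as np
from sklearn.metrics import accuracy_score, average_precision_score, roc_auc_score, roc_curve

def frame_level_metrics(y_true, y_score, threshold=0.5):
    """y_true: 0 (real) / 1 (fake) label per frame; y_score: predicted fake probability per frame."""
    auc = roc_auc_score(y_true, y_score)
    ap = average_precision_score(y_true, y_score)
    fpr, tpr, _ = roc_curve(y_true, y_score)
    # EER is the operating point where the false positive rate equals the false negative rate.
    eer = fpr[np.nanargmin(np.abs(fpr - (1.0 - tpr)))]
    acc = accuracy_score(y_true, (np.asarray(y_score) >= threshold).astype(int))
    return {"AUC": auc, "AP": ap, "EER": eer, "ACC": acc}
```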
Interestingly, Face X-ray and DSP-FWA stand out as the exception, demonstrating exceptional performance in cross-manipulation detection. These detectors utilize blending technology, which dynamically generates forgery data by blending the training data. In summary, the cross-manipulation evaluation suggests that synthesizing data through blending has the potential to learn common features among different types of forgeries. This indicates the effectiveness of data synthesis techniques in improving the model's generalization capabilities in cross-manipulation scenarios. §.§ Analysis Effect of Data Augmentation We assess the influence of various augmentation techniques on the performance of forgery detectors in this section. Specifically, we investigate the impact of rotations, horizontal flips, image compression, isotropic scaling, color jitter, and Gaussian blur on two prototypical detectors: one from the spatial domain (Xception) and one from the frequency domain (SPSL). Fig. <ref> compares the performance when training these detectors with all data augmentations (denoted as “w_All"), without any data augmentations (“wo_All"), and without a specific augmentation. Our findings can be summarized into three main observations: First, in the case of within-domain evaluation (as seen in the FF++_c23 dataset), removing all augmentations appears to improve detector performance by approximately 2% for both Xception and SPSL, suggesting that most augmentations may have a negative impact within this context. Second, for evaluations involving compressed data (FF++_c40), certain augmentations such as Gaussian blur demonstrate effectiveness in both Xception and SPSL detectors, as they simulate the effects of compression on the data during training. Third, in the context of cross-domain evaluations (CelebDF-v2, DFD, and DFDCP), operations like compression and blur may significantly degrade the performance of SPSL in the DFD and DFDCP datasets, possibly due to their tendency to obscure high-frequency details. Similar negative effects of the blur operation are observed for Xception, likely as it diminishes the visibility of visual artifacts. These findings underscore the need for further exploration into identifying a universally beneficial augmentation that can be effectively utilized across a wide range of detectors in generalization scenarios, irrespective of their specific attributes or datasets. Effect of Backbone Architecture We here investigate the impact of different backbone architectures on the performance of forgery detection models. Specifically, we compare the performance of three popular backbones: Xception, EfficientNet-B4, and ResNet34. Each backbone is integrated into the detection model, and its performance is evaluated on both within-domain and cross-domain datasets (see Fig. <ref>). Our findings reveal that Xception and EfficientNet-B4 consistently outperform ResNet34, despite having a similar number of parameters. This indicates that the choice of backbone architecture plays a crucial role in detector performance, especially when evaluating the DeepfakeDetection dataset using CORE. In summary, these results highlight the critical role of carefully selecting a suitable backbone architecture in the design of deepfake detection models. Further research in this direction holds the potential for advancing the field in the future. Effect of Pre-training of the Backbone This analysis focuses on the impact of pre-training on forgery detection models. 
Following the previous section, we analyze three typical architectures: Xception, EfficientNetB4, and ResNet34. Fig. <ref> reveals that the pre-trained models can largely outperform their non-pre-trained counterparts, especially in the case of Xception (about 10% in DFDCP) and EfficientB4 (about 10% in DeepFakeDetection). This can be attributed to the ability of pre-trained models to capture and leverage meaningful low-level features. However, the benefits of pre-training are less pronounced for ResNet34, mainly due to its architectural design, which may not fully exploit the advantages offered by pre-trained weights. Overall, our findings underscore the importance of both architectural choices and the utilization of pre-trained weights in achieving optimal forgery detection performance. Effect of Number of Training Frames In this section, we investigate the impact of the number of training frames on the performance of forgery detection models. We consider the following numbers of training frames: 4, 8, 16, 32, 64, 100, and 270. Our analysis involves three representative methods: Xception, SPSL, and Face X-ray. Xception focuses on learning forgery artifacts in the spatial domain, SPSL captures frequency-domain artifacts, and Face X-ray employs a blending operation to dynamically generate forgery images using the training data. The results presented in Fig. <ref> indicate that all three methods tend to exhibit overfitting as the number of frames increases, particularly for Xception and SPSL. Specifically, with an increasing number of training frames (e.g., from 32 to 270), the performance of these detectors improves or plateaus in the within-domain evaluation, but declines in the cross-domain evaluation. Face X-ray, which generates data through blending, exhibits fluctuations in performance as the number of frames varies. Notably, approximately 32 frames yield better results for this method. Conversely, Xception and SPSL consistently experience a performance decline with an increasing number of frames. (Figure: t-SNE visualization of Xception trained with 270 frames (left) and 8 frames (right); the model trained with 270 frames learns forgery-unrelated features.) To further investigate the nature of overfitting and understand whether it is specific to the detection method or unrelated to forgery content, we conduct t-SNE visualizations (see Fig. <ref>). The visualization demonstrates that as the number of training frames increases, the models struggle to differentiate between genuine and forged instances, as there is more overlap between the forgery and genuine data points in the t-SNE space. These findings underscore the significance of carefully selecting the number of training frames to achieve optimal performance and generalization in forgery detection models. Increasing the number of frames beyond a certain threshold can lead to overfitting, hampering the models' ability to generalize to unseen data. It is crucial to strike a balance to avoid overreliance on specific forgery artifacts present in the training data, thus ensuring robust detection performance across various domains. § CONCLUSIONS, LIMITATIONS & FUTURE PLANS, AND SOCIETAL IMPACTS Conclusions We have developed DeepfakeBench, a groundbreaking and comprehensive framework, emphasizing the benefits of a modular architecture, including extensibility, maintainability, fairness, and analytical capability.
We hope that DeepfakeBench could contribute to the deepfake detection community in various ways. First, it provides a concise yet comprehensive platform that incorporates a tailored data processing pipeline, and accommodates a wide range of detectors, while also facilitating a fair and standardized comparison among various models. Second, it assists researchers in swiftly comparing their new methods with existing ones, thereby facilitating faster development and iterations. Last, the in-depth analysis and comprehensive evaluations performed through our benchmark have the potential to inspire novel research problems and drive future advancements in the field. Limitations & Future Plans To date, DeepfakeBench primarily focuses on providing algorithms and evaluations at the frame level. We will further enhance the benchmark by incorporating video-level detectors and evaluation metrics. We also plan to carry out evaluations for detecting images directly produced by diffusion or GANs, using the existing benchmark. Additionally, we aim to include a wider range of typical detectors and datasets to offer a more comprehensive platform for evaluating the performance of detectors. DeepfakeBench will continue to evolve as a valuable resource for researchers, facilitating the development of advanced deepfake detection technologies. Societal Impacts DeepfakeBench provides a robust platform to develop and evaluate deepfake detection techniques. Facilitating the development of reliable detection strategies, it assists in curbing the spread of misleading deepfake, safeguarding societal trust, and promoting the responsible use of such technology. Furthermore, it may prompt legislative discussions aimed at preventing the misuse of deepfake technologies. § CONTENTS IN SUPPLEMENTARY MATERIAL The Supplementary Material accompanying this paper provides additional details and information that could not be included in the main paper due to space constraints. Our Supplementary Material is organized as follows: (1) Details of Data Processing This section provides further elaboration on the data processing steps, including face detection, face cropping, alignment, and . (2) Details of Algorithms Implementation and Visualizations This section dives into the implementation details of the algorithms used in the study. It also includes additional visualizations to help readers gain a deeper understanding of the experimental results. (3) Training Details and Full Experimental Results: This section presents comprehensive details of the training process, including additional evaluation metrics beyond those reported in the main paper. plain § CHECKLIST * For all authors... * Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? * Did you describe the limitations of your work? * Did you discuss any potential negative societal impacts of your work? * Have you read the ethics review guidelines and ensured that your paper conforms to them? * If you are including theoretical results... * Did you state the full set of assumptions of all theoretical results? * Did you include complete proofs of all theoretical results? * If you ran experiments (e.g. for benchmarks)... * Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? See the Github repository of DeepfakeBench at <https://github.com/SCLBD/DeepfakeBench>. 
* Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? We specify all the training details in the supplementary. * Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? * Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? * If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... * If your work uses existing assets, did you cite the creators? The implementations of some existing algorithms are modified based on their original source codes, and we clearly describe the original link (see <https://github.com/SCLBD/DeepfakeBench>). * Did you mention the license of the assets? * Did you include any new assets either in the supplemental material or as a URL? See the Github repository of DeepfakeBench <https://github.com/SCLBD/DeepfakeBench>. * Did you discuss whether and how consent was obtained from people whose data you're using/curating? * Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? * If you used crowdsourcing or conducted research with human subjects... * Did you include the full text of instructions given to participants and screenshots, if applicable? * Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? * Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
http://arxiv.org/abs/2307.01148v2
20230703163928
Investigating Data Memorization in 3D Latent Diffusion Models for Medical Image Synthesis
[ "Salman Ul Hassan Dar", "Arman Ghanaat", "Jannik Kahmann", "Isabelle Ayx", "Theano Papavassiliu", "Stefan O. Schoenberg", "Sandy Engelhardt" ]
cs.CV
[ "cs.CV", "eess.IV" ]
Department of Internal Medicine III, Group Artificial Intelligence in Cardiovascular Medicine, Heidelberg University Hospital, 69120 Heidelberg, Germany SalmanUlHassan.Dar@med.uni-heidelberg.de DZHK (German Centre for Cardiovascular Research), Heidelberg, Germany AI Health Innovation Cluster, Heidelberg, Germany Department of Radiology and Nuclear Medicine, University Medical Center Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany First Department of Medicine-Cardiology, University Medical Centre Mannheim, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany Investigating Data Memorization in 3D Latent Diffusion Models for Medical Image Synthesis Salman Ul Hassan Dar1,2,3 Arman Ghanaat1,2 Jannik Kahmann4 Isabelle Ayx4 Theano Papavassiliu5 Stefan O. Schoenberg4 Sandy Engelhardt1,2 August 1, 2023 Generative latent diffusion models have been established as state-of-the-art in data generation. One promising application is generation of realistic synthetic medical imaging data for open data sharing without compromising patient privacy. Despite the promise, the capacity of such models to memorize sensitive patient training data and synthesize samples showing high resemblance to training data samples is relatively unexplored. Here, we assess the memorization capacity of 3D latent diffusion models on photon-counting coronary computed tomography angiography and knee magnetic resonance imaging datasets. To detect potential memorization of training samples, we utilize self-supervised models based on contrastive learning. Our results suggest that such latent diffusion models indeed memorize training data, and there is a dire need for devising strategies to mitigate memorization. § INTRODUCTION Contemporary developments in deep generative modeling have led to performance leaps in a broad range of medical imaging applications <cit.>. One promising application is generation of novel synthetic images <cit.>. Synthetic images can be used for data diversification by synthesizing samples belonging to underrepresented classes for training of data-driven models, or for sharing of synthetic data for open science without compromising patient privacy. State-of-the-art generative models are based on latent diffusion <cit.>. These models first project data onto a compressed latent space, learn the latent space distribution through a gradual denoising process, and synthesize novel latent space samples, followed by projection onto a high dimensional pixel space <cit.>. Despite the ability to synthesize high quality samples, recent studies in computer vision suggest that latent diffusion models (LDMs) are prone to training data memorization <cit.>. This can be more critical in medical imaging, where synthesizing real patient data defeats the whole purpose of preserving data privacy. These computer vision studies further suggest that the phenomenon of data memorization is more prevalent in low data regimes <cit.>, which is very often the case in the medical domain. Despite the importance of patient privacy, it is surprising that data memorization in generative models has received little attention in the medical imaging community. Here, we investigate the memorization capacity of 3D-LDMs in medical images. To this end, we train LDMs to generate 3D volumes (Fig.
<ref>) and compare novel generated samples with real training samples via self-supervised models (Fig. <ref>) for potential memorization. For assessment, we perform experiments on an in-house photon-counting coronary computed tomography angiography (PCCTA) dataset and a public knee MRI (MRNet) dataset <cit.>. Our results suggest that LDMs indeed suffer from training data memorization. §.§ Data Generation via LDMs LDMs belong to a family of generative models that learn to generate novel realistic samples by denoising normally distributed noise in a compressed lower dimensional latent space <cit.>. LDMs consist of two models: §.§.§ Latent Encoding Model First, an encoder learns to project samples onto a lower dimensional latent space. This lower dimensional latent space is typically learned using an autoencoder. The autoencoder is trained to encode the image x ∈ℝ^L × H × W to a latent space z ∈ℝ^L' × H' × W' using an encoder ℰ having parameters θ_ℰ (z = ℰ(x)), followed by reconstruction via a decoder 𝒟 having parameters θ_𝒟 (x̂ = 𝒟(z)). Overall, the training is performed to minimize the following reconstruction loss function: ℒ_rec(θ_ℰ, θ_𝒟) = 𝔼_p(x)[‖ x - x̂‖_1] where 𝔼_p(x) denotes expectation with respect to the data distribution p(x). Since the encoder and decoder are trained simultaneously with the aim of recovering the original image from a lower dimensional representation, the encoder learns to project the data onto a semantically meaningful latent space without losing much information. §.§.§ Diffusion Model Afterwards, a denoising diffusion probabilistic model (DDPM) is trained to recover the meaningful latent space from normally distributed noise. DDPMs consist of a forward and a reverse diffusion step. In the forward step, normally distributed noise is added to the latent representation (z) in small increments. At any time t, the relation between z_t and z_t-1 can be expressed as: q( z_t|z_t-1) = 𝒩( z_t; √(1-β_t)z_t-1,β_tI) where β_t is the variance schedule <cit.> and q( z_t|z_t-1) is the conditional distribution. In the reverse step, a model is trained to approximate q( z_t-1|z_t) as p_θ( z_t-1|z_t). Once trained, the model can be used to synthesize novel representations (z_0) given z_T∼𝒩( 0,I). The latent code z_0 can then be fed as input to the decoder (𝒟) to generate novel samples from the data distribution p(x). §.§ Memorization Assessment Although LDMs have outperformed their counterpart generative models in medical image synthesis in terms of image quality and diversity <cit.>, their capacity to memorize training samples remains relatively unexplored. This is surprising, considering that one of the main goals of sharing synthetic data is to preserve patient privacy. Memorization of patient data defeats this purpose, and the quality of the synthesized samples becomes secondary. Since the primary focus of this work is memorization, it is important to first define what constitutes “memorization”. Akbar et al. <cit.> define memorization as a phenomenon where generative models can generate copies of training data samples. However, they do not explicitly define what a “copy” means, and their use of “copy” seems to be limited to a synthesized sample that is identically oriented to a training sample and shares the same anatomical structures with minor differences such as image quality. They detect potential copy candidates by computing pairwise normalized correlation between synthesized and training samples.
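For concreteness, such a pixel-space check can be sketched in a few lines of NumPy; the snippet below is our own minimal illustration of the idea, not Akbar et al.'s actual implementation.

```python
import numpy as np

def normalized_correlation_candidates(train_vols, synth_vols):
    """train_vols, synth_vols: arrays of shape (N, D), each volume flattened to a vector.
    Returns, per training volume, the index of and correlation with its closest synthetic volume."""
    def unit_rows(a):
        a = np.asarray(a, dtype=np.float64)
        a = a - a.mean(axis=1, keepdims=True)          # remove each volume's mean intensity
        return a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-8)
    t, s = unit_rows(train_vols), unit_rows(synth_vols)
    corr = t @ s.T                       # (N_train, N_synth) normalized correlations in [-1, 1]
    best = corr.argmax(axis=1)           # most similar synthetic sample per training sample
    return best, corr[np.arange(len(t)), best]
```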
This fails to take into account that synthetic samples can also be flipped or rotated versions of the training samples, which can easily result as a consequence of data augmentation strategies typically used for training deep models. In our work, we expand the definition of “copy” to further include rotated and flipped versions of a training sample. This definition can further be broadened to include other forms of variations such as slight deformation. However, for simplicity here we limit the additional variations to flipping and rotation. To detect potential copies, we first train a self-supervised model based on the contrastive learning approach (Fig. <ref>). The aim is to have a low dimensional latent representation of each sample such that the augmented versions of a sample have similar latent representations, and different samples have distinct representations. The model is trained to minimize the following loss function <cit.>: ℒ_con(θ_con) = 𝔼_p(x)[ max(0, ‖ f_θ_con(x) - f_θ_con(x^+) ‖_2 - ‖ f_θ_con(x) - f_θ_con(x^-) ‖_2) ] where x corresponds to a training volume, x^+ is the similar sample, which is an augmented version of x, x^- denotes the dissimilar sample, which is simply a different volume, and f_θ_con(·) is the network with trainable parameters θ_con that maps the input x to a low dimensional representation. After training f_θ_con(·), we compare the embeddings of the synthesized samples with those of the real samples in the low dimensional representational space. § METHODS §.§ Datasets To demonstrate memorization in medical imaging, we selected two datasets covering a range of properties in terms of imaging modalities, organs, resolutions, 3D volume sizes, and dataset sizes. We conducted experiments on an in-house photon-counting coronary computed tomography angiography (PCCTA) dataset and a publicly available knee MRI (MRNet) dataset <cit.>. PCCTA images were acquired from 65 patients on a Siemens Naeotom Alpha scanner at the xxx xxx xxx. Ethics approval was granted by the ethics committee of xxx (ID xxx). Images were acquired with a resolution of approximately 0.39mm×0.39mm×0.42mm. In all patients, coronary artery plaques were annotated by an expert radiologist. Sub-volumes of size 64×64×64 surrounding plaques were extracted, resulting in 242 sub-volumes for training and 58 sub-volumes for validation in total. In MRNet, T2-weighted knee MR images of 1130 subjects were analyzed, where 904 subjects were used for training and 226 for validation. All volumes were cropped or zero-padded to have sizes of 256×256×32. In both datasets, each volume was normalized to have voxel intensities in the range [-1, 1]. §.§ Networks The LDM architecture, training procedures, and loss functions were directly adopted from Khader et al. <cit.>. For the training of the diffusion and autoencoder models, all hyperparameters were matched with the ones selected in Khader et al. <cit.>. The only exception was the batch size in the diffusion models, which was set to 10 to fit the models into the GPU VRAM. For contrastive learning, the network architecture was adopted from the encoder in the latent encoding model. The encoder was used to reduce the sub-volume dimensions to 4×4×4 and 8 channels. Afterwards, flattening was performed, followed by two densely connected layers to reduce the latent space embeddings to dimensions 32×1. All hyperparameters except for the learning rate and epochs were identical to the latent encoding model. The learning rate and epochs were tuned using held-out validation data.
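A minimal PyTorch sketch of this zero-margin triplet objective, and of the embedding comparison used in the next section, is given below. The encoder is left abstract and the helper names are our own, so this should be read as an illustration of the procedure under stated assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_triplet_loss(f_x, f_pos, f_neg):
    """Zero-margin triplet loss: pull the embedding of an augmented view toward that of the
    original volume and push the embedding of a different volume away; inputs are (B, 32)."""
    d_pos = torch.norm(f_x - f_pos, dim=1)
    d_neg = torch.norm(f_x - f_neg, dim=1)
    return F.relu(d_pos - d_neg).mean()

@torch.no_grad()
def copy_candidates(train_emb, synth_emb):
    """For each training embedding, return the index of and mean squared distance (MSD)
    to its nearest synthetic embedding."""
    msd = torch.cdist(train_emb, synth_emb, p=2) ** 2 / train_emb.shape[1]
    best = msd.argmin(dim=1)
    return best, msd[torch.arange(train_emb.shape[0]), best]
```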
§ RESULTS §.§ Memorization Assessment First, 1000 synthetic PCCTA samples (approx. 4 × training data) were generated using the trained LDM. All synthetic and training samples were then passed through the self-supervised models (Section 1.2) to obtain corresponding lower dimensional embeddings. Next, mean square distance (MSD) was computed between all training and synthetic embeddings. For each training sample, the closest synthetic sample was considered as a copy candidate. Fig. <ref>a shows MSD distribution of the candidate copies. To get a better idea of the MSD scale, for each training sample the closest real validation sample in the embedding space was also considered (Fig. <ref>a). Low values on the x-axis denote lower distance or high similarity. The MSD distribution of synthetic samples is more concentrated near zero compared to the MSD distribution of real validation samples. To further assess if the candidate synthesized samples are indeed copies, each candidate copy was also labelled manually as a copy or a novel synthetic sample via visual assessment by consensus of two users. As shown in Fig. <ref>b, most of the candidates with low MSD values are copies. Upon comparing copies with novel samples in <ref>b, we also observe that 59% of the training data has been memorized. This number is alarming, as it indicates memorization at a large scale. It is also important to note that this percentage is based on just 1000 synthesized samples. Increasing the synthetic samples could lead to an increased number of copies. Fig. <ref>a shows some copy candidates. It can be seen that synthetic samples show stark resemblance with the training samples. We then assess memorization in the MRNet dataset, which is relatively a larger dataset containing 904 training volumes. 3600 synthetic samples (approx. 4 × training data) were generated using the LDM trained on the MRNet dataset. Fig. <ref>c shows MSD distribution of synthetic candidate copies and validation samples. We observe similar patterns in the MRNet dataset. However, MSD distribution of synthetic candidate copies in MRNet is further away from zero compared to the PCCTA dataset. This can be explained by the training data size, as models with large training datasets get to learn distribution from many diverse samples and thus are less likely to memorize the data. We also annotated 150 randomly selected copy candidates as copy or novel samples (Fig. <ref>d). We find 33% of the copy candidates to be copies. Fig. <ref>-b also shows representative samples. §.§ Data Augmentation We also analyze the effect of data augmentation on memorization by comparing MSD distribution of copy candidates generated by models trained with augmentation (augmented models) and without augmentation (non-augmented models) on the PCCTA dataset. Fig. <ref> compares the MSD distributions. MSD distribution of the non-augmented model tends to have higher density near zero. Overall, 41% of the training dataset is memorized in augmented models compared to 59% in non-augmented models. This suggests that the non-augmented models tend to memorize more than the augmented models. One possible explanation could be artificial expansion of datasets through augmentation, which can in turn lead to fewer repetitions of identical forms of a sample during training. § DISCUSSION There has been a considerable amount of focus on generative models in medical image synthesis. 
Here, we tried to assess if these models actually learn to synthesize novel samples as opposed to memorizing training samples. Our results suggest that LDMs indeed memorize samples from the training data. This can have broad implications in the medical imaging community, since leakage of patient data in the form of medical images can lead to violation of patient privacy. An interesting future prospect could be to understand the underlying reasons leading to memorization. Somepalli et al. <cit.> suggest that data duplication during training could be an important factor, as a repeated sample is seen many more times during training. They further suggest that unconditional models primarily suffer from data memorization in low data regimes, which is similar to what we observe when we compare memorization in the PCCTA and MRNet datasets. Nonetheless, it is an important research direction which warrants future work. To our knowledge, this is the first study assessing memorization in 3D LDMs for medical image synthesis. Another independent study recently investigated memorization in deep diffusion models <cit.>. There are several differences between our study and Akbar et al.: 1) Akbar et al. trained 2D models on medical images, whereas we trained 3D models, which are more consistent with the 3D nature of medical images. 2) Akbar et al. is based on diffusion models in pixel space, which are not easily applicable to 3D medical images due to high computational demands. In contrast, here we used LDMs, which first project the data onto a low dimensional latent space to reduce computational complexity while ensuring that the relevant semantic information is preserved. 3) Akbar et al. used correlation between images in the pixel space to assess memorization. While this approach detects identical copies, it cannot detect augmented or slightly different copies. Here, we trained a self-supervised model that can also account for augmented versions of the training samples, which might be missed by computing regular correlations in pixel space. 4) We also assessed memorization on models that were trained using augmented data. This is a more realistic scenario when training deep models on medical images due to data scarcity. §.§.§ Acknowledgment: This work was supported through state funds approved by the State Parliament of Baden-Württemberg for the Innovation Campus Health + Life Science Alliance Heidelberg Mannheim. In addition, this work was supported by the BMBF-SWAG Project 01KD2215D. The authors also gratefully acknowledge the data storage service SDS@hd supported by the Ministry of Science, Research and the Arts Baden-Württemberg (MWK) and the German Research Foundation (DFG) through grant INST 35/1314-1 FUGG and INST 35/1503-1 FUGG.
http://arxiv.org/abs/2307.01279v2
20230703180830
The ChromaStar+ modelling suite and the VALD line list
[ "C. Ian Short" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.EP", "astro-ph.IM" ]
Department of Astronomy & Physics and Institute for Computational Astrophysics, Saint Mary's University, Halifax, NS, Canada, B3H 3C3 ian.short@smu.ca We present Version 2023-02-04 (ISO) of the Chroma+ atmospheric, spectrum, and transit light-curve modelling suite, which incorporates the VALD atomic line list. This is a major improvement as the previous versions used the much smaller NIST line list. The NIST line list is still available in Chroma+ for those projects requiring speed over completeness of line opacity. We describe a procedure for exploiting the ”Array job” capability of the slurm workload manager on multi-cpu machines to compute broadband high resolution spectra with the VALD line list quickly using the Java version of the code (ChromaStarServer (CSS)). The inclusion of a much larger line list more completely allows for the many weaker lines that over-blanket the blue band in late-type stars and has allowed us to reduce the amount of additional ad hoc continuous opacity needed to fit the solar spectral energy distribution (SED). The additional line opacity exposed a subtle bug in the spectrum synthesis procedure that was causing residual blue line wing opacity to accumulate at shorter wavelengths. We present our latest fits to the observed solar SED and to the observed rectified high resolution visible band spectra of the Sun and the standard stars Arcturus and Vega. We also introduce the fully automated Burke-Gaffney Observatory (BGO) at Saint Mary's University (SMU) and compare our synthetic spectra to low resolution spectra obtained with our grism spectrograph that is available to students. The fully automated BGO, the spectrograph, and the BGO spectrum reduction procedure are fully described in a companion paper. All codes are available from the OpenStars www site: www.ap.smu.ca/OpenStars. § INTRODUCTION In <cit.> and papers in that series we have described an integrated cross-platform atmospheric modelling, spectrum synthesis and transit light-curve modelling code (the Chroma+ suite) developed in platform-independent languages including Python (ChromaStarPy (CsPY)), Java (ChromaStarServer (CSS)), and Javascript. In particular, <cit.> describe incorporation of the GAS package into CsPy and CSS, providing the codes with a mature, competitive module for handling the combined chemical equilibrium, ionization equilibrium, and equation-of-state (EOS) problem for over a hundred atomic, ionic, diatomic, and polyatomic species, which in turn allows for more realistic calculation of the line and continuum extinction distributions, κ_λ(τ). The motivation has been to provide a more widely accessible numerical laboratory for rapid numerical experiments in spectrum synthesis and light-curve modelling, and a responsive electronic spectral atlas for quick spectral reconnaissance with ad hoc stellar parameters. To expedite our proof-of-concept, we initially prepared a limited atomic line list based on the NIST atomic database <cit.> containing ∼ 26 000 lines in the λ 260 to 2600 nm region (2.9 Mbytes). With Version 2023-02-04, we have prepared a new, larger atomic line list based on the VALD database (<cit.>, <cit.>) containing ∼ 613 000 lines in the λ 250 to 2600 nm region (36 Mbytes) for as many as six ionization stages of all elements up to and including Ge (z=32) and additional select elements up to La (z=57). 
Because responsiveness is one of the key distinguishing characteristics of the Chroma+ suite, CsPy and CSS provide a new input parameter that allows the user to select between the NIST or VALD lists. Molecular band opacity continues to be treated in the Just-Overlapping-Line-Approximation (JOLA, <cit.>) and we do not require a molecular line list. § RELATED IMPROVEMENTS The VALD line list includes ∼ 70 000 lines of Fe1 and ∼ 80 000 lines of Fe2 and represents a ∼ 24× increase in the number of potential lines considered in the CSS spectrum synthesis procedure. This additional line opacity allows the Chroma+ suite to more accurately model the effect of over-blanketing in the blue band of GK stars. As reported in <cit.>, the Chroma+ suite uses an unusual method of sampling the λ range over which each spectral line profile, ϕ_λ(λ-λ_ 0), is distributed about line center (λ_ 0), based on how spectral lines are treated in non-local thermodynamic equilibrium (NLTE) by the PHOENIX code <cit.>. As each spectral line that passes an initial test of the line-to-continuum opacity ratio at line center (κ^ l_λ_0/κ^ c_λ_0 > ϵ) at three reference logτ_λ_0 values throughout the atmosphere is added to the extinction distribution spectrum (κ_λ(τ)), its own line-specific grid of λ_0-centered λ points is inserted into the master λ grid. After all lines have been added, the grid is swept of λ points for which Δλ is less than a Δλ_ min value of ∼ 0.001 nm, approximately consistent with a numerical resolving power of ∼ 500 000. This has the advantage of minimizing the number of λ points at which the κ_λ(τ) distribution and the corresponding monochromatic emergent intensity, I_λ (τ_λ = 0), must be computed in cases where there are under-blanketed spectral regions. The large increase in the density of spectral lines provided by the VALD line list revealed a subtle bug in our procedure: as each line is added, the previous κ(λ) distribution is interpolated onto the updated one, and a small residual line opacity at the first λ-point in each line-specific grid was accumulating at all λ values of λ < λ_0 each time a new line was added, eventually amounting to significant spurious excess continuum opacity progressively at smaller λ values. This bug has now been fixed. We also improved our λ interpolation procedure to ensure a symmetric distribution of λ points about the λ_0 value of all lines more reliably, with the result that broad lines now have wings that appear more symmetric. We have used the more complete spectrum synthesis to perform the following, in order: 1) Reduce the amount of ad hoc extra continuous opacity needed throughout the visible band (“opacity fudge”) to fit the overall observed SED of the Sun with a model of solar input parameters from a factor of ∼ 3 to a factor of ∼ 1.5, and 2) Re-calibrate the fine tuning of the linear Stark broadening of H1 Balmer lines in stars of spectral class B to F V. Fig. <ref> shows the synthetic spectra in the λ 400 to 700 nm region for 31 models spanning the T_ eff range 3600 to 22 000 K with log g = 4.5 and [A/H] = 0.0, with Δ T_ eff intervals of 200 K for T_ eff≤ 8000 K and 400 K for T_ eff > 8000 K. We have also computed structures and spectra for models of log g = 2.0 at select T_ eff values less than 5000 K.
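To illustrate the sampling strategy just described, the following NumPy sketch builds a master wavelength grid by inserting a line-specific grid for each accepted line and then sweeping out points spaced more finely than Δλ_min. It is a schematic Python illustration only; the actual CSS implementation is in Java and uses its own line-specific point spacing.

```python
import numpy as np

def line_grid(lam0, half_width, n_half=10):
    """A simple lam0-centered grid out to +/- half_width (nm); real line grids are denser
    near the core, a detail omitted here."""
    offsets = np.linspace(0.0, half_width, n_half + 1)
    return np.unique(np.concatenate([lam0 - offsets[::-1], lam0 + offsets]))

def build_master_grid(continuum_grid, accepted_lines, dlam_min=1.0e-3):
    """Insert each accepted line's grid, then sweep points closer together than dlam_min
    (~0.001 nm, i.e. a numerical resolving power of ~5e5 at 500 nm)."""
    grid = np.asarray(continuum_grid, dtype=float)
    for lam0, half_width in accepted_lines:
        grid = np.concatenate([grid, line_grid(lam0, half_width)])
    grid = np.sort(grid)
    kept = [grid[0]]
    for lam in grid[1:]:
        if lam - kept[-1] >= dlam_min:   # sweep: drop points that over-resolve the grid
            kept.append(lam)
    return np.array(kept)
```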
§ PERFORMANCE The Slurm workload manager provided on the Digital Research Alliance of Canada (DRAC) parallel computing clusters allows for “Array” jobs in which parts of the execution that can be associated with a CPU identifier are automatically scattered to the corresponding CPUs. The spectrum synthesis calculations at a given λ value are independent of those at another, and we use the Array job capability to automatically parallelize the wavelength domain of the spectrum synthesis. Because the density of spectral lines per unit Δλ interval increases with decreasing λ value, we distributed the splice points in the wavelength parallelization in approximately equal Δlogλ intervals, finessed ad hoc to avoid splice points falling in the wings of lines that are broad in stars of the input stellar parameter values. For our performance test, we divided the λ 400 to 750 nm range into eight logarithmic sub-ranges, each of which was handled by one of eight CPUs. Because spectrum synthesis is by far the slowest major stage in our integrated modeling process, we achieved a significant reduction in wall-clock time. The Slurm “sacct” and “seff” reporting tools indicate CPU times and memory allocations ranging from 22 minutes and 0.7 Gbytes to 80 minutes and 1.2 Gbytes, with both values generally increasing with decreasing λ values of the assigned λ range. § COMPARISON TO STANDARD STARS §.§ The Sun Fig. <ref> shows the continuum-normalized synthetic surface flux spectrum, F_λ(λ)/F^ C_λ(λ), for a model of canonical grid input parameters close to those of the Sun of (T_ eff/log g/[A/H]) = (5800 K/4.5/0.0), as computed with CSS with both the NIST and the VALD line lists. We have used the abundance distribution of <cit.>, adopted a microturbulent velocity dispersion, ξ_ T, of 1 km s^-1, and a Van der Waals broadening enhancement parameter, γ_ VW, of ∼ 3. Also shown is the observed solar flux spectrum of <cit.> of spectral resolution, R, of 300 000 in a representative region around the Na1 D_ 2 lines where the spectrum is relatively uncrowded. For this comparison we have smoothed the computed F_λ(λ) distribution with a Gaussian kernel with a σ value of 2.0 km s^-1. Fig. <ref> shows the same comparison for the more crowded Ca1 4227 line region. Fig. <ref> shows the spectral energy distribution (SED) of the same model, projected from the Sun's effective surface to the Earth's distance from the Sun, along with the observed solar irradiance spectrum of <cit.>. For this comparison, both the observed and computed F_λ distributions were broadened with a Gaussian kernel with a σ value of 250 km s^-1 to allow an assessment of the realism with which the spectral structure is being modelled on the broadband scale. Fig. <ref> shows the same comparison for the λ 400 to 500 nm region where the Sun's F_λ distribution peaks. As seen in Figs. <ref> and <ref>, the new larger line list provides for a more realistically line blanketed SED for GK stars in the λ < 500 nm range, where the SED becomes heavily blanketed and eventually over-blanketed with decreasing λ value. Fig. <ref> also shows the location of the “Array job” splice points for the case of the Sun's spectrum (see section <ref>). B-V color index: The additional line opacity has a small effect on the computed B-V color, decreasing its value by ∼ 0.015 mag. Fig.
<ref> shows the relative difference spectrum (F^ VALD_λ - F^ NIST_λ)/F^ NIST_λ along with the BVR transmission curves, and reveals that the additional VALD line opacity has a complicated distribution throughout the B and V bands. The synthetic F_λ spectra were each broadened by convolution with a Gaussian kernel function with a σ value of 100 km s^-1 before subtraction. The calibrated synthetic B-V, V-R, V-I, and R-I color indices in the Johnson-Bessell system computed with the smaller NIST line list are (0.365, 0.717, 1.355, 0.277), whereas those computed with the new VALD line list are (0.351, 0.854, 1.530, 0.293). In both cases, the indices are calibrated with a single-point correction with a model of Vega computed with the VALD line list and input parameters of (T_ eff/log g/[A/H]) = (9550 K/3.95/-0.5) <cit.>. For comparison, the color indices computed with our procedure from the observed solar irradiance spectrum of <cit.> are (0.477, 0.846, 1.408, 0.244). §.§ Arcturus Figs. <ref> and <ref> show the F_λ(λ)/F^ C_λ(λ) distribution for a model of canonical grid point input parameters and scaled solar abundances of (T_ eff/log g/[A/H]) = (4200 K/1.5/-0.5), as computed with the VALD line list and smoothed with a Gaussian kernel with a σ value of 4.0 km s^-1, along with the observed flux spectrum of <cit.> with an R value of 150 000 in the same Na1 D_ 2 and Ca1 4227 regions. <cit.> carried out a careful spectral analysis of Arcturus with an atomic line list calibrated with a solar model of the Sun's spectrum, and found parameter values of (T_ eff/log g/[Fe/H]) = (4300 K/1.5/-0.5) with enhanced abundances of the α elements of +0.3 dex. We note that to avoid damped spectral lines with wings that are grossly over-broadened, we reduced the γ_ VW value to one (i.e., no enhancement). §.§ Vega Fig. <ref> shows the comparison between the observed high resolution spectrum of Vega of <cit.> and the synthetic spectrum computed with the VALD line list, the input parameters of <cit.>, and the updated H1 broadening throughout the visible band. As reported in Section <ref>, we have re-tuned the H1 Balmer line linear Stark broadening strengths to match the spectrum of Vega. § COMPARISON TO BURKE-GAFFNEY OBSERVATORY (BGO) SPECTRA The Burke-Gaffney Observatory (BGO, lat. +44 37 50, long. -63 34 52) at Saint Mary's University (SMU) consists of a 0.6 m (24”) f/6.5 Planewave CDK24 telescope and is equipped with an f/4 model PF0035 ALPY 600 grism spectrograph from Shelyak Instruments. We operate the spectrograph with a slit width of 23 μm, corresponding to a spectral resolving power, R, of ∼ 600, equivalent to a spectral resolution element of Δ v ∼ 500 km s^-1 at λ∼ 6000 Å. The camera is an Atik 314L+ CCD camera with 1391× 1039 imaging pixels of size 6.45× 6.45 μm. The setup provides a reciprocal linear dispersion, Δλ /Δ x, of ∼ 420 Å mm^-1 and a spectral range, Δλ, of ∼ 3750 Å, effectively covering the entire visible band from ∼ 4000 to ∼ 7000 Å after accounting for edge effects. §.§ Observations Table <ref> and Fig. <ref> present a set of commissioning spectra of seven bright stars acquired on 15 April 2021 by the BGO Director and Astronomy Technician at the time, Mr. David Lane. The set includes six luminosity class V stars spanning the range of spectral class from K5 to B4 and one luminosity class III star of spectral class M3.
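Because the comparisons in this and the preceding section repeatedly rely on smoothing a synthetic spectrum with a Gaussian kernel of a given σ in km s^-1, a schematic NumPy version of that operation is shown here. It is our own simplified illustration, not the smoothing routine used in the Chroma+ suite.

```python
import numpy as np

C_KMS = 2.99792458e5  # speed of light in km/s

def smooth_velocity(lam, flux, sigma_kms, dv_kms=0.5):
    """Convolve a spectrum with a Gaussian of width sigma_kms in velocity space.
    The spectrum is first resampled onto a uniform log-lambda grid, where a constant
    velocity step corresponds to a constant step in ln(lambda)."""
    ln_lam = np.log(lam)
    dln = dv_kms / C_KMS
    ln_uniform = np.arange(ln_lam[0], ln_lam[-1], dln)
    flux_uniform = np.interp(ln_uniform, ln_lam, flux)

    sigma_pix = sigma_kms / dv_kms
    half = int(np.ceil(4.0 * sigma_pix))
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma_pix) ** 2)
    kernel /= kernel.sum()

    smoothed = np.convolve(flux_uniform, kernel, mode="same")
    return np.interp(ln_lam, ln_uniform, smoothed)  # back onto the original sampling
```

At the BGO resolving power of R ∼ 600, the corresponding resolution element is of order c/R ∼ 500 km s^-1, which is why only broad features such as the Na1 D lines, the TiO bands, and the H1 Balmer lines remain useful diagnostics in the spectra discussed below.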
§.§ Reduction procedure This is the first presentation of BGO spectra in the research literature and in a companion paper <cit.> we describe in detail our own locally developed BGO post-processing pipeline in the Python programming language for SMU students and present a brief summary here. The pipeline consists of automatic bias, dark current and flat-field corrections performed by software shipped with the ALPY 600 spectrograph. That is followed up by 1) Wavelength calibration by fitting a 2^ nd-order polynomial to the relation ship between pixel columns and the known λ value of Ar-Ne lines, 2) Subtraction of a 0^ th-order fit to the remaining residual background to reduce the pedestal signal to zero, 3) Automatic location of the brightness peak in the cross-dispersion profile in sample columns across the chip, 4) Generation of a root-N model cross-dispersion weight profile that is centered on the cross-dispersion peak in each column, 5) Formation of a 1D spectrum by summing columns weighted by the model profile from Step 4), 6) A two-step automatic approximate continuum rectification of the ∼ 4200 to 6800 Å  region that is designed for spectra unaffected by emission features or deep TiO bands (spectral classes B to K). §.§ Comparison of BGO and CSS spectra In Figs. <ref>, <ref>, and <ref> we show the comparison between observed spectra from our BGO observing run and synthetic spectra from our model grid bracketing the nominal T_ eff value corresponding to the spectral type listed in <cit.>. Nominal T_ eff values were taken from Appendix G of <cit.>. Because we can only achieve an approximate continuum rectification for late type stars with high Δλ/Δ x broadband data, we do not attempt a quantitative fit based on minimizing a fitting statistic, but only a perform a visual inspection of the fit quality. At the low R and high ΔλΔ x values of these spectra, the main features at which we can assess the fit within the ∼ 4200 to ∼ 6800 Å  rectification range for GK stars are the Na1 D_ 2 doublet at λ 5900 Å  and the TiO C_ 3Δ-X_ 3Δ (α system, λ_ 00  5170 Å) and the B_ 3Π-X_ 3Δ (γ' system, λ_ 00  6193 Å) bands. For A and B stars, at our R and ΔλΔ x values, the main features at which we can judge the fit within the rectification range are the H1 β and γ lines. This is sufficient to allow students to do projects at the undergraduate honours level in which they carry out and reduce their own BGO spectroscopy to coarsely classify stars to within a few spectral subclasses accuracy. In the process, they will gain valuable experience with the procedures of observational and computational stellar spectroscopy within a Python IDE running on commonplace Windows or Linux computer. This work was made possible by the ACENET research computing consortium (ace-net.ca/) and the Digital Research Alliance of Canada (alliancecan.ca). This work has made use of the VALD database, operated at Uppsala University, the Institute of Astronomy RAS in Moscow, and the University of Vienna (vald.astro.uu.se/). We gratefully acknowledge the valuable guidance of the BGO technician, Tiffany Fields, and useful discussion with Brian Skiff of Lowell Observatory. 00 [Allard & Hauschildt (1995)]phoenix Allard, F. & Hauschildt, P. H., 1995, The Astrophysical Journal, 445, 433 [Carroll & Ostlie (2007)]appendixG Carroll, B.W. & Ostlie, D.A., 2007, An Introduction to Modern Astrophysics, 2^ nd Ed., Pearson - Addison Wesley (San Francisco) [Castelli & Kurucz(1994)]vega Castelli, F. & Kurucz, R.L., 1994,. 
Model atmospheres for VEGA, , 281, 817
[Grevesse & Sauval (1998)]grevs98 Grevesse, N. & Sauval, A.J., 1998, Space Science Reviews, 85, 161
[Harris et al. (2020)]numpy Harris, C.R., Millman, K.J., van der Walt, S.J., et al., 2020, Nature, 585, 357
[Hinkle et al. (2000)]hinkle Hinkle, K., Wallace, L., Livingston, W., Ayres, T., Harmer, D., & Valenti, J., 2003, The Future of Cool-Star Astrophysics: 12th Cambridge Workshop on Cool Stars, Stellar Systems, and the Sun, eds. A. Brown, G.M. Harper, and T.R. Ayres, University of Colorado, p. 851
[Hoffleit & Warren (1991)]hoffleit Hoffleit, D. & Warren, Jr., W.H., 1991, The Bright Star Catalogue, 5th Rev. Ed.
[Kramida et al. (2015)]nist Kramida, A., Ralchenko, Yu., Reader, J., and NIST ASD Team, 2015, NIST Atomic Spectra Database (ver. 5.3), [Online]. Available: http://physics.nist.gov/asd [2015, November 26]. National Institute of Standards and Technology, Gaithersburg, MD
[Kurucz (2006)]solarsed Kurucz, R.L., 2006, arXiv:astro-ph/0605029
[Kurucz (2005)]solarflux Kurucz, R.L., 2005, Memorie della Societa Astronomica Italiana Supplementi, 8, 189
[Kurucz et al. (1984)]solarspec Kurucz, R.L., Furenlid, I., Brault, J., & Testerman, L., 1984, Solar Flux Atlas from 296 to 1300 nm (NSO Atlas 1; Sunspot: NSO)
[Pakhomov, Ryabchikova & Piskunov (2019)]VALD19 Pakhomov, Yu.V., Ryabchikova, T.A., & Piskunov, N.E., 2019, Astronomy Reports, 63, 1010
[Peterson, Dalle Ore & Kurucz (1993)]pdk Peterson, R.C., Dalle Ore, C.M., & Kurucz, R.L., 1993, , 404, 333
[Piskunov & Valenti (2017)]SME2 Piskunov, N. & Valenti, J.A., 2017, Astronomy & Astrophysics, 597, A16
[Ryabchikova et al. (2015)]VALD15 Ryabchikova, T., Piskunov, N., Kurucz, R.L., Stempels, H.C., Heiter, U., Pakhomov, Yu., & Barklem, P.S., 2015, Physica Scripta, 90, 054005
[Short, Lane & Fields (2023)]bgo Short, C.I., Lane, D.J., & Fields, T., 2023, in press
[Short (2016)]short16 Short, C.I., 2016, , 128, 104503
[Short & Bennett (2021)]shortb21 Short, C.I. & Bennett, P.D., 2021, , 133, 064501
[Takeda et al. (2007)]vegaspec Takeda, Y., Kawanomoto, S., & Ohishi, N., 2007, Publ. Astron. Soc. Japan, 59, 245
[Zeidler-K.T. & Koester (1982)]jola Zeidler-K.T., E.M. & Koester, D., 1982, Astronomy & Astrophysics, 113, 173
http://arxiv.org/abs/2307.02614v1
20230705193331
Information-Based Heavy Hitters for Real-Time DNS Data Exfiltration Detection and Prevention
[ "Yarin Ozery", "Asaf Nadler", "Asaf Shabtai" ]
cs.CR
[ "cs.CR" ]
Information-Based Heavy Hitters for Real-Time DNS Data Exfiltration Detection and Prevention Yarin Ozery Ben-Gurion University of the Negev Akamai Technologies, Inc. yarinoz@post.bgu.ac.il Asaf Nadler Ben-Gurion University of the Negev asafnadl@post.bgu.ac.il Asaf Shabtai Ben-Gurion University of the Negev shabtaia@bgu.ac.il ================================================================================================================================================================================================================================================== Data exfiltration over the DNS protocol and its detection have been researched extensively in recent years. Prior studies focused on offline detection methods, which, although capable of detecting attacks, allow a large amount of data to be exfiltrated before the attack is detected and dealt with. In this paper, we introduce information-based heavy hitters (ibHH), a real-time detection method based on live estimation of the amount of information transmitted to registered domains. ibHH uses constant-size memory and supports constant-time queries, which makes it suitable for deployment on recursive DNS servers to further reduce detection and response time. In our evaluation, we compared the performance of the proposed method to that of leading state-of-the-art DNS exfiltration detection methods on real-world datasets comprising over 250 billion DNS queries. The evaluation demonstrates ibHH's ability to successfully detect exfiltration rates as slow as 0.7 B/s, with a false positive alert rate of less than 0.004, and with significantly lower resource consumption compared to other methods. § INTRODUCTION Data exfiltration is performed by malicious actors in order to steal data from a network. Once the data is collected, adversaries often utilize various techniques, including compression and encryption, to obfuscate the information and evade detection when removing it. The methods used to extract data from a targeted network typically involve transmitting it through a covert communication channel. Adversaries may also impose limitations on the data transmission size to minimize the risk of detection. The data exfiltration stage usually marks the final phase in the malware lifecycle <cit.>. The Domain Name System (DNS) protocol <cit.> is a crucial network protocol, which is primarily used to translate easy-to-remember domain names into corresponding IP addresses. While enterprises and government organizations employ various defensive measures to safeguard their users and networks against malicious actors (e.g., antivirus software, firewalls, and network traffic monitoring), the DNS protocol is typically left unblocked and inadequately monitored due to its critical role in facilitating users' Internet access <cit.>. Given the vulnerable and exploitable characteristics of the DNS protocol, malicious actors targeting enterprise and government organizations for data theft often choose to exploit it as a means of data exfiltration and covert communication. This practice is known as DNS exfiltration <cit.>. Initiating DNS exfiltration is a straightforward and cost-effective process, as the attacker simply needs to register a domain (e.g., attacker.com) with a domain name registrar and assign an authoritative name server under their control to that domain. This allows the malware installed on the compromised host to exfiltrate data by encoding it within DNS packets directed towards the registered domain.
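To make the encoding step concrete, the following minimal sketch (ours, not the paper's; the base32 alphabet, chunk size, and sequence-number label are illustrative assumptions, and attacker.com follows the example above) shows how malware on a compromised host could pack stolen bytes into query names under the attacker-registered domain:

import base64

MAX_LABEL = 63    # DNS label length limit
MAX_QNAME = 253   # practical limit on the full query name

def exfil_qnames(payload: bytes, domain: str = "attacker.com"):
    # Base32 keeps the payload within the DNS-safe character set (an assumption
    # made here for readability; real tools use various encodings).
    encoded = base64.b32encode(payload).decode().rstrip("=").lower()
    chunk = min(MAX_LABEL, MAX_QNAME - len(domain) - 8)   # leave room for the counter label
    for seq, start in enumerate(range(0, len(encoded), chunk)):
        yield f"{seq}.{encoded[start:start + chunk]}.{domain}"

for qname in exfil_qnames(b"4111111111111111;12/26;123"):
    print(qname)  # each name would be sent as an ordinary DNS query

Each such query is forwarded to the attacker's authoritative name server, as described next.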
In the DNS query resolution process, queries are forwarded from the client to the authoritative name server associated with the queried domain. As a result, queries aimed at attacker.com are forwarded to the attacker-controlled authoritative name server, enabling successful data exchange between the malware and the attacker. In addition, the attacker can utilize DNS responses to encode messages sent to the malware, enabling a bidirectional covert communication channel through the DNS exfiltration tunnel. This channel can be exploited for other purposes, such as command and control (C&C) operations. There are various ways to exfiltrate data over the DNS: (1) encoding data within the DNS query name (i.e., the target domain name to be resolved); (2) using the DNS query type (qtype) field, which indicates the type of DNS resource record (RR) the client is trying to resolve, in order to encode a small amount of information (up to 16 bits) in the DNS packet; and (3) timing-based exfiltration, in which the query arrival time is used as a means of conveying information <cit.>. Numerous instances of malicious actors leveraging DNS for data exfiltration have been documented, including activities by state-sponsored threat groups <cit.>. Moreover, the use of DNS for data exfiltration and communication has become increasingly prevalent among ransomware actors <cit.>. Therefore, it is no surprise that significant research attention has been dedicated to DNS exfiltration and its detection in recent years <cit.>. Many DNS exfiltration detection methods have been proposed to date, ranging from rule-based techniques <cit.> and statistical techniques <cit.>, to supervised machine learning techniques <cit.>, unsupervised machine learning techniques <cit.>, and even deep learning techniques <cit.>. Despite the extensive body of research on DNS exfiltration, limited attention has been directed towards real-time detection methods. The majority of existing solutions are ill-suited for online deployment, and the proposed approaches have predominantly operated in an offline manner. Offline detection methods are a major concern, as they allow for substantial data exfiltration to occur before the attack is identified and thwarted. In contrast, real-time DNS exfiltration detection enables a rapid response from network operators, effectively reducing the potential damage inflicted by breaches. By quickly detecting and responding to DNS exfiltration attempts in real time, the negative consequences of such attacks can be mitigated. In order to provide true real-time detection capabilities, a solution should not rely on an external data collection process but rather be executed directly on the stream of DNS queries resolved by the resolver. By integrating the detection functionality within the resolver itself, the solution can effectively analyze and classify queries without introducing delays or disruptions to the resolution process. Given that a true real-time detection solution should run directly on the resolver, it is crucial to ensure that the DNS resolution process remains unaffected by the detection mechanism. An effective solution should therefore possess low memory and computational demands while maintaining the ability to process and classify a large volume of queries per second. In this paper, we introduce information-based heavy hitters (ibHH), a novel and interpretable method for real-time DNS exfiltration detection.
Our approach utilizes a threshold-based method to quantify the unique information transmitted through subdomains within DNS queries and raise an alert if the suspected amount of exfiltrated data exceeds that threshold, making it transparent and explainable. To achieve true real-time detection, we employ a fixed-size data structure that efficiently processes a continuous stream of DNS queries, inspired by the concept of identifying heavy hitters in data streams <cit.>. ibHH incorporates the HyperLogLog sketching algorithm <cit.> from the field of big data <cit.> and leverages weighted sampling techniques. This combination enables our method to accurately estimate the volume of information transmitted from clients to registered domains through subdomains. By comparing this estimated quantity against a predefined and configurable detection threshold, ibHH can raise an alert when the transmitted information surpasses the threshold. A detailed description of ibHH is provided in Section <ref>, and an overview of the method, along with a possible DNS exfiltration scenario, is presented in Figure <ref>. In order to enable reproducibility of our results and allow further research, we provide an open-source Python implementation of our proposed solution. Our experiments demonstrate ibHH's high performance and its ability to process over 600,000 queries per second. This makes it well-suited for deployment in large-scale networks where real-time processing and performance are crucial. Additionally, ibHH maintains its efficacy in resource-constrained environments with limited computational and memory resources. Therefore, it can be effectively employed in small-scale environments without straining available resources. An additional advantage of ibHH is that its operation does not rely on labeled training data. This eliminates the need for data annotation and facilitates easier deployment and maintenance of the method. To handle false positive cases, we propose two reputation-based allowlisting approaches, which are described in Section <ref>. We performed a thorough evaluation of ibHH to assess its effectiveness in detecting DNS exfiltration and its ability to handle false positive cases and compared it with three state-of-the-art (SOTA) methods proposed in prior studies. To ensure the reliability of our results, we collected a real-world DNS query dataset containing over 50 billion queries. This extensive dataset serves as a robust foundation for our evaluation, allowing us to make reliable assessments of ibHH's performance. In addition to the real-world dataset, we also performed the evaluation on a publicly available dataset. This enables reproduction of our results and external validation of our findings. To further enhance the validity and robustness of our evaluation, we concluded the assessment by using a second real-world dataset. This additional dataset consists of over 250 billion queries, collected over a period of three weeks. Our evaluation on this dataset is discussed in Section <ref>. We also evaluated the resource utilization of the compared methods, specifically focusing on memory consumption and compute time; the results of this evaluation are presented in Section <ref>. To the best of our knowledge, this is the most extensive and rigorous evaluation conducted in the field. Detailed information about the evaluation process can be found in Section <ref>, where we provide a comprehensive overview of our datasets and the methodology used to assess the performance of ibHH and the compared methods.
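As a concrete illustration of the input representation used throughout the paper, the short sketch below maps a resolved query name to the (primary domain, subdomain) element that ibHH consumes. It is a simplification in which the primary domain is assumed to be the last two labels (a production deployment would consult the public suffix list, e.g., for *.co.uk zones); the formal stream model is given in the background section that follows.

def to_stream_element(qname: str):
    # "a.b.example.com." -> ("example.com", "a.b")
    labels = qname.rstrip(".").lower().split(".")
    return ".".join(labels[-2:]), ".".join(labels[:-2])

assert to_stream_element("a.b.example.com.") == ("example.com", "a.b")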
To summarize, the contribution of our work is: * - A lightweight and simple real-time DNS exfiltration detection method, which is appropriate for large-scale high throughput networks as well as small networks; * Evaluation of our method against state-of-the-art methods on large-scale real-world datasets as well as a publicly available dataset; * Open-source Python implementation of our method, which will enable reproduction of our results and support further research. § BACKGROUND §.§ DNS stream model We model DNS stream queries on the data stream model presented in <cit.>: A Data Stream S is an ordered set of elements x_1, x_2, ... x_n where each element is observed exactly once. Given definition  <ref>, in the scope of this research, each element x_i is comprised of a pair of values (k_i, v_i), where the values are taken from domains K and V, respectively. k_i is called the key of the pair, while v_i is called the subkey of the pair. More specifically, each element in the DNS stream is a DNS query's qname  <cit.>, extracted to (domain, subdomain) pair, where domain is the second-level domain (which we denote as primary domain, or just domain, for the rest of the paper) and subdomain is the concatenation of the rest of the labels of the qname. For example, given the qname a.b.example.com, the domain is example.com., the subdomain is a.b, and the DNS query stream element is (example.com., a.b). §.§ Real-time DNS exfiltration detection The DNS protocol has traditionally been low-demanding  <cit.>; therefore, DNS resolvers are usually deployed on limited hardware  <cit.>. Given that knowledge, we define criteria which must be satisfied in order for a DNS exfiltration detection algorithm to be considered appropriate for real-time detection: * Given a DNS stream of length n, the amount of space required should be sublinear with regard to n, i.e., have space complexity of o(n). * The classification of a given DNS query should have time complexity of Θ(1). These criteria are designed to ensure that a real-time DNS exfiltration detection solution can run on the DNS resolver server, without impacting the DNS resolution throughput of the server or have a large memory footprint, which is needed for the DNS protocol's caching mechanism  <cit.>. § RELATED WORK The topic of detecting data exfiltration over the DNS protocol has been the subject of nearly 30 recent studies <cit.>. Offline detection methods by design. There is a wide variety of methods whose design limits them from being applied in real time. The most notable design limitation is time-based aggregation feature extraction; for example, Nadler et al. <cit.> proposed an anomaly-based isolation forest <cit.> model to detect both high and low throughput DNS exfiltration, based on features extracted over a sliding window of size λ hours; classification is done based on the latest n_s windows, meaning up to n_s*λ hours can pass by the the time of detection. This also means that at any given moment, the number of DNS queries that need to be stored in memory is Ω(λ· n_s), and therefore it inherently cannot run in real time. Ishikura et al. <cit.> proposed a DNS exfiltration detection solution based on what they called cache-property-aware features. 
For each client in an enterprise network, they suggested maintaining a list of the last n Fully Qualified Domain Names (FQDNs) the client accessed, in order to calculate a property called Access Miss Count, denoted as AMC^t, which indicates the number of FQDNs queried by the client in the last t seconds. The authors proposed both a rule-based model and a long short-term memory (LSTM) based model to identify DNS exfiltration activity. The memory requirements of the solution grow linearly with the amount of clients in the network (both for the access list and the LSTM state of each client). Moreover, the solution does not identify which FQDN is suspected as being used for DNS exfiltration but rather only determines if exfiltration has occurred in a given time window (which ranged between 100 and 1,200 seconds in their experiments); this makes it difficult for security operations teams to identify the malicious domain. Paxson et al. <cit.> presented an information-based method that provides an upper bound on the amount of information that can possibly be transmitted via DNS queries. The method groups DNS queries per primary domain and DNS source IP (referred to as “client“ in their paper) on a daily basis, which is followed by lossless compression of the different possible information vectors (query name, query timing, and query type); the minimal value of these is output as the upper bound on the amount of information. The upper bound is then compared with a predefined threshold, and alerts are raised for any communication which exceeds this value. This method is designed to run on window-aggregated data of size w and has the benefit of being able to detect DNS exfiltration regardless of the information vector. However, both the time and space complexity of the method are Ω(w), because all of the queries in the window w need to be kept for the information estimation stage, and the time complexity is Ω(w) due to the compression performed to estimate the information. Compute-intensive detection methods. In recent years, deep learning-based (DL) DNS exfiltration detection methods have been proposed <cit.>. Chen et al. <cit.> proposed a DL architecture based on the combination of a convolutional neural network (CNN) and Long short-term memory networks (LSTM). For the LSTM layer, the authors used one hidden layer with length 128, as they assume that the first 128 characters of the DNS query may be used for exfiltration. Three convolution layers, two max-pooling layers, and one softmax layer were used for the CNN layer. Their model is then trained on labeled DNS queries (benign and malicious). Wu et al. <cit.> proposed TDAE, an autoencoder DL DNS exfiltration detection method based on semi-supervised learning, which means it requires some labeled data. While DL models generally provide high accuracy and automatic feature extraction, they come with the cost of requiring larger training datasets than traditional machine learning methods. More importantly, they are known to be hardware-intensive <cit.>, which makes them unsuitable for deployment on the network perimeter. Supervised learning methods. Several methods that rely on labeled data for training <cit.> have been proposed. This is reasonable for identifying a predetermined set of known DNS exfiltration tools, but as shown in <cit.>, the absence of high-quality publicly-available datasets prevents these methods from identifying unfamiliar DNS exfiltration malware. Our proposed method does not require any labeled data for training. 
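To make the compression-based bound of Paxson et al. discussed above more concrete, the following rough sketch (our simplification, not their implementation, and limited to the query-name vector; the per-window threshold is an arbitrary placeholder) illustrates the core idea of using the losslessly compressed size as an upper bound on the information a client conveyed to a registered domain within a window:

import zlib

def info_upper_bound_bytes(qnames):
    # Concatenate the window's distinct query names and take the compressed
    # size as an upper bound on the information they can encode.
    blob = "\n".join(sorted(set(qnames))).encode()
    return len(zlib.compress(blob, 9))

window = ["aaaa0001.example.com", "aaaa0002.example.com", "deadbeef.example.com"]
if info_upper_bound_bytes(window) > 4096:  # assumed per-client, per-day threshold
    print("flag example.com for this client")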
Real-time detection methods. To the best of our knowledge, only two previous studies focused on real-time detection <cit.>. Qi et al. <cit.> suggested a detection technique based on bigram (subsequent pairs of characters) frequency statistics. The authors described a scoring mechanism based on the expected value of the bigram character frequency as the score of the primary domain. The model is trained on labeled benign and malicious data to determine the score threshold that will make the classifier produce the least number of false positive alerts, which is then used by the classifier in the online phase. While it is reasonable to expect this kind of classifier to be run in real time, it suffers from the same limitation as other supervised learning methods – it has difficulty generalizing to unfamiliar DNS exfiltration techniques. Ahmed et al. <cit.> proposed an unsupervised isolation forest model, which is based on classifying queries on a per-packet basis. Many features are extracted and used, such as the length of the query name, length of the query subdomain, query name entropy <cit.>, count of numerical characters in the query name, count of uppercase letters in the query name, number of DNS query labels, maximum label length, and average label length. While the isolation forest model <cit.> is relatively lightweight in terms of classification time and memory requirements, it is questionable how this method can scale to large-scale networks, which may reach millions of queries per second <cit.>, given the number of features that need to be extracted to perform classification. A table summarizing related studies and their methods' compliance with the real-time criteria defined in Subsection <ref> is provided in Table <ref>. § INFORMATION-BASED HEAVY HITTERS FOR DNS EXFILTRATION DETECTION §.§ Definitions Given a stream of elements S, the information weight of an element (k_i, v_i), denoted I_k_i, v_i, is the quantity of information conveyed by v_i. In a stream of elements S, for a given k_i ∈ K, the distinct information weight I_k_i is the total information conveyed by distinct elements with key k_i, i.e., I_k_i = ∑_{v | (k_i,v) ∈ S} I_k_i,v. Key k_i is a distinct information heavy hitter if its information weight I_k_i is at least an ϵ-fraction of the total distinct information weight of the stream, where ϵ ∈ (0,1), i.e., I_k_i ≥ ϵ·∑_y ∈ K I_y. §.§ Information-based heavy hitters (ibHH) ibHH is a novel method for real-time detection of DNS exfiltration, which is based on identifying domains associated with a large amount of distinct information conveyed through subdomains in a DNS query stream; it is inspired by the work of Afek et al. <cit.>, in which heavy hitter detection algorithms were proposed for the detection of DDoS attacks. Essentially, the task of identifying heavy hitters in a data stream is to find the most frequent keys in a stream, while the purpose of identifying distinct heavy hitters is to find keys that appear with the most distinct subkeys in a stream. Despite the usefulness of solving these two problems for various tasks, such as DDoS detection <cit.> and traffic load balancing <cit.>, they do not fully capture the complexity of DNS exfiltration detection, where there is a need to account for the amount of data being exfiltrated via DNS queries. In order to model this complexity, we introduce the new concepts of information weight (see Definition <ref>) and information-based distinct heavy hitters (see Definition <ref>), which describe elements associated with large amounts of unique information in a stream.
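A toy example may help fix these definitions; the stream, the choice of subdomain length as the information weight (the choice adopted in the next subsection), and ϵ = 0.5 are all illustrative:

from collections import defaultdict

stream = [("example.com", "www"), ("example.com", "www"),   # duplicate pair: counted once
          ("leaky.net", "aaaa0001"), ("leaky.net", "aaaa0002"), ("leaky.net", "aaaa0003")]

distinct_info, seen = defaultdict(int), set()
for dom, sub in stream:
    if (dom, sub) not in seen:            # only distinct (key, subkey) pairs contribute
        seen.add((dom, sub))
        distinct_info[dom] += len(sub)    # I(sub) = |sub|

total, eps = sum(distinct_info.values()), 0.5
heavy = [d for d, w in distinct_info.items() if w >= eps * total]
print(dict(distinct_info), heavy)         # {'example.com': 3, 'leaky.net': 24} ['leaky.net']

Here leaky.net is a distinct information heavy hitter because 24 ≥ 0.5 · 27, whereas example.com is not.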
ibHH quantifies the amount of information conveyed through DNS query subdomains to domains, identifies domains associated with large amounts of unique information, and raises alerts for these domains as suspected of DNS exfiltration. The input for ibHH is a stream of DNS queries, such that for each DNS query subdomain.example.com, the domain and subdomain are extracted to obtain the element (example.com, subdomain). ibHH consists of a fixed-size cache (Counters) whose size (k) is a parameter of the algorithm; a random hash function Hash ∼ U[0,1] that allows us to sample the distinct DNS query stream; detection_threshold, which is a parameter of the algorithm; and a threshold value τ (initialized to 1), which represents the probability of a domain's inclusion in the cache. Each entry in the cache stores an information counter, which calculates the total unique information weight for domain, i.e., I_domain, and seed_domain, whose value is the minimum Hash(domain, subdomain) of all elements with key domain in the stream. §.§.§ Information Quantification According to Definition <ref>, we need to quantify the amount of unique information encoded in subkeys and calculate this amount per key. In order to do so, we need a function I: V → I_V, where I(subdomain) is the information weight of subdomain. There are different ways that the information weight can be defined, but in the scope of this research, we define I(subdomain) = |subdomain|, i.e., the information weight of a subkey is its length. We acknowledge that this choice does not provide an exact quantification of the information encoded in the subdomain, but it imposes an upper bound on the quantity of information that can be conveyed through it, and as will be shown in our experiments in Section <ref>, it provides an adequate approximation for practical purposes. We also experimented with using entropy <cit.> to estimate the information, but the results were inferior. §.§.§ Optimized Counting with HLL++ In order to calculate the exact amount of unique information, for each domain we would need to store a set of all its associated subdomains in the stream, which requires linear space complexity. Since our solution is intended to run on DNS resolvers with limited memory capabilities, we cannot use exact information weight counters. Instead, we employ count-distinct approximation algorithms from the world of big data. The count-distinct problem is well studied <cit.>, and many extremely accurate and high-performing approximation algorithms exist for it. One of the state-of-the-art solutions for this problem is HyperLogLog (HLL) <cit.>. Essentially, HyperLogLog takes advantage of a clever property of multisets: the cardinality (the number of distinct elements) of a multiset of uniformly distributed random numbers can be estimated from the maximum number of leading zeros in the binary representation of the numbers in the set. If the maximum number of leading zeros observed is l, then the number of distinct elements in the multiset is approximately 2^l. In order to obtain a uniformly distributed random number multiset, a hash function is applied to the elements in the original multiset, which in the scope of this paper is the DNS stream. HLL's data is stored in counter arrays, which are called registers, and the size of the arrays depends on the number of bits allocated for the registers, p. In this research, we fixed p at 12 in order to achieve optimal memory consumption while still obtaining highly accurate cardinality approximations.
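Before the detailed walkthrough of the update and eviction rules in the next subsection, the compact sketch below shows how the pieces introduced so far (the Counters cache of size k, the U[0,1] hash, τ, the per-domain seed, and the length-based information counter) fit together. It is a simplified rendering under stated assumptions rather than the reference implementation: a Python set stands in for the HLL++ sketch (a real deployment would use HLL++ registers with p = 12), the reset mechanism and allowlists are omitted, and the parameter values (k = 1000 and a 30,000-byte per-window threshold, i.e., 250 B/s over a 120-second window) are illustrative.

import hashlib

def u01_hash(dom, sub):
    digest = hashlib.sha1(f"{dom}/{sub}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64       # Hash ~ U[0, 1)

class IbHHSketch:
    def __init__(self, k=1000, detection_threshold=30000):
        self.k, self.threshold = k, detection_threshold
        self.tau = 1.0                                      # inclusion probability
        self.counters = {}                                  # dom -> [estimator, seed]

    def _add_info(self, est, sub):
        # subdomain||i trick: the distinct count then approximates the
        # distinct information weight I(sub) = |sub| summed over distinct subs.
        for i in range(len(sub)):
            est.add(f"{sub}|{i}")

    def process(self, dom, sub):
        h = u01_hash(dom, sub)
        if dom in self.counters:                            # cached: update counter and seed
            est, seed = self.counters[dom]
            self._add_info(est, sub)
            self.counters[dom][1] = min(seed, h)
        elif h < self.tau:                                  # sampled into the cache
            est = set()                                     # stand-in for an HLL++ instance
            self._add_info(est, sub)
            self.counters[dom] = [est, h]
            if len(self.counters) > self.k:                 # evict the largest seed
                victim = max(self.counters, key=lambda d: self.counters[d][1])
                self.tau = self.counters.pop(victim)[1]
        # raise an alert if the estimated distinct information exceeds the threshold
        return dom in self.counters and len(self.counters[dom][0]) >= self.threshold

For example, det = IbHHSketch(); det.process("example.com", "aaaabbbb0001") returns False until the estimated information conveyed to example.com within the current window crosses the detection threshold.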
In the proposed method, we use a variation of the original HLL, known as HyperLogLog++ (HLL++) <cit.>, which provides better accuracy and uses less memory than the original design. For each cached domain, we store an instance of HLL++ which is used to approximate the total distinct information weight of the subdomains of its DNS queries. HLL++ approximates the number of distinct elements in a stream, whereas our intention is to approximate the amount of distinct information in a stream; we therefore manipulate the input to HLL++ such that for each DNS query stream element (domain, subdomain), instead of adding subdomain to domain's HLL++ instance, for each integer i in the range [0, length(subdomain)) we add the concatenated string subdomain || i to domain's HLL++ instance. We are thus able to approximate the amount of information conveyed to domain. If the value of the approximate information counter for domain exceeds detection_threshold, an alert is raised. When a (dom, sub) element is processed, we check whether dom is in Counters. If dom is not in the Counters cache, we calculate the value h = Hash(dom, sub); then, if h < τ, we initialize the HLL++ instance for dom, increase its information counter appropriately, and initialize a seed value for it, denoted as seed_dom, to be h. If dom is already cached, we insert the element into dom's HLL++ instance as described above, calculate h = Hash(dom, sub), and update seed_dom to be the minimum of seed_dom and h. If the size of Counters exceeds k after the insertion of dom, we evict the domain with the largest seed value from the cache, and τ is updated to be the evicted key's seed value; thus τ acts as the probability of being included in the cache sample. This idea was first introduced in the work of Gibbons et al. <cit.>. Because seed_dom is updated whenever we process a pair with dom as the key, and because its value can only decrease, the evicted domain is likely to be the least significant information heavy hitter; this approach increases the probability that the cache primarily contains the most significant domains in terms of information volume. ibHH's pseudocode is provided in Algorithm <ref>, and an overview of ibHH is presented in Figure <ref>. §.§ Space and time complexity analysis One of ibHH's benefits is the fact that it has sublinear (in fact, logarithmic) space complexity in the DNS stream length n, as well as constant query classification time complexity, which means that it satisfies the real-time criteria defined in Section <ref>: §.§.§ Memory Analysis For our cache structure, we store k HLL++ instances. Using the (ϵ, δ) model <cit.>, each HLL++ instance requires 𝒪(ϵ^-2 loglog(m_dom) + log(m_dom)) space <cit.>, where m_dom is the number of distinct elements associated with key dom. We denote m = max_dom ∈ Counters m_dom for readability; thus the total space complexity of ibHH is 𝒪(k · (ϵ^-2 loglog(m) + log(m))), which is logarithmic in the cardinality of the data stream and therefore logarithmic in the entire data stream size, given that m = O(n). §.§.§ Time Analysis (processing a query) When processing a DNS stream element (domain, subdomain), we calculate the hash value h = Hash(domain, subdomain), which has a time complexity of O(1). If domain is already cached, we proceed with adding subdomain to domain's HLL++ instance.
A single add operation of the HLL++ algorithm has a time complexity of O(1), and given that a domain name is limited to 255 characters, as described in the original DNS RFC <cit.>, the subdomain is also necessarily limited to 255 characters; therefore inserting an element, which requires at most 255 such add operations, also has a time complexity of O(1). Following the add operation, ibHH performs a count operation to determine if an alert should be raised for the domain. The count operation has a time complexity that depends on the number of register bits allocated to the HLL++ instance, p. We fix p at 12; therefore this operation has a time complexity of O(1). We conclude that processing a query and classifying it has constant time complexity. §.§ Reset mechanism Given the conditional probabilistic nature of the method (which derives from the need to calculate a hash function and compare it to the value of τ), domains that appear earlier in the stream are much more likely to be included than later ones (due to the decreasing nature of τ); thus the confidence interval of the counters decreases over time. In order to avoid missing information heavy hitters that appear later in the stream, we propose a reset mechanism, in which the cache is flushed and reset at constant intervals; this allows us to identify DNS exfiltration events that occur long after the deployment of the algorithm in the network. §.§ Allowlists Patterns of legitimate use that resemble DNS exfiltration make it difficult to distinguish between benign and malicious DNS traffic. Anti-malware agents are known to use the DNS protocol to send signatures of suspected files to the DNS zone of the anti-malware service provider for inspection <cit.>. Other services, such as search engines, social networks, and streaming services, are known to use disposable domains for purposes of signaling <cit.>. In order to handle false positive domains and avoid raising many false alarms, we present two simple, lightweight allowlisting approaches based on the concept of global and local reputation, as described in <cit.>. Globally reputable domains are domains listed in publicly available lists associated with benign domains (such as TRANCO) and therefore should be trusted. Locally reputable domains are domains queried by a large portion of hosts in the local enterprise network and therefore should be trusted. Using the allowlists in a pre-filtering phase is also important for preventing known benign domains that have many distinct subdomains from being cached, ensuring that they do not take up cache space. In our evaluation, we apply the allowlists as a post-filter (instead of a pre-filter) in order to evaluate the effectiveness of the proposed approaches on the false positive rate of ibHH and the compared methods. §.§.§ TRANCO The use of top-ranking domain lists for allowlist purposes is very common in DNS exfiltration detection <cit.>. In this paper, we use TRANCO <cit.>, an approach for ranking websites' popularity, to generate a top 1M list that allows us to filter out popular websites, reducing the number of false positive domains. §.§.§ Peacetime/Wartime Using the peacetime/wartime model, which was first introduced in <cit.>, we execute ibHH in a non-enforcing mode for a limited period of time. During this execution (called peacetime), we assume that the presence of DNS exfiltration traffic in the network is negligible (inspired by the idea of <cit.>), and therefore any domain that is detected as malicious by ibHH during this time is actually benign and should be allowlisted.
We collect these domains in an allowlist called the peacetime allow list. After that, is deployed in an enforcing mode (called Wartime), filtering out domains that appear in the peacetime allowlist. This approach is model-agnostic; therefore we use it for and the compared methods in our evaluation. As will be shown, this approach is simple yet very effective. The amount of time the algorithm should run in peacetime mode depends on the network. We found that even an hour's worth of data results in a useful allowlist; therefore, we recommend running it for at least an hour. § EVALUATION §.§ Overview The evaluation is divided into two parts, namely parameter tuning and comparison with other methods. Parameter tuning. We compare the effect of different detection threshold values on the number of alerted queries and domains, as well as the effect of employing the proposed allowlist techniques. The method of Paxson et al. is tuned similarly and serves as a baseline. In addition, we present two deployment settings for ; one simulates consolidation of data to a single point and its analysis (similar to other offline methods), and the other is a deployment setting that simulates 's execution right on the enterprise DNS gateway, and compare their performance. Comparison with other methods. In this step, we evaluate our method's detection capabilities and compare it with the capabilities of methods proposed in earlier studies, namely the studies of Paxson et al. <cit.>, Nadler et al. <cit.>, and Ahmed et al. <cit.>. Our evaluation includes both detection efficacy comparison (the ability to properly identify DNS exfiltration domains and avoid misclassifcation of benign DNS domains), as well as a performance evaluation, comparing the classification time and memory use of the compared methods. §.§ Datasets The DNS traffic datasets used in this study were collected from DNS servers operated by a large CDN (content delivery network) provider. In addition, a publicly available dataset published by Ziza et al. in November 2022 <cit.> is used as an independent dataset. First dataset. The first dataset, denoted as DS_f, consists of 50.85 billion DNS queries from 753 real-world enterprise organizations whose traffic is monitored by the CDN provider. The queries were collected over the course of eight days, beginning on December 28, 2021. Accordingly, the average number of queries per hour is 260 million. The dataset contains one domain suspected of DNS exfiltration (as alerted by the CDN provider's proprietary DNS exfiltration algorithm), <joinsanjose[.]com>. Due to the scarcity of data exfiltration events and the fact that the dataset consists of monitored data, we presume that the rest of the dataset has at most just a negligible amount of malicious traffic, and we thus treat it as benign. Identifiable dataset. We sample a subset of DS_f, denoted as DS_p, which contains 5.06 billion DNS queries. In contrast to the full dataset, all of the DNS queries in DS_p can be attributed to a specific host of nearly 130,000 specific IP addresses. The IP addresses are hashed to preserve the privacy of the end users. The DS_p subset is generated to evaluate the detection methods' abilities to detect compromised hosts. Although DS_p accounts for 10% of the total traffic obtained, this dataset is still larger than most datasets used in previous studies. <joinsanjose[.]com> queries are not included with this dataset. Real-world dataset. 
The second dataset, denoted as DS_r, consists of 255 billion DNS queries collected in a similar fashion to the collection process of DS_f and is provided by the same source. The queries were collected over the course of 21 days, beginning on February 28, 2023. Accordingly, the number of queries per hour is 507 million; to the best of our knowledge, this is the largest dataset ever used to evaluate DNS exfiltration detection methods. This dataset will be used in our real-world evaluation of and the compared methods in Section <ref>. Public dataset. We perform our evaluation on a third, publicly available dataset <cit.>. This dataset was constructed by collecting more than 35M DNS queries from an Internet service provider's (ISP) DNS server over the course of 26 hours. Accordingly, the average number of queries per hour is 1.9 million. The dataset also contains exfiltration queries to three distinct domains. The queries are generated by the Iodine <cit.> and DNSExfiltrator tools<cit.>, both freely available on GitHub. For the rest of the paper the public dataset is denoted as ZIZA. A summary of the datasets used in this study is provided in Table <ref> §.§ Parameter tuning The objective of this tuning phase is to find the lowest detection threshold that produces a practical number of false positive domains. While this definition may vary between different enterprises, in this study we consider 15 false alerts per week (just over two alerts per day) to be practical. The detection threshold was tuned between 0 bytes/sec (B/s) and 400 B/s. As a baseline, we chose the method of Paxson et al. <cit.>, a similar information-based threshold detection method, which is the state-of-the-art for such methods. 's cache size was fixed at 1,000, and the reset interval was fixed at 120 seconds. In addition, we examine the effect of using the allowlisting methods we described in Section <ref>. To do so, we generate a PT allowlist for both and the method of Paxson et al. method using data from the first day. As can be seen in Figures  <ref>  <ref>, consistently produces less false positive detections than the method of Paxson et al., across all allowlist combinations. with the TRANCO and PT/WT allowlists obtains a total of 10 false alerts over all 753 organizations, but it also results in a high threshold of 250 B/s. In Figure  <ref>, it can be seen that a much lower threshold of 15 B/s results in about 80 alerts over the course of seven days (an average of 0.015 alerts per day per organization), which is an acceptable alert rate for many organizations. We can also see an exponential growth in the number of alerted domains in the lower threshold values. §.§ Analysis of alerted domains Based on the results presented in Section <ref>, we manually inspect the domains for which raised an alert (see Table <ref> which summarizes our findings). This analysis is performed on the positive alerts produced by the + TRANCO + PT/WT with a detection threshold of 250 B/s configuration. As noted, 10 domains out of 43 million unique domains in the dataset (0.00002% of the unique domains in the dataset) were marked as suspected for DNS exfiltration. Out of the 10 domains, six domains are registered and operated by known security vendors. <sophosxl[.]com>, <appsechcl[.]com>, <barracudabrts[.]com>, <dnsbl[.]org>, and <softsqr[.]com> are domains used by DNS anti-malware list (DNSAML) service providers <cit.>. <cnr[.]io> is a domain used for honeypot <cit.> services. 
Security vendors' AV clients send DNS requests with their current signature ruleset version or suspicious file hashes encoded within the DNS request subdomains, which results in a high number of subdomains  <cit.>. Three domains, <pldtgroup[.]net>, <cnsevrx[.]com>, and <kfcmsp[.]com>, are domains associated with a large number of unique (and occasionally, long) subdomains; therefore, the method marked them as suspected DNS exfiltration domains. We looked these domains up with the WHOIS  <cit.> protocol (a query/response protocol that is widely used to obtain information about registered domain names). All three domains were registered at least five years ago (which considerably lowers the likeliness that they have been used for DNS exfiltration), and <pldtgroup[.]net> is registered to the Philippine Long Distance Telephone Company, a reputable company. We thus conclude that these cases are false positive alerts. The last alerted domain, <joinsanjose[.]com>, was also classified as DNS exfiltration by the CDN provider's algorithm. We were not able to conclude its maliciousness, and therefore we classify it as unknown, until further investigation is done. §.§ Mitigating the need for data consolidation To provide real-time DNS exfiltration detection, the solution needs to avoid collecting and consolidating data into a single point. In order to demonstrate that satisfies this requirement, we compare two deployment settings: * Global - All the enterprise's data is processed by a single instance of (simulates consolidation of data to a single point). * Local - An instance of is allocated per enterprise (simulates deployment of right on the DNS gateway of the enterprise, i.e., data is not consolidated). Similar to Section <ref>, we tune the detection threshold and compare the number of alerted domains produced for each deployment setting. TRANCO and PT/WT allowlists are applied in both settings. As can be seen in Figure <ref>, the results are almost identical; thus we conclude that both deployment settings are equally viable. Therefore for , the data does not need to be consolidated, and it can be deployed right on the enterprise DNS gateway. §.§ Sensitivity analysis We perform a sensitivity analysis of ’s detection window size and cache size parameters, while setting the detection threshold at 10 B/s. The values examined for the window size are 120, 600, and 1,800 seconds, and the examined cache size values are 100, 1,000, and 10,000. This analysis complements the detection threshold tuning step described in Section <ref>. It can be seen that increasing the detection window causes a reduction in the number of alerts. This is expected, as a longer detection window results in a higher detection threshold for the window. This can have both a positive effect on the detection of false positives (reducing the number of false positive alerts) and a negative effect on the detection of true positives (when the attacker exfiltrates data in short bursts), as can be seen in the case in which the detection window is 1,800 and the cache size is 100. Increasing the cache size also affects the number of alerts. This can be attributed to the fact that in the case of a smaller cache size, a cached domain is more likely to be evicted, and thus it might not remain in the cache long enough to be considered an information heavy hitter. On the other hand, an increased cache size also means that more memory is required to store the cache. 
Based on this analysis, we recommend setting the detection window in the range of 120 to 600 seconds and the cache size between 1,000 and 10,000 entries (see Table <ref> for a summary of the results of our analysis). §.§ Compared methods §.§.§ ibHH for DNS Exfiltration Detection This is the method proposed in this paper. The reset interval was fixed at 120 seconds, and the cache size was set to 1,000 entries. §.§.§ Practical Comprehensive Bounds on Surreptitious Communication over DNS The method of Paxson et al., which was used in the parameter tuning step (Section <ref>), is also used to evaluate our proposed method; it is denoted as Paxson for the rest of the paper. Paxson is selected as it is the SOTA information estimation-based DNS exfiltration detection technique and the most similar to our proposed method. §.§.§ Detection of Malicious and Low Throughput Data Exfiltration Over the DNS Protocol Nadler et al. <cit.> presented an unsupervised anomaly detection model based on the isolation forest algorithm <cit.>. In this method, DNS queries are collected from recursive DNS servers at a frequency of λ time units, feature extraction is performed over a window of size λ·n_s, and the features are fed to the pre-trained isolation forest model. Despite it not being a real-time solution, we chose to compare our method's performance to it, since it has the ability to detect DNS exfiltration campaigns with exfiltration rates as slow as 0.11 B/s, up to six hours after the traffic is collected. To the best of our knowledge, this is the SOTA in terms of near-real-time detection capabilities. In the remainder of the paper, we refer to this method as IF. We configured IF according to the authors' recommendation, setting λ = 60 and n_s = 6. §.§.§ Real-Time Detection of DNS Exfiltration and Tunneling from Enterprise Networks Ahmed et al. <cit.> presented an unsupervised anomaly detection model for real-time detection of DNS exfiltration from enterprise networks, also based on the isolation forest algorithm. As noted in Section <ref>, it is a true real-time method, and to the best of our knowledge, it is the SOTA real-time detection solution. In the remainder of the paper, this method will be referred to as RT-IF. We configured RT-IF according to the authors' recommendation, fixing the number of trees at two and limiting the tree height to 18. All of the methods described were implemented in Python, and the experiments on the DS_p dataset were performed on Azure Databricks Runtime version 10.3 <cit.>. IF and RT-IF were both implemented using SynapseML <cit.>. §.§ Methodology DS_p is divided into training, peacetime, and test sets. The training set consists of 790M queries from 112K unique hosts from the first day in the dataset. The peacetime set consists of 720M queries from the next day in the dataset. The rest of the dataset (six days of data) makes up the test set, with a total of 3.8B queries from 115K unique hosts. Malicious DNS exfiltration traffic is synthetically generated, similarly to previous research <cit.>. The attacks are generated based on three well-known DNS exfiltration tools and attacks: * Iodine <cit.> - Iodine is an open-source DNS tunneling tool, mainly used to bypass Wi-Fi paywalls, like the ones that can be found in hotels. This simulates high throughput DNS exfiltration campaigns. * FrameworkPOS <cit.> - The FrameworkPOS malware was used in a targeted attack on the American retailer Home Depot. Over the course of six months, the malware leaked the details of 56 million credit cards.
The malware extracted credit card information from compromised machines' memory, encoded the data, and sent it to a remote server in the following format: <encoded_credit_card>.domain.com. We generate FrameworkPOS queries at a frequency of three queries per second to simulate the original malware's throughput of 56 million credit cards in six months. * Backdoor.Win32.Denis - The Trojan malware Backdoor. Win32.Denis (which will be referred to as Denis for the remainder of the paper) was used in Operation Cobalt Kitty, a large-scale Asian APT <cit.>. Denis enables an intruder to manipulate the file system and run arbitrary commands and loadable modules. Denis uses the DNS as a bidirectional C&C communication channel with its operator. There are 16 predefined instructions to allow the C&C operator to take control of a compromised machine. In this paper, we simulate the malware's keep-alive instructions. We generate the requests every 1.5 seconds, conforming to Cobalt Kitty's operation security report analysis. In each experiment, 1% of the client hosts (i.e., 1,300 hosts) are sampled. Queries are generated using one of the DNS exfiltration tools, with the sampled client hosts as the source of the DNS queries. We evaluate the detection abilities of each of the methods based on the following metrics: number of overall hosts alerted (i.e., number of hosts suspected as being infected), hosts' TPR (true positive rate, hosts which are truly infected and for which alerts are raised for), hosts' FPR (hosts which are not infected but for which alerts are raised). Each host is infected with a random number of malicious queries in the range of 100 to 10,000, where the queries are injected at random start times in the test set. To identify the infected hosts, each host is associated with a distinct malicious primary domain. We want to compare the methods' abilities to detect compromised client hosts. Therefore the compared methods are trained with different acceptable false positive rate (FPR) values: 0.01 (1,300 clients), 0.001 (130 clients), 0.0001 (13 clients), 0.00001 (1-2 clients). In each experiment, IF and RT-IF are trained by setting the isolation forest's contamination rate to be the experiment's acceptable FPR value. For and Paxson, the algorithms are executed on the training dataset, and the detection threshold is tuned to be the minimum value for which the acceptable FPR is achieved. For each of the compared methods, we generate a peacetime allowlist by feeding the peacetime dataset to the trained model. The peacetime allowlist is composed of alerted domains in the peacetime dataset. The TRANCO allowlist is applied to all the compared methods. §.§ Results A summary of the results is presented in Table <ref>, which presents the compared methods' detection abilities with different acceptable FPR values. For an acceptable FPR of 0.01 (1%), 's detection threshold is 0.7B/s, meaning it can detect exfiltration rates as slow as 0.7 B/s while producing 1% FP alerts on the training set. To provide context, based on our evaluation, it can be inferred that the detection time for DNS exfiltration is quick. In fact, considering the Track 2 format commonly used for credit card data, which requires at least 40 bytes of information per credit card <cit.>, our method is capable of detecting and raising an alert within a timeframe that would typically allow for the exfiltration of at most three credit card details. 
This highlights 's effectiveness and efficiency in limiting the potential impact of data exfiltration incidents. It should be noted that the FPR on the test set is just under 0.004, which is less than the acceptable FPR of the training set. This difference can be attributed to the allowlists, and it is observed in the rest of the methods. It should be noted that for the very low acceptable FPR value of 0.00001, RT-IF is unable to detect any exfiltration events except for Iodine, while is able to detect the slower exfiltration rate domains used for FrameworkPOS and Denis communication. Unsurprisingly, the lower the acceptable FPR is set, the higher the detectable exfiltration rate (DER) gets.   and Paxson's have a similar DER, which is to be expected, as they are both based on a similar idea of quantifying the amount of information and comparing it against a predefined threshold. Measuring the DER value of IF and RT-IF is tricky, since they are traffic-based and payload-based machine learning models, respectively. IF collects features in a sliding window of size λ time units, and classification is performed based on the last n_s time windows. This means that exfiltration events are detected between λ and n_s ·λ time units after they occur (disregarding the time it takes to consolidate the logs into a single point, as well as the feature extraction and classification time). In the authors' recommended setup, λ is set to 60 minutes, and n_s is set to six, meaning up to six hours can pass until the detection time. Still, the theoretical detectable exfiltration rate is as slow as 0.11 B/s, making IF a practical complementary solution for offline analysis. Because RT-IF operates on on single DNS queries, it can detect DNS exfiltration events on the first packet inspected (so theoretically, it can stop DNS exfiltration by the time it starts); in practice we can see that only the model trained for an acceptable FPR of 0.01 achieves a competitive detection rate, and the method becomes less practical for acceptable FPR values of 0.001 and lower. While this method may be useful on small networks where 1% FPR may be acceptable, it is not practical for large-scale networks like the one the dataset used in this research comes from, where an FPR of 1% results in over 40,000 false alerts per week. §.§ Evaluation on the ZIZA dataset We perform an additional evaluation on a public dataset, ZIZA (described in Section  <ref>). Because ZIZA was collected over the course of 26 hours, we use the first 10 hours as a training set; the following two hours are used to generate the peacetime allowlist, and the final 14 hours are used as the test set. We train the different detection methods similarly to the way described in Section  <ref> and evaluate their ability to detect DNS exfiltration domains with different acceptable FPR values. The methods were compared on a single machine in order to evaluate the resource use of each method. The results on the public dataset are similar to those obtained on DS_p. For an acceptable FPR of 1%, the detection threshold of ibHH is 0.6 B/s, and it is able to detect all three DNS exfiltration domains with 62 false positive domains. While this is quite a large number, it is significantly better than that obtained by both IF (140 false positive domains), RT-IF (119), and Paxson (87). For the acceptable FPR of 0.01%, ibHH has only a single false positive while still detecting all the exfiltration domains. IF is able to detect only one malicious domain, and Paxson detects two. 
RT-IF is unable to detect any malicious activity in this scenario. See Table <ref> for a summary of the results. §.§ Real-world evaluation We perform a real-world evaluation of the compared methods on the DS_r dataset described in Section <ref>. We partition the dataset into training, peacetime allowlist generation, and test sets, similar to the partitioning described in Section <ref>. The first seven days serve as training data, followed by one day of peacetime allowlist generation, and the rest of the data is dedicated to testing the trained models. Because this evaluation is performed on non-labeled real-world DNS queries, we cannot provide TPR and FPR estimations. Instead, we perform manual inspection of the alerted domains and queries, determine whether the alerts are true positives (TPs) or false positives (FPs), and provide these counts. All methods were trained under an acceptable FPR of 0.001 (0.1%), as it showed promising results for all the compared methods in the synthetic dataset evaluation in Section <ref>. ibHH's detectable exfiltration rate (DER) obtained in the training phase is 6 B/s (18 credit card numbers), while Paxson's DER is 11 B/s (33 credit card numbers). ibHH generated a total of 17 alerts, of which two are true positive detections (so 15 are false positive alerts; this is the lowest number of FP alerts among the compared methods). IF and Paxson both successfully detected the two domains, but with more FP alerts, while RT-IF successfully detected only one of the domains. The results show that ibHH and IF have a similar number of TP queries, where IF classifies about 400 more TP queries, however at the cost of over 55,000 more FP queries than ibHH. An analysis of the TP domains is provided in Section <ref>, and a summary table of the results is provided in Table <ref>. §.§.§ True Positive Domain Analysis The first TP domain we inspect is cymulatedlp[.]com, which was detected by all models except RT-IF. This domain name is registered by a cybersecurity vendor of a similar name and is used by enterprises to simulate exfiltration campaigns in order to assess their data exfiltration defense mechanisms. While this is a simulated attack, we treat it as a true positive detection given the fact that the data is unlabeled. This attack consists of quite short subdomains (of length 64). The average time between two consecutive queries is approximately two seconds. This might explain why RT-IF was not able to detect it, as it simulates a rather stealthy DNS exfiltration campaign. The second domain, detected by all methods, is q2t[.]nl. The data appears to be base64 encoded, with subdomain lengths ranging between 30 and 144, and queries are sent in quick succession (an average of 0.01 seconds between consecutive queries). This domain represents a high-throughput exfiltration campaign and is detected by all methods. We discovered via WHOIS that the domain is owned by your-freedom[.]net <cit.>, a VPN provider that supports tunneling over DNS, which further supports the classification of the domain as a TP. §.§ Resource Use We evaluate the average runtime and average memory consumption of each method on a single machine with a 6-core CPU and 16 GB of RAM, representing a high-performance DNS server hardware specification. IF and RT-IF were both implemented with the scikit-learn library <cit.>. We measured the total runtime for each method to train, generate the peacetime allowlist, and classify the test dataset. The ZIZA dataset was used in this evaluation.
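To complement the resource-use methodology described above, the following sketch illustrates one way to measure runtime and peak memory of a streaming detector over a query log. The toy detector merely sums the byte lengths of distinct subdomains per registered domain; it is a simplified stand-in for illustration only, not the actual ibHH sketch structure or any of the compared models, and the threshold, window, and class names are assumptions.

import time
import tracemalloc
from collections import defaultdict

class ToyStreamingDetector:
    """Stand-in per-domain information counter (NOT the real ibHH data structure):
    sums the byte lengths of the distinct subdomains seen for each registered domain."""
    def __init__(self, threshold_bytes_per_s=0.7, window_s=600.0):
        self.threshold = threshold_bytes_per_s
        self.window = window_s
        self.seen = defaultdict(set)          # registered domain -> distinct subdomains

    def process(self, ts, qname):
        labels = qname.rstrip(".").split(".")
        registered = ".".join(labels[-2:])    # crude eTLD+1 approximation
        self.seen[registered].add(".".join(labels[:-2]))
        info = sum(len(s) for s in self.seen[registered])
        return info / self.window > self.threshold   # True -> raise an alert

def measure(detector, stream):
    tracemalloc.start()
    t0 = time.perf_counter()
    alerts = sum(detector.process(ts, q) for ts, q in stream)
    runtime = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return runtime, peak, alerts

stream = [(i, f"label{i % 50:04d}.example.com") for i in range(100_000)]
runtime, peak, alerts = measure(ToyStreamingDetector(), stream)
print(f"runtime: {runtime:.2f}s, peak memory: {peak / 1e6:.1f} MB, alert hits: {alerts}")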
ibHH and RT-IF, as methods with real-time capabilities, both use about 1.5 MB of memory, but ibHH is significantly faster, with an average runtime of 58 seconds compared to the 857-second runtime of RT-IF. This can be explained by the fact that RT-IF needs to generate a large number of features to classify a DNS query. The offline methods (Paxson and IF) have a notable disadvantage in that they need to store all the queries in a specified inspection window. This requirement leads to a considerably larger memory footprint and longer runtime, rendering them unsuitable for real-time deployment on the resolver. Implementing these methods on the resolver would negatively affect the rate at which the DNS resolver performs DNS resolution. A summary of the resource analysis is provided in Table <ref>. § DISCUSSION §.§ Limitations §.§.§ Allowlisting To distinguish between benign and malicious data exchange over DNS, we described two allowlisting methods in Section <ref>. As noted in Figure <ref>, our allowlisting approaches significantly reduce the number of FP alerts. Difficulty in distinguishing between malicious and benign DNS exfiltration traffic is common among DNS exfiltration detection algorithms, and allowlisting methods are often employed to cope with this issue <cit.>. Maintenance of these lists can be automated, thus easing the process of incorporating them in the DNS exfiltration detection pipeline. §.§.§ Resilience against an aware attacker An aware attacker can circumvent detection by configuring malware to exfiltrate data at rates below the detection threshold. While this is a valid concern, we show that exfiltration campaigns as slow as 0.7 B/s can be detected by ibHH with less than 0.04% of benign domains misclassified. An IT organization that wants to detect very slow campaigns may choose to lower the detection threshold; this comes at the cost of possibly having to deal with more false alarms. Another challenge is the attacker's ability to use encrypted DNS requests, such as DNS over TLS <cit.> and DNS over HTTPS <cit.>, to evade detection. Enterprises can deal with this issue by blocking encrypted DNS traffic that is not resolved by an enterprise DNS resolver, as recommended by the National Security Agency <cit.>, and only allowing encrypted DNS if it is resolved by the enterprise's internal recursive DNS resolver (which allows inspection of the raw DNS packet). While ibHH primarily focuses on detecting DNS exfiltration through the query name, it is important to note that attackers can use other information vectors, such as the query type and timing, and thus avoid detection by ibHH. However, these vectors have limitations, such as restricted information capacity and vulnerability to inaccuracies. Despite these alternatives, the query name remains the most commonly exploited vector. By effectively detecting information conveyed through the query name, ibHH serves as a valuable defense against DNS exfiltration. To the best of our knowledge, the query name has been the primary (or only) information vector utilized by all publicly available DNS exfiltration tools and known DNS exfiltration campaigns. The method of Paxson et al. is designed to detect DNS exfiltration events regardless of the information vector, which is a strength of that approach. Another way the attacker can try to bypass ibHH is to break the exfiltrated data down into single-character queries (e.g., instead of sending “exfiltration.domain.com," the attacker would send “e.domain.com," “x.domain.com," ... , “n.domain.com").
Because ibHH only accounts for distinct subdomains, it might miss this exfiltration scenario. However, it should be noted that this approach also results in a significantly lower rate of data exfiltration. In addition, DNS resolvers have a cache structure <cit.> that stores resolved DNS queries for a limited time. That can be problematic for the attacker, because subsequent requests for the same query will be served from the cache instead of being sent to the authoritative DNS server. Finally, the number of requests required to exfiltrate a given message increases linearly with the message size. This increases the risk of DNS queries failing to reach the attacker because of DNS throttling, which is widely employed on public DNS servers <cit.> (and can easily be implemented on internal enterprise DNS resolvers). §.§ Wildcard subdomain resolution Multiple services use subdomains to host multiple services or deliver user-generated content (UGC). Notable examples include dropbox.com and googledocs.com, which organize their content under different subdomains and URLs for better isolation and network load distribution. These UGC services are sometimes incorrectly classified as DNS exfiltration despite being legitimate, due to their extensive use of unique subdomains, as reported by <cit.>. This is a limitation of all existing methods, given their inherent inability to distinguish between legitimate and malicious cases of DNS exfiltration, and it also applies to our proposed method, which attempts to overcome this limitation by using allowlists. This situation is suboptimal but arguably acceptable, since the rate of false alerts reported for our real-world, representative dataset of 750 organizations indicates that there are, on average, fewer than 0.1 such cases per organization per week. §.§ Deployment considerations Given the low time and memory complexity of ibHH (theoretically proven in Section <ref> and shown in practice in Section <ref>), an organization may benefit from deploying multiple ibHH instances with different threshold values in order to cover different potential data exfiltration attacks over DNS and improve performance. This approach is also aligned with our evaluation results (see Section <ref>), where we present different models with different detection thresholds. § CONCLUSIONS AND FUTURE WORK In this work, we present ibHH, a simple yet effective method capable of both detecting DNS exfiltration events in real time, by estimating the amount of unique information conveyed to registered domains through query subdomains, and providing explainable results (an alert is raised only if the suspected exfiltration rate exceeds a predefined threshold). We perform a comprehensive evaluation, comparing the proposed method's performance to that of prominent state-of-the-art methods, including a state-of-the-art real-time machine learning based solution, which ibHH was shown to outperform. In the future, we plan to adapt ibHH for the detection of cross-domain exfiltration events, for example, by changing the information quantification so that it is performed per source user IP instead of per target registered domain. We also plan to explore a possible variation of ibHH capable of detecting other information vectors used for DNS exfiltration (such as data exfiltration based on the query type field), as well as considering the DNS response (which can help in the detection of bidirectional communication). We also plan to deploy ibHH on DNS resolvers and evaluate its performance on online DNS query streams.
http://arxiv.org/abs/2307.02086v1
20230705074946
A $p$-step-ahead sequential adaptive algorithm for D-optimal nonlinear regression design
[ "Fritjof Freise", "Norbert Gaffke", "Rainer Schwabe" ]
math.ST
[ "math.ST", "stat.TH", "62L05 (Primary), 62F12, 62J12 (Secondary)" ]
A p-step-ahead sequential adaptive algorithm for D-optimal nonlinear regression design Fritjof Freise^1, Norbert Gaffke^2 and Rainer Schwabe^2 13cm^1University of Veterinary Medicine Hannover and ^2University of Magdeburg August 1, 2023 =================================================================================================================================================== Under a nonlinear regression model with univariate response an algorithm for the generation of sequential adaptive designs is studied. At each stage, the current design is augmented by adding p design points where p is the dimension of the parameter of the model. The augmenting p points are such that, at the current parameter estimate, they constitute the locally D-optimal design within the set of all saturated designs. Two relevant subclasses of nonlinear regression models are focused on, which were considered in previous work of the authors on the adaptive Wynn algorithm: firstly, regression models satisfying the `saturated identifiability condition' and, secondly, generalized linear models. Adaptive least squares estimators and adaptive maximum likelihood estimators in the algorithm are shown to be strongly consistent and asymptotically normal, under appropriate assumptions. For both model classes, if a condition of `saturated D-optimality' is satisfied, the almost sure asymptotic D-optimality of the generated design sequence is implied by the strong consistency of the adaptive estimators employed by the algorithm. The condition states that there is a saturated design which is locally D-optimal at the true parameter point (in the class of all designs). § INTRODUCTION Sequential adaptive design and estimation in nonlinear regression models were considered by Lai and Wei <cit.>, Lai <cit.>, and Chen, Hu and Ying <cit.>. In those fundamental contributions fairly general conditions on the adaptive design ensure consistency and asymptotic normality of adaptive least squares or maximum quasi-likelihood estimators. However, it remains open whether particular sequential adaptive design schemes are covered, like the adaptive version of the algorithm of Wynn <cit.> for D-optimal design, which we have called the `adaptive Wynn algorithm'. Pronzato <cit.> was the first who studied the asymptotics of the adaptive Wynn algorithm, that is, the asymptotic properties of the adaptive designs and adaptive least squares and maximum likelihood estimators under the algorithm. Crucial assumptions in that paper are a finite experimental region and a condition of “saturated identifiability” (see below) on the regression model. Extensions of results in <cit.> to any compact experimental region, and further results on the adaptive Wynn algorithm have been obtained by the authors in <cit.> and <cit.>. In the present paper a sequential adaptive design algorithm is proposed and studied which we call “p-step-ahead algorithm” since at each step a batch of p further design points is collected. For a special model a related concept of “batch sequential design” was employed by Müller and Pötscher <cit.>. An idea of the algorithm was sketched by Ford, Torsney and Wu <cit.>, p. 570, in the introduction of their paper. Note that the adaptive Wynn algorithm collects one design point at each step and was therefore called “one-step ahead algorithm” in <cit.>. Actually, in dimension p=1 both algorithms coincide. 
When p≥2, a practical advantage of the adaptive p-step-ahead algorithm over a strictly sequential 1-step-ahead sampling scheme like the adaptive Wynn algorithm might be that it allows some parallel response sampling (batches of size p) and thus reduces the total duration of data collection. The paper is organized as follows. In Section 2 the general framework is outlined, and various conditions on the nonlinear regression model are introduced which will later be assumed for some results but not throughout. Some examples of frequently used nonlinear models are discussed. In Section 3 the p-step-ahead algorithm is described. In Section 4 some basic asymptotic properties of the design sequence generated by the algorithm are derived. Sections 5 and 6 address consistency and asymptotic normality of the adaptive least squares and maximum likelihood estimators in the algorithm. An appendix contains supplementary results to two examples (parts A.1 and A.2 of the appendix) and the proofs of the lemmas and theorems (parts A.3 and A.4). § GENERAL FRAMEWORK Let a nonlinear regression model be given with univariate mean response function μ(x,θ), x∈ X, θ∈Θ, where X and Θ are the experimental region and the parameter space, respectively. Also, a family of ℝ^p-valued functions f_θ, θ∈Θ, defined on X is given such that the p× p matrix f_θ(x) f_θ^(x) constitutes the elemental information matrix of x∈ X at θ∈Θ. Note that a vector a∈ℝ^p is written as a column vector and a^ denotes its transposed which is a p-dimensional row vector. An approximate design, for short: design, is a probability measure ξ on X with finite support. The support of a design ξ is denoted by supp(ξ), which is a nonempty finite subset of X. The weights ξ(x) for x∈ supp(ξ) are positive real numbers with ∑_x∈ supp(ξ)ξ(x) =1. The information matrix of a design ξ at θ∈Θ is defined by M(ξ,θ) = ∑_x∈ supp(ξ)ξ(x) f_θ(x) f_θ^(x), which is a nonnegative definite p× p matrix. Throughout, as in <cit.> the following basic conditions (b1) to (b4) are assumed. (b1) The experimental region X is a compact metric space. (b2) The parameter space Θ is a compact metric space. (b3) The real-valued mean response function (x,θ)↦μ(x,θ), defined on the Cartesian product space X×Θ, is continuous. (b4) The family f_θ, θ∈Θ, of ℝ^p-valued functions on X satisfies: (i) for each θ∈Θ the image f_θ( X) spans ℝ^p; (ii) the function (x,θ)↦ f_θ(x), defined on X×Θ, is continuous. More specific conditions will be employed later which, however, will not be assumed throughout. Next we formulate some of them: condition (SI) on “saturated identifiabiliy” as in <cit.>, condition (GLM) taking up particular features of a generalized linear model as in <cit.>, and a slightly stronger condition (GLM^*). Condition (SI) For all pairwise distinct points x_1,…,x_p∈ X the ℝ^p-valued function on Θ, θ↦(μ(x_1,θ),…,μ(x_p,θ))^, is an injection, that is, if θ,θ'∈Θ and μ(x_j,θ)=μ(x_j,θ') for j=1,…,p, then θ=θ'. Condition (GLM) f_θ(x) = ψ(x,θ) f(x) for all (x,θ)∈ X×Θ, where ψ : X×Θ⟶( 0 , ∞) and f: X⟶ℝ^p are given continuous functions. Condition ( GLM^*) (GLM) holds and, moreover: Θ⊆ℝ^p, μ(x,θ)=G(f^(x) θ) for all (x,θ)∈ X×Θ, where G : I⟶ℝ is a continuously differentiable function on an open interval I⊆ℝ with positive derivative G'>0, and f^(x) θ∈ I for all (x,θ)∈ X×Θ. A further condition refers to some given parameter point ∈Θ, which might be called a condition of “saturated local D-optimality at ”, abbreviated by (SD^*)(). 
Note that a design ξ^*_ is called locally D-optimal at if ξ^*_ maximizes M(ξ,) over the set of all designs ξ. A design is called saturated if its support size is equal to p. Condition ( SD^*)() There exists a locally D-optimal design at which is saturated. For some results a weaker condition (SD)() will be employed, which addresses the saturated designs maximizing the D-criterion locally at over the set of all saturated designs. For short, we call such designs “locally D-optimal saturated designs at ”. Note that a locally D-optimal saturated design at has uniform weights, since for any saturated design η with support points x_1,…,x_p∈ X one gets from (<ref>) M(η,) = (∏_j=1^pη(x_j)) ([f_(x_1),…,f_(x_p)])^2, and the product of the weights is maximized iff η(x_j)=1/p for all j=1,…,p. Thus, a locally D-optimal saturated design at is an equally weighted design on p points x_1^*,…,x_p^*∈ X which maximize ([f_(x_1),…,f_(x_p)])^2 over x_1,…,x_p∈ X. This motivates the iteration rule of the p-step-ahead algorithm, see (<ref>) in Section 3. Condition ( SD)() The information matrices M(η^*,) of all locally D-optimal saturated designs η^* at coincide, and are thus equal to one matrix (), say. As it is well-known, the locally D-optimal information matrix at is unique, M_*() say. Therefore, if condition (SD^*)() holds then condition (SD)() holds as well and ()=M_*(). There are several relevant nonlinear models which satisfy condition ( SD^*)() for most or all parameter points , and locally D-optimal saturated designs at are known. Some models are presented in the following three examples. Moreover, the models in these examples satisfy condition (SI) or condition (GLM^*). In a fourth example the model satisfies (GLM^*) and for almost all parameter points the locally D-optimal saturated design at is unique and hence condition (SD)() holds. Condition (SD^*)() holds on a relevant subset of parameter points while on another subset (SD^*)() does not hold, and for very special points (if included in Θ) condition (SD)() does not hold. Example 1: Michaelis-Menten model. p=2, Θ⊆( 0 , ∞)^2, X=[ a , b ] where 0≤ a<b<∞, and μ(x,θ)=ϑ_1x/ϑ_2+x, f_θ(x)=(∂/∂ϑ_1μ(x,θ) , ∂/∂ϑ_2μ(x,θ))^= (x/ϑ_2+x , -ϑ_1x/(ϑ_2+x)^2)^ for all x∈[ a , b ] and θ=(ϑ_1,ϑ_2)^∈Θ. For a given parameter point =(ϑ_1,ϑ_2)^∈Θ, the unique locally D-optimal design at is the equally weighted two-point design ξ^*_ supported by x_1^*() and b, where x_1^*()=max{ϑ_2b/(2ϑ_2+b) , a}, see Bates and Watts <cit.>, pp. 125-126. In fact, in that reference the design was shown to be the locally D-optimal saturated design at . Using the Kiefer-Wolfowitz equivalence theorem it can be checked that ξ^*_ is locally D-optimal at . So the present model satisfies condition ( SD^*)() for all ∈Θ. Moreover, the model satisfies condition (SI), see <cit.>. Example 2: Exponential decay model. p=2, Θ⊆( 0 , ∞)^2, X=[ a , b ] where 0≤ a<b<∞, and μ(x,θ)=ϑ_1exp(-ϑ_2x), f_θ(x)=(∂/∂ϑ_1μ(x,θ) , ∂/∂ϑ_2μ(x,θ))^ =exp(-ϑ_2x) ( 1 , -ϑ_1x )^ for all x∈[ a , b ] and θ=(ϑ_1,ϑ_2)^∈Θ. For a given parameter point =(ϑ_1,ϑ_2)^∈Θ, the unique locally D-optimal design at is the equally weighted two-point design ξ^*_ supported by a and x_2^*(), where x_2^*()=min{a+1/ϑ_2 , b}, see Box and Lucas <cit.>, p. 85. In fact, in that reference the design was shown to be the locally D-optimal saturated design at . Again, by the Kiefer-Wolfowitz equivalence theorem it can be verified that ξ^*_ is locally D-optimal at . So the present model satisfies condition ( SD^*)() for all ∈Θ. 
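The closed-form saturated designs quoted in Examples 1 and 2 are easy to reproduce numerically. The sketch below, with illustrative choices of the parameter point and of the interval [a, b] (not taken from the cited references), maximizes the squared determinant over a grid of candidate point pairs and compares the optimizer with the closed forms stated above for both examples.

import numpy as np

def best_pair(f, grid):
    """Maximize det[f(x1), f(x2)]^2 over all pairs of grid points (p = 2)."""
    F = np.array([f(x) for x in grid])                                  # shape (n, 2)
    d2 = (F[:, None, 0] * F[None, :, 1] - F[:, None, 1] * F[None, :, 0]) ** 2
    i, j = np.unravel_index(d2.argmax(), d2.shape)
    return sorted((grid[i], grid[j]))

# Example 1: Michaelis-Menten on [a, b] at theta = (th1, th2) (illustrative values)
a, b, th1, th2 = 0.0, 2.0, 1.0, 1.0
def f_mm(x, th1=th1, th2=th2):
    return np.array([x / (th2 + x), -th1 * x / (th2 + x) ** 2])
print("MM grid optimum  :", best_pair(f_mm, np.linspace(a, b, 1001)))
print("MM closed form   :", [max(th2 * b / (2 * th2 + b), a), b])       # -> [0.5, 2.0]

# Example 2: exponential decay on [a, b] at theta = (th1, th2)
a, b, th1, th2 = 0.0, 5.0, 2.0, 0.7
def f_ed(x, th1=th1, th2=th2):
    return np.array([np.exp(-th2 * x), -th1 * x * np.exp(-th2 * x)])
print("decay grid optimum:", best_pair(f_ed, np.linspace(a, b, 1001)))
print("decay closed form :", [a, min(a + 1.0 / th2, b)])                # -> [0.0, ~1.429]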
Moreover, the model satisfies condition (SI), see <cit.>. Example 3: Generalized linear models with binary response. Let p=2, Θ⊆ℝ^2, X=[ a , b ] where -∞<a<b<∞. Consider the class of generalized linear models given by μ(x,θ)=G(ϑ_1+ϑ_2 x) f_θ(x)=φ(ϑ_1+ϑ_2 x) (1 , x)^, θ=(ϑ_1,ϑ_2)^, where G is a continuously differentiable distribution function on the real line with positive derivative G'>0, and φ(u)=G'(u) / √(G(u)(1-G(u))), u∈ℝ. The inverse function G^-1 is called the link function. The models refer to binary response variables, and thus μ(x,θ) equals the probability of a positive response at x. In particular, condition (GLM^*) is met, where ψ(x,θ)=φ(ϑ_1+ϑ_2 x). Consider four particular members of that class of models: (i) G(u) = 1/(1+exp(-u)), (logit link); (ii) G(u) = 1-exp{-exp(u)}, (complementary log-log); (iii) G(u) = (2π)^-1/2∫_-∞^u exp(-t^2/2) dt, (probit) (iv) G(u) = 1/(1+exp(-u))^m, m>0 fixed, (skewed logit). It was shown in Biedermann, Dette and Zhu <cit.> that, under each of models (i) to (iv), the locally D-optimal design at any given ∈Θ is unique and is an equally weighted two-point design. Actually, in that paper a different parametrization of the models was employed and the results on local optimality were obtained for a greater class of optimality criteria (Kiefer's criteria). For the D-criterion the locally D-optimal designs are equivariant under a parameter transformation, and therefore the results of <cit.> apply to the present models (i)–(iv), that is, the models satisfy condition ( SD^*)() for all ∈Θ. For finding the support points of the locally D-optimal designs the results in Ford, Torsney and Wu <cit.>, Section 6, will be helpful. However, their derivations on p. 582 of the D-optimal saturated (two point) designs are not conclusive. So we have included the result along with a proof in the appendix as a supplement to this example (Appendix A.1). Example 4: Poisson regression model with two covariates. Let p=3, Θ⊆ℝ×(-∞ , 0 ]^2, X=[ 0 , b_1]×[ 0 , b_2], where b_1>0 and b_2>0. Consider a generalized linear model with Poisson distributed response variables, μ(x,θ)=exp(ϑ_0+ϑ_1x_1+ϑ_2x_2) f_θ(x)= exp{1/2(ϑ_0+ϑ_1x_1+ϑ_2x_2)} (1 , x_1 , x_2)^, where x=(x_1,x_2)∈ X and θ=(ϑ_0,ϑ_1,ϑ_2)^∈Θ. In particular, condition (GLM^*) is met, where ψ(x,θ)= exp{1/2(ϑ_0+ϑ_1x_1+ϑ_2x_2)}, f(x)=(1 , x_1 , x_2)^, and G(u)=exp(u), u∈ℝ. Let =(ϑ_0,ϑ_1,ϑ_2)^∈Θ be given. By our assumption on the parameter space the slope components ϑ_1,ϑ_2 are nonpositive. We consider three cases. (i) ϑ_1<0, ϑ_2<0; (ii) ϑ_1<0, ϑ_2=0; (iii) ϑ_1=ϑ_2=0. By standard arguments, the problem of finding a locally D-optimal saturated design at can equivalently be transformed to that of finding a D-optimal saturated design for the linear regression model given by f_0(z), z=(z_1,z_2)∈ Z=[0,c_1]×[0,c_2], where in case (i): z_j=|ϑ_j|x_j, c_j=|ϑ_j|b_j, j=1,2, and f_0(z) = exp{-1/2(z_1+z_2)} (1 , z_1 , z_2)^; in case (ii): z_1=|ϑ_1|x_1, z_2=x_2, c_1=|ϑ_1|b_1, c_2=b_2, and f_0(z) = exp{-1/2z_1} (1 , z_1 , z_2)^; in case (iii): z_j=x_j, c_j=b_j, j=1,2, and f_0(z)=(1 , z_1 , z_2)^. Lemma <ref> in Appendix A.2 yields the D-optimal saturated designs in terms of the z-variable, which are easily transformed back to the locally D-optimal saturated designs in the original model. 
In case (i) the locally D-optimal saturated design is unique and hence condition (SD)() holds; in cases (ii) and (iii) there are infinitly many locally D-optimal saturated designs and, as it is easily seen, their information matrices vary, hence condition (SD)() does not hold. Furthermore, in case (i) the following holds (see Lemma <ref> in Appendix A.2). If |ϑ_j|≥2/b_j for j=1,2 then the locally D-optimal saturated design is locally D-optimal and hence condition ( SD^*)() holds. On the other hand, if |ϑ_1| and |ϑ_2| are small in the sense that |ϑ_j|≤2/b_j for j=1,2 and (1+exp(-|ϑ_1| b_1) (1+exp(-|ϑ_2| b_2) >2, then the locally D-optimal saturated design is not locally D-optimal and hence condition ( SD^*)() does not hold. Poisson models with two or more covariates were considered by Russell et al. <cit.> and more general results on locally D-optimal designs were obtained. In their Remark 3 on p. 724 a result on locally D-optimal saturated designs covering case (i) of the present model was stated but no proof was given. We give a proof in Appendix A.2 for the present situation of two covariates. § ADAPTIVE P-STEP-AHEAD ALGORITHM Let ℕ denote the set of all positive integers. By δ[x], for any x∈ X, we denote the one-point distribution on X concentrated at the point x. The adaptive algorithm described next generates iteratively (in batches of size p) a sequence of design points. For each batch of design points the responses are observed, and the parameter estimate is updated based on all design points and responses obtained so far. The estimate is used for choosing the next batch of design points, and so on. Along with the sequences of design points and response values, a sequence of designs and a sequence of parameter estimates emerge. Algorithm (o) Initialization (k=1): A number n_1∈ℕ and design points x_1,…,x_n_1∈ X are chosen forming the initial design ξ_1=1/n_1∑_i=1^n_1δ[x_i]. Observations y_1,…,y_n_1 of responses at the design points x_1,…,x_n_1, respectively, are taken. Based on the current data a parameter estimate θ_1∈Θ is computed, θ_1=θ_1(x_1,y_1,…,x_n_1,y_n_1). (i) Iteration: Let k≥1 and n_k=n_1+(k-1)p, let the current data be given by the points x_1,…,x_n_k∈ X forming the current design ξ_k=1/n_k∑_i=1^n_kδ[x_i], and by the observed responses y_1,…,y_n_k at x_1,…,x_n_k, respectively, and let θ_k=θ_k(x_1,y_1,…,x_n_k,y_n_k) be the current parameter estimate on the basis of the current data. Then, a batch of p design points x_n_k+1,…,x_n_k+p∈ X is chosen such that ([f_θ_k(x_n_k+1),…,f_θ_k(x_n_k+p)])^2 = max_z_1,…,z_p∈ X([f_θ_k(z_1),…,f_θ_k(z_p)])^2. Observations y_n_k+1,…,y_n_k+p of responses at x_n_k+1,…,x_n_k+p, respectively, are taken and, based on the augmented data, a new parameter estimate θ_k+1∈Θ is computed, θ_k+1 = θ_k+1(x_1,y_1,…,x_n_k+p,y_n_k+p). Set n_k+1=n_k+p and ξ_k+1=(1/n_k+1)∑_i=1^n_k+1δ[x_i]. Iteration step (i) is repeated with k replaced by k+1. Remarks. 1. Obviously, in the iteration step (i) we have ξ_k+1=(n_k/n_k+1) ξ_k + (p/n_k+1) η_k, where η_k = (1/p)∑_j=1^pδ[x_n_k+j], and by (<ref>) η_k is a locally D-optimal saturated design at θ_k. 2. For the initial design of the algorithm, ξ_1=(1/n_1)∑_i=1^n_1δ[x_i], the number n_1 of points (and the points themselves) may be arbitrary. In practice, one might prefer some saturated design and thus n_1=p. The choice n_1=p will also simplify some theoretical derivations in Sections 5 and 6. In fact, in our proofs of the theorems we will assume n_1=p to cut down the technical effort. 
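As a concrete illustration of the initialization (o) and iteration (i) above, and using n_1 = p as in the preceding remark, the following sketch runs a stripped-down version of the algorithm for p = 2 under the exponential decay model of Example 2 with Gaussian observation errors; coarse grid searches stand in for the least squares estimation and for the maximization in the iteration rule, and all numerical choices (region, parameter grid, error level, number of iterations) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Exponential decay model of Example 2: mu(x, theta) = th1 * exp(-th2 * x), p = 2.
def mu(x, th):
    return th[0] * np.exp(-th[1] * x)

def f_vec(x, th):                        # elemental regressor f_theta(x)
    e = np.exp(-th[1] * x)
    return np.array([e, -th[0] * x * e])

X_GRID = np.linspace(0.0, 5.0, 201)      # experimental region X = [0, 5]
T1 = np.linspace(0.5, 3.0, 61)           # crude grid standing in for Theta
T2 = np.linspace(0.2, 2.0, 61)

def lse(xs, ys):
    """Adaptive least squares estimate theta_hat_k by grid search (illustration only)."""
    best, arg = np.inf, None
    for t1 in T1:
        resid = ys - t1 * np.exp(-np.outer(T2, xs))     # residuals for every t2 at once
        sse = (resid ** 2).sum(axis=1)
        j = sse.argmin()
        if sse[j] < best:
            best, arg = sse[j], np.array([t1, T2[j]])
    return arg

def d_opt_saturated_pair(th):
    """Iteration rule: maximize det[f(z1), f(z2)]^2 over z1, z2 in X (p = 2)."""
    F = np.array([f_vec(x, th) for x in X_GRID])
    dets = F[:, None, 0] * F[None, :, 1] - F[:, None, 1] * F[None, :, 0]
    i, j = np.unravel_index(np.argmax(dets ** 2), dets.shape)
    return X_GRID[i], X_GRID[j]

theta_true, sigma = np.array([2.0, 0.7]), 0.1
xs = [0.0, 5.0]                                          # initial design, n_1 = p = 2
ys = [mu(x, theta_true) + sigma * rng.standard_normal() for x in xs]

for k in range(100):                                     # iteration step (i), repeated
    theta_k = lse(np.array(xs), np.array(ys))
    batch = d_opt_saturated_pair(theta_k)                # batch of p new design points
    for z in batch:
        xs.append(z)
        ys.append(mu(z, theta_true) + sigma * rng.standard_normal())

print("final estimate:", lse(np.array(xs), np.array(ys)))
print("last batch    :", sorted(batch))
print("locally D-optimal saturated design at theta_true:",
      [0.0, min(1.0 / theta_true[1], 5.0)])              # closed form from Example 2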
However, the results hold for any choice of n_1. 3. The adaptive Wynn algorithm studied in <cit.> and <cit.> requires that the initial design ξ_1 is such that its information matrix M(ξ_1,θ) is non-singular for all θ∈Θ, which implies that all subsequently generated designs ξ_k, k≥2, have that property as well. The iteration rule of the adaptive Wynn algorithm is given by x_k+1 = max_x∈ Xf_θ_k^(x) M^-1(ξ_k,θ_k) f_θ_k(x). In the (nearly) trivial case p=1 this becomes x_k+1=max_x∈ X(f_θ_k(x))^2 which coincides with the iteration rule of the present p-step-ahead algorithm in case p=1. So, for p=1, the present algorithm coincides with the adaptive Wynn algorithm. Note also, that for p=1 condition (SD^*)() holds for any ∈Θ, since a locally D-optimal design at is given by the one-point design δ[x^*_], where x^*_=max_x∈ X(f_(x))^2. The algorithm uses observations of responses which are values of random variables (response variables). So the generated sequences x_i, y_i (i∈ℕ) and ξ_k, θ_k (k∈ℕ) are random and should be viewed as paths of corresponding sequences of random variables. This will be modeled appropriately in Sections 5 and 6. In Section 4 we focus on some properties of the algorithm which do not require a specific stochastic model. The proofs of the results have been transferred to the appendix (parts A.3 and A.4). § SOME BASIC PROPERTIES OF THE ALGORITHM The Euclidean norm in ℝ^p is given by ‖ a‖=(a^ a)^1/2. The Frobenius norm in the space ℝ^p× p of all p× p matrices is given by ‖ A‖ = (∑_i,j=1^pa_ij^2)^1/2 for A=(a_ij)_1≤ i,j≤ p. For a symmetric p× p matrix A the smallest eigenvalue of A is denoted by λ_ min(A). The distance function in the (compact) metric space Θ is denoted by d_Θ, and the set of all designs on X is denoted by Ξ. We start with an auxiliary lemma which does not specifically refer to the algorithm. Let ρ_k, τ_k∈Θ, k∈ℕ, be two sequences of parameter points such that lim_k→∞ d_Θ(ρ_k,τ_k) =0. Then lim_k→∞(sup_ξ∈Ξ‖ M(ξ,ρ_k) - M(ξ,τ_k)‖) = 0. As a consequence, if Φ is a real-valued continuous function on the set of all nonnegative definite p× p matrices, then lim_k→∞( sup_ξ∈Ξ|Φ(M(ξ,ρ_k))-Φ(M(ξ,τ_k)) |) = 0. For B∈ℝ^p× p and ∅≠ A⊆ℝ^p× p, we denote by dist(B, A) the distance of the point B and the set A, that is, dist(B, A)=inf_A∈ A‖ B-A‖. As it is well-known, the function B↦ dist(B, A) on ℝ^p× p is continuous, and if the set A is convex then this function is convex. For any nonempty subset A⊆ℝ^p× p we denote by Conv A the convex hull of A, that is, Conv A = {∑_i=1^rα_iA_i : α_i≥0, A_i∈ A (1≤ i≤ r), ∑_i=1^rα_i=1, r∈ℕ }. As a particular set A we consider the set of information matrices at of all locally D-optimal saturated designs at , for a given parameter point ∈Θ. We denote M_ s*() = {M(η^*,) : }. In the following lemma an arbitrary path of the p-step-ahead algorithm is considered yielding a sequence ξ_k of designs and a sequence θ_k of parameter estimates. If lim_k→∞θ_k=θ for some θ∈Θ, then for every sequence θ_k'∈Θ, k∈ℕ, such that lim_k→∞θ_k'=θ one has dist(M(ξ_k,θ_k') , Conv M_ s*(θ)) ⟶ 0 Under condition ( SD)(θ) the latter convergence is the same as lim_k→∞M(ξ_k,θ_k') = M_ s*(θ), with M_ s*(θ) according to condition ( SD)(θ). We denote the distance function in the (compact) metric space X by d_ X. Again, we consider any path of the p-step-ahead algorithm, and now we focus on the sequences x_i (i∈ℕ) and ξ_k (k∈ℕ) of design points and designs, respectively. 
(i) There exists a constant Δ_0>0 such that d_ X(x_n_k+ℓ,x_n_k+m) ≥Δ_0 (ii) Under condition (GLM), there exists a constant ε_0>0 such that ξ_k({x∈ X : |a^ f(x)|≥ε_0}) ≥ (k-1)/n_k Remark. The constants Δ_0 and ε_0 constructed in the proof of Lemma <ref> (see Appendix A.3) depend only on the family f_θ, θ∈Θ, but they do not depend on the particular path generated by the p-step-ahead algorithm. So Δ_0 and ε_0 in the lemma can be chosen simultaneously for all possible paths of the algorithm. A desirable property of a sequence of estimators of θ is strong consistency, that is, almost sure convergence to the true parameter point θ. For a sequence of random variables W_k, k∈ℕ, and a random variable W defined on some probability space with values in some metric space, the notation W_k W stands for almost sure convergence of W_k to W as k→∞. Under the assumption that the estimators θ_k, k∈ℕ, employed by the algorithm are strongly consistent, aymptotic properties of the designs ξ_k, k∈ℕ, generated by the algorithm are stated as a corollary below. A desirable property is “asymptotic local D-optimality at (almost surely)”, that is, M(ξ_k,) d_*() where d_*() denotes the maximum value of M(ξ,) over all designs ξ. It is not difficult to show that asymptotic local D-optimality at of the sequence ξ_k is equivalent to M(ξ_k,θ) M_*(θ), where M_*(θ) is the unique information matrix at θ of a locally D-optimal design at θ. Since the concept of the p-step-ahead algorithm is based on locally D-optimal saturated designs, one cannot expect asymptotic local D-optimality at (a.s.) of the design sequence ξ_k in general, unless condition ( SD^*)(θ) holds. The following corollary is a fairly direct consequence of Lemmas <ref> and <ref>, and we thus state it without a proof. Recall notations M_ s*(θ) for the set of information matrices at of all locally D-optimal saturated designs at and, in the case that condition ( SD)(θ) holds, (θ) for the unique element of M_ s*(θ). Furthermore, let d_ s*() be the maximum value of M(η,) over all saturated designs η. Assume that the sequence of adaptive estimators θ_k, k∈ℕ, employed by the p-step-ahead algorithm is strongly consistent, that is, θ_k where ∈Θ is the true parameter point. Then, for the sequence of designs ξ_k, k∈ℕ, generated by the algorithm one has: (i) dist(M(ξ_k,) , Conv M_ s*(θ)) 0 and hence lim inf_k→∞ M(ξ_k,) ≥ d_ s*() a.s. (ii) If condition ( SD)(θ) holds, then M(ξ_k,) (θ) and M(ξ_k,) d_ s*(). (iii) If condition (SD^*)(θ) holds, then the designs ξ_k are asymptotically locally D-optimal at (a.s.), that is, M(ξ_k,θ) M_*(θ) and M(ξ_k,) d_*(). In Sections 5 and 6 we will show that adaptive least squares estimators and maximum likelihood estimators in the p-step-ahead algorithm are strongly consistent, under appropriate assumptions. In particular, the models in Examples 1 to 4 of Section 2 will be covered with adaptive least squares estimation in the Michaelis-Menten model (Example 1) and the exponential decay model (Example 2), and with adaptive maximum likelihood estimation in the generalized linear models of Examples 3 and 4. So for those models, when the algorithm employs least squares estimators and maximum likelihood estimators, respectively, by Corollary <ref> the adaptive design sequence ξ_k generated by the algorithm is asymptotically locally D-optimal at (a.s.) for any true parameter point ∈Θ in Examples 1 to 3, and for any true parameter point ∈Θ'⊆Θ in Example 4 with a relevant subset of Θ'. 
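Part (iii) of the corollary invokes condition (SD^*) at the true parameter point; for the models of Examples 1 and 2 this condition can also be checked numerically via the Kiefer-Wolfowitz equivalence theorem. The short sketch below does so for the exponential decay model; the parameter point and the interval are illustrative choices.

import numpy as np

# Exponential decay model (Example 2) on X = [a, b]; p = 2.
a, b = 0.0, 5.0
theta = np.array([2.0, 0.7])                 # illustrative "true" parameter point

def f_vec(x, th=theta):
    e = np.exp(-th[1] * x)
    return np.array([e, -th[0] * x * e])

# Candidate locally D-optimal design at theta: equal weights on a and a + 1/theta_2.
support = np.array([a, min(a + 1.0 / theta[1], b)])
M = sum(0.5 * np.outer(f_vec(x), f_vec(x)) for x in support)
Minv = np.linalg.inv(M)

# Kiefer-Wolfowitz check: the design is locally D-optimal at theta iff the
# sensitivity f_theta(x)' M^{-1} f_theta(x) does not exceed p anywhere on X.
grid = np.linspace(a, b, 2001)
sens = np.array([f_vec(x) @ Minv @ f_vec(x) for x in grid])
print("max sensitivity over X:", sens.max(), "(should not exceed p = 2)")
print("attained near x =", grid[sens.argmax()], "; support:", support)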
§ ADAPTIVE LEAST SQUARES ESTIMATORS In this section and in the next, we will examine the asymptotic properties (strong consistency and asymptotic normality) of adaptive least squares and adaptive maximum likelihood estimators in the p-step-ahead algorithm. To this end, appropriate stochastic models for the algorithm will be employed. Let X_i and Y_i, i∈ℕ, be two sequences of random variables defined on a common probability space (Ω, F,ℙ_θ) where θ∈Θ denotes the true parameter point (which is unknown). The random variables X_i have their values in X and the Y_i are real valued. A run of the algorithm generates paths x_i and y_i, i∈ℕ, of the sequences X_i and Y_i, respectively. An appropriate adaptive version of a regression model is stated by the following two assumptions (a1) and (a2), cf. <cit.>, Section 3. Later, some further strengthening assumptions will be added. (a1) Let a nondecreasing sequence F_0⊆ F_1⊆ … ⊆ F_k⊆ … of sub-sigma-fields of F be given such that for each k∈ℕ the multivariate random variable X_k=(X_n_k-1+1,…,X_n_k) is F_k-1-measurable, and the multivariate random variable Y_k=(Y_n_k-1+1,…,Y_n_k)^ is F_k-measurable. Here we define n_0:=0. (a2) Y_i = μ(X_i,θ) + e_i for all i∈ℕ with real-valued square integrable random errors e_i, i∈ℕ, such that the multivariate error variables e_k:=(e_n_k-1+1,…,e_n_k)^, k∈ℕ, satisfy: E(e_k | F_k-1) =0 for all k∈ℕ, and sup_k∈ℕ E(‖e_k‖^2 | F_k-1) <∞ Since n_k=n_1+(k-1)p for all k≥1, the dimensions of the multivariate random variables X_k, Y_k, and e_k introduced in (a1) and (a2) are given by n_k-n_k-1=p for all k≥2 and n_1-n_0=n_1. In the proofs of consistency and asymptotic normality we will restrict to the case n_1=p. The adaptive least squares estimators (adaptive LSEs) θ_k^( LS)=θ_k^( LS)(X_1,Y_1,…,X_n_k,Y_n_k), k≥1, are defined pathwise by θ_k^( LS)(x_1,y_1,…,x_n_k,y_n_k) = min_θ∈Θ∑_i=1^n_k(y_i-μ(x_i,θ))^2. Note that we do not generally assume that the adaptive estimators employed by the algorithm, θ_k=θ_k(X_1,Y_1,…,X_n_k,Y_n_k), k≥1, are given by the adaptive LSEs. Under condition (SI) of `saturated identifiability' or, alternatively, condition ( GLM^*) of `generalized linear model', strong consistency of the adaptive LSEs is shown by the next result. Note that the adaptive estimators θ_k employed by the algorithm may be arbitrary. Assume model (a1), (a2), and assume one of conditions (SI) or ( GLM^*). Then: θ_k^( LS)θ. For achieving asymptotic normality further conditions are needed. Firstly, the basic conditions (assumed throughout) (b1)-(b4) are augmented by the `gradient condition' (b5) on the family of functions f_θ, θ∈Θ, and the mean response μ. (b5) Θ⊆ℝ^p (endowed with the usual Euclidean metric), int(Θ)≠∅, where int(Θ) denotes the interior of Θ as a subset of ℝ^p, the function θ↦μ(x,θ) is twice differentiable on the interior of Θ for each fixed x∈ X, with gradients and Hessian matrices denoted by ∇μ(x,θ)= (∂/∂ϑ_1μ(x,θ),…,∂/∂ϑ_pμ(x,θ))^ and ∇^2μ(x,θ)=(∂^2/∂ϑ_i∂ϑ_jμ(x,θ))_1≤ i,j≤ p, respectively, where θ=(ϑ_1,…,ϑ_p)^. The functions (x,θ)↦∇μ(x,θ) and (x,θ)↦∇^2μ(x,θ) are continuous on X× int(Θ), and f_θ(x) = ∇μ(x,θ) Two additional conditions (L) and (AH) on the error variables of model (a1), (a2) are imposed, where `L' stands for `Lindeberg' and `AH' for `asymptotic homogeneity'. For an event A in the underlying probability space we denote by (A) the dichotomous random variable which yields the value 1 if the event A occurs, and yields the value 0 otherwise. (L) 1/k∑_j=1^k E(‖e_j‖^2(‖e_j‖>ε√(k)) | F_j-1) 0 for all ε>0. 
(AH) E(e_ke_k^| F_k-1)σ^2(θ) I_p for some positive real constant σ^2(θ), where I_p denotes the (p× p) identity matrix. Each of the following two conditions (L') and (L”) implies (L), which can be seen by similar arguments as in <cit.>, Section 3. (L') sup_k∈ℕ E(‖e_k‖^α| F_k-1) < ∞ a.s. for some α>2. (L”) The random variables e_k, k≥2, are identically distributed, and e_k, F_k-1 are independent for each k≥2. The m-dimensional normal distribution with expectation 0 and covariance matrix C is denoted by N(0,C), where C is a positive definite m× m matrix. For a sequence W_k of ℝ^m-valued random variables, convergence in distribution of W_k (as k→∞) to the m-dimensional normal distribution N(0,C) is abbreviated by W_k N(0,C). Assume model (a1), (a2), and assume conditions (b5), (L), (AH), and ( SD)(θ). Moreover, assume that the sequence θ_k of adaptive estimators employed by the algorithm and the sequence of adaptive LSEs θ_k^ (LS) are both strongly consistent, that is, θ_kθ and θ_k^ (LS), and let θ∈ int(Θ). Then: √(n_k) (θ_k^ (LS)-θ) N(0,σ^2() ^-1()), with () according to condition ( SD)(θ). § ADAPTIVE MAXIMUM LIKELIHOOD ESTIMATORS In this section we consider an adaptive version of a generalized linear model. Let a one-parameter exponential family P_τ, τ∈ J be given, where J⊆ℝ is an open interval and τ is the canonical parameter. The P_τ are probability distributions on the Borel sigma-field of the real line with densities w.r.t. some Borel-measure ν, p_τ(y) = K(y) exp(τ y - b(τ)), y∈ℝ, τ∈ J, where K is a nonnegative measurable function on ℝ and b is a real-valued function on J. The function b is infinitely differentiable, and for its first and second derivatives one has b'(τ)= E_τ(Y) and b”(τ)= Var_τ(Y)>0, the expectation and the variance of P_τ, respectively, see Fahrmeir and Kaufmann <cit.>, Section 2. In particular, the first derivative b' is a smooth and strictly increasing function and hence a bijection, b' : J⟶ b'(J), where the image b'(J) is an open interval of the real line and equals the set of expectations {E_τ(Y) : τ∈ J}. Condition (GLM^*) is assumed where the scalar-valued function ψ(x,θ) in (GLM) is given by ψ(x,θ)=φ(f^(x) θ) φ(u) = G'(u)/√(b”((b')^-1(G(u)))) and where it is assumed that G(I)⊆ b'(J). As in Section 5 let X_i and Y_i, i∈ℕ, be two sequences of random variables defined on a probability space (Ω, F,ℙ_θ) and with values in X and ℝ, respectively, where θ∈Θ denotes the true (but unknown) parameter point. The stochastic model for the adaptive algorithm is given by assumption (a1) from Section 5 plus the following (a2'), which is stronger than (a2) from Section 5. Recall the multivariate random variables X_k=(X_n_k-1+1,…,X_n_k) and Y_k=(Y_n_k-1+1,…,Y_n_k)^, k∈ℕ. (a2') For each k∈ℕ the conditional distribution of Y_k given F_k-1 is equal to the product of the distributions P_τ_i, n_k-1+1≤ i≤ n_k, where τ_i=(b')^-1(G(f^(X_i) θ)). To interprete the random variables τ_i in (a2') we note that for any x∈ X the parameter value τ(x)= (b')^-1(G(f^(x) θ)) selects that distribution P_τ(x) from the exponential family whose expectation equals G(f^(x) θ), according to condition (GLM^*). Note that for the canonical link, that is I=J and G=b', formulas simplify to τ(x)=f^(x), τ_i=f^(X_i), and φ(u)=√(b”(u)). Note further that (GLM^*) together with (<ref>) ensures that the information matrices from (<ref>) yield the Fisher information matrices, see Atkinson and Woods <cit.>, p. 473, see also Fahrmeir and Kaufmann <cit.>, p. 347. Example 3 (continued). 
Consider the class of generalized linear models with binary response from Example 3 in Section 2. The family of binomial-(1,π)-distributions (where 0<π<1) rewrites in canonical form (<ref>) with canonical parameter τ=log(π/(1-π))∈ℝ and b(τ)=log(1+exp(τ)). The densities refer to the two-point Borel measure ν=δ[0]+δ[1], and K(y)=1 if y∈{0,1}, and K(y)=0 else. By straightforward calculation, b'(τ)=exp(τ)/(1+exp(τ)), (b')^-1(π)=log(π/(1-π)), b”(τ)= exp(τ)/(1+exp(τ))^2. Hence b”((b')^-1(G(u))) = G(u) (1-G(u)), which shows that the function φ employed in Example 3 of Section 2 corresponds to (<ref>). Note that the logit model (i) of the example employs the canonical link, G=b', and hence for this model φ(u)=√(b”(u))=exp(u/2)/(1+exp(u)). As in <cit.>, Section 3, one concludes from (a1), (a2') that the joint log-likelihood of X_1,Y_1,…,X_n_k,Y_n_k (up to an additive term not depending on θ) is given by L_n_k(θ) = ∑_i=1^n_k(log(K(Y_i)) + τ_i(θ) Y_i - b(τ_i(θ))), τ_i(θ) = (b')^-1(G(f^(X_i) θ)). The adaptive maximum likelihood estimator =(X_1,Y_1,…,X_n_k,Y_n_k) maximizes L_n_k(θ) over θ∈Θ. Its strong consistency is shown by the next result. Note that the adaptive estimators θ_k employed by the algorithm may be arbitrary. Assume (a1), (a2'), and ( GLM^*) with (<ref>). Then θ_k^( ML)θ. The next result on asymptotic normality of the adaptive MLEs requires condition (SD)(). Assume (a1), (a2'), ( GLM^*) with (<ref>), and ( SD)(). Assume further that the inverse link function G is twice continuously differentiable, ∈ int(Θ), and the adaptive estimators employed by the algorithm are strongly consistent, that is, θ_k. Then √(n_k) (θ_k^( ML)-) N(0,^-1()), where () is given by condition ( SD)(). Example 5: Simulation. We illustrate the results on consistency and asymptotic normality of the maximum likelihood estimators (Theorems <ref> and <ref>) and the asymptotic D-optimality of the generated designs (Corollary <ref>) by simulations under the logit model (i) of Example 3 in Section 2. The experimental interval was chosen as X=[-4 , 4 ] and the parameter space as a rectangle Θ=[-10 , 10 ]× [ 0.1 , 10 ]. By simulations 10,000 paths (more precisely: pieces of paths up to k=250) of the 2-step algorithm were generated for each of two cases of true parameter points: =(0,1)^ and =(4,1)^. The maximum likelihood estimators were employed, that is, θ_k=. The starting design ξ_1 was always the three point design with support points -4, 0, 4 and uniform weights 1/3. So after step k the total number of observations included is n=n_k=2k+1. In fact, n rather than k is used when comparing to the adaptive Wynn algorithm which is a 1-step algorithm. To this end, also 10,000 paths of the adaptive Wynn algorithm employing adaptive maximum likelihood estimates were simulated, again for each of the two cases =(0,1)^ and =(4,1)^. Addressing the (almost sure) asymptotic D-optimality of the designs generated by the algorithms the development (as n grows) of the D-efficiencies of the generated designs from the simulated paths is focussed (see top pictures in Figure 1). The D-efficiency (at the true parameter point ) of a design ξ is defined by { M(ξ,)/ M_*()}^1/2, where M_*() is the information matrix of the locally D-optimal design at . 
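The D-efficiency just defined is straightforward to evaluate numerically. The sketch below recovers the locally D-optimal two-point design of the logit model at the first parameter point considered here by a grid search over equally weighted two-point designs (its support is approximately ±1.543, in agreement with the display that follows) and then reports the D-efficiency of the uniform three-point starting design on -4, 0, 4; the grid resolution is an illustrative choice.

import numpy as np

# Logit model of Example 3(i) on X = [-4, 4] at theta = (0, 1).
theta = np.array([0.0, 1.0])

def f_vec(x, th=theta):
    u = th[0] + th[1] * x
    phi = np.exp(u / 2) / (1.0 + np.exp(u))       # phi(u) = sqrt(b''(u)) for the logit link
    return phi * np.array([1.0, x])

def info_matrix(points, weights):
    return sum(w * np.outer(f_vec(x), f_vec(x)) for x, w in zip(points, weights))

# Locally D-optimal design: equally weighted two-point design found by grid search.
grid = np.linspace(-4.0, 4.0, 1601)
F = np.array([f_vec(x) for x in grid])
d2 = (F[:, None, 0] * F[None, :, 1] - F[:, None, 1] * F[None, :, 0]) ** 2
i, j = np.unravel_index(d2.argmax(), d2.shape)
xi_star = sorted((grid[i], grid[j]))
M_star = info_matrix(xi_star, [0.5, 0.5])
print("support of xi*       :", xi_star)                   # approx [-1.543, 1.543]
print("inverse of M_*(theta):\n", np.linalg.inv(M_star))   # approx diag(6.899, 2.894)

# D-efficiency of the uniform starting design on {-4, 0, 4} used in the simulation.
M_start = info_matrix([-4.0, 0.0, 4.0], [1/3, 1/3, 1/3])
deff = (np.linalg.det(M_start) / np.linalg.det(M_star)) ** 0.5
print("D-efficiency of the starting design:", round(deff, 3))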
For the two cases of considered here the locally D-optimal design ξ^*_ and the inverse of its information matrix at are given by =(0,1)^ : ξ^*_=([ -1.543 1.543; 1/2 1/2 ]), M_*^-1() = ([ 6.899 0; 0 2.894 ]); =(4,1)^ : ξ^*_= ([ -4 -1.601; 1/2 1/2 ]), M_*^-1() = ([ 76.415 20.438; 20.438 5.943 ]), see Example 3 in Section 2 and Appendix A.1. The consistency and asymptotic normality of the adaptive maximum likelihood estimators stated in Theorem <ref> and Theorem <ref>, respectively, should imply for the simulations that n times the mean squared error matrix of the simulated parameter estimates converges to M_*^-1(). This is illustrated in Figure 1 (middle plots) restricting to the diagonal entries of the matrices. Again, the adaptive 2-step and the adaptive Wynn algorithm are considered for a comparison. A further illustration of the asymptotic normality of maximum likelihood estimators from the adaptive 2-step algorithm is given by QQ-plots in Figure 1 (bottom). The comparison of the two adaptive algorithms by our simulations suggests that both algorithms yield about the same convergence behavior of the generated designs and maximum likelihood estimators. Note that the computation time of the adaptive Wynn was about double as large as that of the adaptive 2-step since the adaptive Wynn, as a `1-step ahead algorithm', carries out the optimization procedures (maximizing the likelihood function and the sensitivity) twice as often as the adaptive 2-step. However, for practical purposes it might be of greater importance that the adaptive 2-step algorithm allows some parallel response sampling (two observations at a time) while the adaptive Wynn prescribes strictly sequential sampling (one observation at a time). In particular, when observations are time consuming the adaptive 2-step may provide a substantial reduction of the total duration of data collection. § APPENDIX §.§ Supplement to Example 3. Consider a model from Example 3 with a transformed design variable z=ϑ_1+ϑ_2x, where θ=(ϑ_1,ϑ_2)^∈Θ with ϑ_2≠0 is a given parameter point. Hence z∈ Z=[ α , β ], the transform of the design interval X=[ a , b ], that is, α<β are given by ϑ_1+ϑ_2a and ϑ_1+ϑ_2b arranged in increasing order. Denote f_0(z)=φ(z) (1,z)^. Since, for any z_1,z_2∈ Z, {[f_0(z_1) , f_0(z_2)]}^2 = φ^2(z_1) φ^2(z_2) (z_2-z_1)^2 a D-optimal saturated design (in the transformed model) can be viewed as an optimal solution to the problem h(z)=lnφ(z_1) + lnφ(z_2) + ln(z_2-z_1) z∈ D_α,β, D_α,β=[ α , β ]^2∩ D, D={z=(z_1,z_2)∈ℝ^2: z_1<z_2}. We will also look at the unbounded problem of saturated D-optimality without the bounds α and β, h(z) z∈ D. An optimal solution to (<ref>) exists, whereas an optimal solution to (<ref>) may or may not exist in general. We will restrict to the case that (<ref>) has an optimal solution, see Lemma <ref> below. This includes the models in Ford, Torsney and Wu <cit.> listed in Table 4 of their paper along with the optimal solutions to the unrestricted problems (<ref>). In particular, the models (i) to (iv) of our present Example 3 are included. Lemma <ref> below gives a description of the D-optimal saturated designs, that is, the optimal solutions to (<ref>), under the assumption that φ is continuously differentiable and lnφ is strictly concave on ℝ. This covers models (i), (ii), and (iv) of our Example 3. The result of the lemma is not new: Table 3 of [FTW-1992] presents a slightly more general result. 
Unfortunately, the proof in Section 6.6 of that paper is incomplete and somewhat ambiguous: the authors assume in their case (c) on p. 582 that φ^2(u) (u-z_1)^2 is non-decreasing in u∈[ z_1, β ] for any z_1≥α. But this is not necessarily met if α≤ z_1<z_1^** even under log-concavity of φ. Here z^**=(z_1^**,z_2^**) denotes the optimal solution to the unbounded problem (<ref>). Similarly, in their case (d) on p. 582 the authors assume that φ^2(u) (z_2-u)^2 is non-increasing in u∈[ α , z_2] for any z_2≤β, but this may fail if z_2^**<z_2≤β despite log-concavity of φ. For example, for the logistic model (i) φ^2(u)= exp(u)/(1+exp(u))^2 and for α=-5, β=1.3 case (c) of [FTW-1992] occurs, but for z_1=-4 one finds that φ^2(u) (u-z_1)^2 is decreasing in u when 1≤ u ≤ 1.3. The next lemma restates the result along with a proof, where our slightly stronger assumption of strict concavity of lnφ ensures uniqueness of the optimal solutions to problems (<ref>) and (<ref>). We denote the partial derivatives of h on D by h_j', j=1,2, that is, h_1'(z)=(lnφ)'(z_1) - 1/z_2-z_1, h_2'(z)=(lnφ)'(z_2) + 1/z_2-z_1, z=(z_1,z_2)∈ D, where (lnφ)'=φ'/φ is the derivative of lnφ. Let φ be a positive and continuously differentiable function on the real line and such that lnφ is strictly concave. Assume that there exists an optimal solution z^**=(z_1^**,z_2^**) to the unbounded problem (<ref>). Then: (a) The optimal solution z^**=(z_1^**,z_2^**) to problem (<ref>) is unique, and z^** is the unique solution to the equations h_j'(z)=0, j=1,2, z∈ D. (b) The optimal solution z^*=(z_1^*,z_2^*) to problem (<ref>) is unique, and z^* is obtained as follows, where four cases are distinguished. (1) Let α≤ z_1^** and β≥ z_2^**. Then z^*=z^**. (2) Let α≤ z_1^** and β<z_2^**. If h_1'(α,β)≤0 then z^*=(α,β); if h_1'(α,β)>0 then z^*=(u,β) with α< u<β and h_1'(u,β)=0. (3) Let α> z_1^** and β≥ z_2^**. If h_2'(α,β)≥0 then z^*=(α,β); if h_2'(α,β)<0 then z^*=(α,v) with α< v<β and h_2'(α,v)=0. (4) Let α> z_1^** and β< z_2^**. Then z^*=(α,β). Proof. By strict concavity of lnφ the function h is strictly concave on the convex set D. Hence the optimal solution z^**∈ D to (<ref>) is unique and the optimal solution z^*∈ D_α,β to (<ref>) is unique. Since D is an open convex set, z^** is the unique point of D at which the gradient of h is equal to zero, that is, h_j'(z^**)=0, j=1,2. This proves part (a) of the lemma and uniqueness of z^* in part (b). Case (1) of part (b) means that z^**∈ D_α,β and hence z^*=z^**. In each of the remaining cases (2), (3), and (4) the optimal solution z^* must have at least one component equal to an end point of the interval [ α , β ], since otherwise z^* would be an interior point of D_α,β entailing that the gradient of h at z^* equals zero and thus z^*=z^**. This is excluded in each of the cases (2), (3), and (4). So, either z^*=(u,β) with some α< u<β, or z^*=(α,v) with some α< v <β, or z^*=(α,β). As it is well-known, if C⊆ D is a given nonempty convex subset of D then a point z=(z_1, z_2)∈ C maximizes h(z) over z∈ C if and only if the directional derivatives of h at z are nonpositive for all feasible directions, that is h_1'(z) (z_1-z_1) + h_2'(z) (z_2-z_2) ≤0 z=(z_1,z_2)∈ C. Consider case (2). Suppose that z^*=(α,v) with α<v<β. From condition (<ref>) with C=D_α,β and z=z^* one gets h_1'(z^*)≤0 and h_2'(z^*)=0. By (<ref>) with C=D_α=[ α , ∞)^2 ∩ D and z=z^*, one gets that z^* maximizes h(z) over z∈ D_α. But z^**∈ D_α and thus z^** is the unique maximizer of h(z) over z∈ D_α. Hence z^*=z^** which is a contradiction. 
So the second component of z^* must be equal to β and, clearly, the first component z_1^* maximizes the function z_1↦ h(z_1,β) over z_1∈[ α , β ). The derivative of that function is given by h_1'(z_1,β), which is decreasing in z_1∈[ α , β ) and h_1'(z_1,β)→-∞ as z_1→β. One concludes: if h_1'(α,β)≤0 then z_1^*=α; otherwise z_1^*=u the unique solution to h_1'(u,β)=0. In case (3) the proof is analogous. Consider case (4). From (<ref>) it is obvious that h_1'(z_1,z_2) is decreasing in z_1<z_2 for fixed z_2, and increasing in z_2>z_1 for fixed z_1. Hence 0=h_1'(z_1^**,z_2^**) ≥ h_1'(α,z_2^**)≥ h_1'(α,β). Similarly, by (<ref>) the partial derivative h_2'(z_1,z_2) is increasing in z_1<z_2 for fixed z_2, and decreasing in z_2>z_1 for fixed z_1. Hence 0=h_2'(z_1^**,z_2^**) ≤ h_2'(α,z_2^**)≤ h_2'(α,β). We have thus obtained that h_1'(α,β)≤0 and h_2'(α,β)≥0. By condition (<ref>) with C=D_α,β and z=(α,β) it follows that z^*=(α,β). §.§ Supplement to Example 4. Transforming the design variable and the design space, for a given parameter point θ=(ϑ_1,ϑ_2)^, to z=(z_1,z_2)∈ Z=[ 0 , c_1]×[ 0 , c_2] as described in Example 4, we have f_0(z)=φ_l(z) (1,z_1,z_2)^, where the index l=1,2,3 refers to the different cases (i), (ii), (iii) and φ_1(z)=exp{-1/2(z_1+z_2)}, φ_2(z)=exp{-1/2z_1}, φ_3(z)=1. A D-optimal saturated design in the transformed model is described by three points x^*=(x_1^*,x_2^*), y^*=(y_1^*,y_2^*), and z^*=(z_1^*,z_2^*) in the rectangle Z which maximize the function g_l(x,y,z) = φ_l(x) φ_l(y) φ_l(z) | C(x,y,z)|, C(x,y,z)= [[ 1 1 1; x_1 y_1 z_1; x_2 y_2 z_2 ]] over all x=(x_1,x_2), y=(y_1,y_2), z=(z_1,z_2) ∈ Z. Note that, again, the index l=1,2,3 refers to the different cases (i), (ii), and (iii). The next lemma shows the D-optimal saturated designs for each of the three cases. Consider the functions g_1, g_2, and g_3 on Z^3 defined by (<ref>) and (<ref>), where Z= [ 0 , c_1]×[ 0 , c_2] with given c_1>0 and c_2>0. Then: (i) The points which maximize g_1(x,y,z) over (x,y,z)∈ Z^3 are the triples with components (0,0), (c_1^*,0), (0,c_2^*) (arranged in any order), where c_j^*:=min{c_j,2}, j=1,2. (ii) The points which maximize g_2(x,y,z) over (x,y,z)∈ Z^3 are the triples with components (0,0), (c_1^*,β), (0,c_2), where 0≤β≤ c_2 is arbitrary and, as above, c_1^*:=min{c_1,2}. (iii) The points which maximize g_3(x,y,z) over (x,y,z)∈ Z^3 are the triples with components (0,0), (c_1,0), (α,c_2), the triples with components (0,0), (c_1,β), (0,c_2), the triples with components (0,β), (c_1,0), (c_1,c_2), and the triples with components (α,0), (0,c_2), (c_1,c_2), where 0≤α≤ c_1 and 0≤β≤ c_2 are arbitrary. Remark. Geometrically, the solutions in part (iii) of the lemma are the triples consisting of two adjacent vertices of the rectangle Z and any point from the edge of Z opposite to the edge joining the two vertices. Proof. Clearly, for (x,y,z)∈ Z^3 the product φ_l(x) φ_l(y) φ_l(z) in (<ref>) is equal to exp{-1/2∑_j=1^2(x_j+y_j+z_j)} in case l=1, equal to exp{-1/2(x_1+y_1+z_1)} in case l=2, and equal to 1 in case l=3. For later use, we show the following. min{x_j,y_j,z_j}=0 | C(x,y,z)| ≤ max{x_1,y_1,z_1} max{x_2,y_2,z_2}. To see this, after a suitable permutation of x, y, and z, the following two cases have to be considered. Case 1: x_1=0 and y_2=0; Case 2: x_1=0 and x_2=0. Assume Case 1. Then C(x,y,z)=x_2(z_1-y_1)+y_1z_2. If y_1≤ z_1 then |x_2(z_1-y_1)+y_1z_2| = x_2(z_1-y_1)+y_1z_2≤max{x_2,z_2} (z_1-y_1+y_1) = z_1max{x_2,z_2}, hence (<ref>). 
If y_1> z_1 then |x_2(z_1-y_1)+y_1z_2| ≤ max{x_2(y_1-z_1) , y_1z_2} ≤ y_1 max{x_2,z_2}, and hence (<ref>). Now assume Case 2. Then | C(x,y,z) | = |y_1z_2-z_1y_2| ≤ max{y_1z_2 , z_1y_2}≤max{y_1,z_1} max{y_2,z_2}, and hence (<ref>). Below we will use the fact that the function exp{-1/2 t} t increases for 0≤ t≤ 2 and decreases for 2≤ t<∞, and hence for j=1,2, exp{-1/2t} t ≤ exp{-1/2c_j^*} c_j^* and the inequality is strict unless t=c_j^*. (i) Let any (x,y,z)∈ Z^3 be given such that C(x,y,z) is nonsingular. Denote a_j=min{x_j,y_j,z_j}, j=1,2, and a=(a_1,a_2). Define x=x-a, y=y-a, and z=z-a. Then x,y,z∈ Z, C(x,y,z)= C(x,y,z), and g_1(x,y,z)≥ g_1(x,y,z) with strict inequality unless a_1=a_2=0. So, the maximizers of g_1(x,y,z) over Z^3 are among those points (x,y,z)∈ Z^3 such that min{x_1,y_1,z_1}=0 and min{x_2,y_2,z_2}=0. Denote g^* = exp{-1/2(c_1^*+c_2^*)} c_1^*c_2^* which is the common value g_1(x^*,y^*,z^*) of the claimed maximizers (x^*,y^*,z^*) in part (i) of the lemma. Let (x,y,z)∈ Z be given such that min{x_j,y_j,z_j}=0 for j=1,2. Then, by (<ref>), and (<ref>), g_1(x,y,z) ≤ exp{-1/2(max{x_1,y_1,z_1}+max{x_2,y_2,z_2})} max{x_1,y_1,z_1} max{x_2,y_2,z_2} ≤ exp{-1/2(c_1^*+c_2^*)} c_1^*c_2^* = g^*, and the equality g_1(x,y,z)=g^* implies that for each j=1,2 two of x_j,y_j,z_j must be zero, from which one concludes {x,y,z}={(0,0) , (c_1^*,0) , (0,c_2^*)}. (ii) Let any (x,y,z)∈ Z^3 be given such that C(x,y,z) is nonsingular. For j=1,2 denote a_j=min{x_j,y_j,z_j}, and a_2=max{x_2,y_2,z_2} and λ=c_2/(a_2-a_2). Note that a_2<a_2≤ c_2 and λ≥1. Define x=(x_1-a_1,λ(x_2-a_2)), y=(y_1-a_1,λ(y_2-a_2)), and z=(z_1-a_1,λ(z_2-a_2)). Then, (x,y,z)∈ Z^3 and C(x,y,z) = λ C(x,y,z) and x_1+y_1+z≤ x_1+y_1+z_1. It follows that g_2(x,y,z)≥ g_2(x,y,z) and the inequality is strict unless a_1=a_2=0 and a_2=c_2. So the maximizers of g_2(x,y,z) over (x,y,z)∈ Z^3 must satisfy min{x_j,y_j,z_j}=0 for j=1,2, and max{x_2,y_2,z_2}=c_2. Let (x,y,z)∈ Z^3 be any such triple. By (<ref>) and (<ref>) g_2(x,y,z) ≤ exp{-1/2max{x_1,y_1,z_1})} max{x_1,y_1,z_1} c_2 ≤ exp{-1/2c_1^*} c_1^*c_2 = g^*, and the equality g_2(x,y,z)=g^* implies that two of x_1,y_1,z_1 are equal to zero and one is equal to c_1^*, and max{x_2,y_2,z_2}=c_2, min{x_2,y_2,z_2}=0. From this the result of part (ii) follows. (iii) Let any (x,y,z)∈ Z^3 be given such that C(x,y,z) is nonsingular. With a_j=min{x_j,y_j,z_j}, a_j=max{x_j,y_j,z_j} and λ_j=c_j/(a_j-a_j), j=1,2, define x=(λ_1(x_1-a_1),λ_2(x_2-a_2)), y=(λ_1(y_1-a_1),λ_2(y_2-a_2)), and z=(λ_1(z_1-a_1),λ_2(z_2-a_2)). Then x,y,z∈ Z, C(x,y,z)=λ_1λ_2 C(x,y,z), and since λ_j≥1 for j=1,2 one has g_3(x,y,z)≥ g_3(x,y,z) with strict inequality unless a_j=0 and a_j=c_j for j=1,2. So the maximizers of g_3(x,y,z) are among the triples (x,y,z)∈ Z^3 such that after a suitable permutation of x,y,z, one has (x_1,y_1,z_1)=(0,c_1,α) for some 0≤α≤ c_1, and x_2,y_2,z_2 is some (other) permutation of 0,c_2,β for some 0≤β≤ c_2. Checking the six possible permutations and maximizing g_3(x,y,z) w.r.t. the remaining variables α and β the four types of triples as stated in part (iii) of the lemma appear as the maximizers. In case (i) the lemma yields the uniformly weighted design on the three points (0,0), (c_1^*,0), (0,c_2^*) as the unique D-optimal saturated design in the transformed model f_0(z)=exp{-1/2(z_1+z_2)} (1,z_1,z_2)^, z=(z_1,z_2)∈ Z=[ 0 , c_1]× [ 0 , c_2]. Let us denote this design by ζ^*. We ask whether ζ^* is D-optimal (in the class of all designs ζ). 
Some answers are given by the next lemma the first part of which is covered by a more general result in <cit.>, see the lemma on p. 723 of that paper. We present though an alternative (short) proof. Assume case (i) and the transformed model, and let ζ^* be the design with support points (0,0), (c_1^*,0), (0,c_2^*) and uniform weights 1/3. If c_j≥2 for j=1,2 then ζ^* is D-optimal. If c_j≤2 for j=1,2 and (1+exp(-c_1)) (1+exp(-c_2)) >2 then ζ^* is not D-optimal. Proof. Denoting C=[f_0(0,0) , f_0(c_1^*,0) , f_0(0,c_2^*)], the information matrix of ζ^* is given by (1/3)CC^. Denote g_0(z)=C^-1f_0(z), z∈ Z. The condition for D-optimality of ζ^* from the Kiefer-Wolfowitz equivalence theorem can be written as g_0^(z) g_0(z) ≤ 1 By straightforward calculation, g_0(z)=exp(-(z_1+z_2)/2) (1-(z_1/c_1^*) -(z_2/c_2^*), exp(c_1^*/2) z_1/c_1^*, exp(c_2^*/2) z_2/c_2^*)^ for all z=(z_1,z_2)∈ Z, and hence g_0^(z) g_0(z) = exp(-(z_1+z_2)) {(1-z_1/c_1^* - z_2/c_2^*)^2 + exp(c_1^*) (z_1/c_1^*)^2 + exp(c_2^*) (z_2/c_2^*)^2}. Consider the case that c_j≥2 for j=1,2, hence c_j^*=2 for j=1,2. To prove (<ref>), observing (<ref>), it suffices to show that for any t∈[ 0 , ∞] one has e^-t {(1-1/2z_1- 1/2z_2)^2 + e^2/4 (z_1^2 + z_2^2)} ≤ 1 H_t={z=(z_1,z_2)∈[ 0 , ∞)^2 : z_1+z_2=t}. On H_t, for any fixed t, the function on the l.h.s. of (<ref>) is convex and hence attains its maximum at an end-point of the line segment H_t, that is, at (t,0) or (0,t) whose common function value is h(t) = e^-t {(1-1/2t)^2 + e^2/4 t^2}. We have to show that h(t)≤1 for all t≥0. For the derivative of h one calculates h'(t)= -1+e^2/4 e^-t {t^2 - 2 3+e^2/1+e^2 t + 8/1+e^2}. It is easily seen that h'(t)≤0 for t≤ t_1, h'(t)≥0 for t_1≤ t≤ t_2, and h'(t)≤0 for t≥ t_2, where t_1=4/(1+e^2) and t_2=2 are the zeros of h'. From this and by h(0)=h(2)=1, one gets h(t)≤max{h(0), h(2)}=1 for all t≥0. Now consider the case that c_j≤2 for j=1,2, hence c_j^*=c_j for j=1,2. For the vertex z=(c_1,c_2) of Z one gets by (<ref>) g_0^(z) g_0(z) = exp(-(c_1+c_2)) {1 + exp(c_1) + exp(c_2)} = (1+exp(-c_1)) (1+exp(-c_2)) -1. So, if (1+exp(-c_1)) (1+exp(-c_2)) >2 then the Kiefer-Wolfowitz condition (<ref>) is violated at z=z and thus ζ^* is not D-optimal. §.§ Proofs of the lemmas Proof of Lemma <ref> By the uniform continuity of the function (x,θ)↦ f_θ(x) f_θ^(x) on the compact metric space X×Θ and by the assumption that lim_k→∞ d_Θ(ρ_k,τ_k)=0, one gets lim_k→∞(max_x∈ X‖ f_ρ_k(x) f_ρ_k^(x) - f_τ_k(x) f_τ_k^(x)‖) = 0. Let any ξ∈Ξ be given. Then ‖ M(ξ,ρ_k) - M(ξ,τ_k)‖ = ‖∑_x∈ supp(ξ)ξ(x)[f_ρ_k(x) f_ρ_k^(x) - f_τ_k(x) f_τ_k^(x)]‖ ≤ ∑_x∈ supp(ξ)ξ(x) ‖ f_ρ_k(x) f_ρ_k^(x) - f_τ_k(x) f_τ_k^(x)‖ ≤ max_x∈ supp(ξ)‖ f_ρ_k(x) f_ρ_k^(x) - f_τ_k(x) f_τ_k^(x)‖. Hence sup_ξ∈Ξ‖ M(ξ,ρ_k) - M(ξ,τ_k)‖ ≤ max_x∈ X‖ f_ρ_k(x) f_ρ_k^(x) - f_τ_k(x) f_τ_k^(x)‖, and together with (<ref>) the first result follows. We observe that the set of all information matrices, M={M(ξ,θ) : ξ∈Ξ, θ∈Θ}, is bounded since for all ξ∈Ξ and θ∈Θ ‖ M(ξ,θ)‖≤max_x∈ X‖ f_θ(x) f^_θ(x)‖=max_x∈ X‖ f_θ(x)‖^2≤γ^2 where γ:=max_(x,β)∈ X×Θ‖ f_β(x)‖<∞. So there is a compact subset A of the set of all nonnegative definite p× p matrices such that M⊆ A. Now the second statement of the lemma follows using the uniform continuity of Φ on the compact set A. Proof of Lemma <ref> By (<ref>) and η_k=1/p∑_j=1^pδ[x_n_k+j], one gets ξ_k = n_1/n_kξ_1 + p/n_k∑_j=1^k-1η_j, , and hence, for all θ∈Θ and all k≥2, M(ξ_k,θ)=n_1/n_kM(ξ_1,θ) + p/n_k∑_j=1^k-1 M(η_j,θ). Let η^* be any saturated design which maximizes M(η,θ) over all saturated designs η. 
Since η_k maximizes M(η,θ_k) over all saturated designs η, one has M(η_k,θ_k) ≥ M(η^*,θ_k) . For k→∞ the r.h.s. of the latter inequality converges to M(η^*,θ) by Lemma <ref>, and hence lim inf_k→∞ M(η_k,θ_k) ≥ M(η^*,θ). Again by Lemma <ref>, M(η_k,θ_k) - M(η_k,θ) → 0 as k→∞, hence lim inf_k→∞ M(η_k,θ_k) and lim inf_k→∞ M(η_k,θ) coincide, and thus lim inf_k→∞ M(η_k,θ) ≥ M(η^*,θ). On the other hand, M(η_k,θ) ≤ M(η^*,θ) for all k and hence lim sup_k→∞ M(η_k,θ) ≤ M(η^*,θ). It follows that lim_k→∞ M(η_k,θ) = M(η^*,θ). Denoting by d_ s*(θ) the common value of the determinants on M_ s*(θ), we have obtained that lim_k→∞ M(η_k,θ) = d_ s*(θ). Consider the compact subset of information matrices M_ s,u(θ) = {1/p∑_j=1^pf_^(z_j) f_(z_j) : z_1,…,z_p∈ X}, which constitutes the closure of the set of information matrices at of all saturated designs with uniform weights. For any given ε>0, consider the compact (or empty) subset of that set {M∈ M_ s,u(θ) : dist(M, M_ s*(θ))≥ε}. The maximum value of the determinant on the latter set (where max∅:=-∞) is strictly less than d_ s*(θ), and therefore dist(M(η_k,), M_ s*(θ))<ε for k large enough. We have thus shown that dist(M(η_k,θ), M_ s*(θ)) ⟶ 0 Trivially, (<ref>) remains true when enlarging the set M_ s*(θ) to its convex hull. Since the function B↦ dist(B, Conv M_ s*(θ)) is convex on ℝ^p× p, one gets for all k≥2, dist(1/k-1∑_j=1^k-1M(η_j,θ), Conv M_ s*(θ) ) ≤ 1/k-1∑_j=1^k-1 dist(M(η_j,θ), Conv M_ s*(θ) ), and the r.h.s. goes to zero as k→∞ by (<ref>). It follows that dist(1/k-1∑_j=1^k-1M(η_j,θ), Conv M_ s*(θ)) ⟶ 0 By (<ref>), observing n_k=n_1+(k-1)p, ‖ M(ξ_k,) - 1/k-1∑_j=1^k-1M(η_j,)‖ ⟶0 and together with (<ref>), dist(M(ξ_k,θ), Conv M_ s*(θ)) ⟶ 0 Let θ_k', k∈ℕ, be any sequence in Θ which converges to θ. Then, by (<ref>) and Lemma <ref>, dist(M(ξ_k,θ_k'), Conv M_ s*(θ)) ⟶ 0 If condition (SD)(θ) holds then Conv M_ s*(θ) = {M_ s*(θ)} and hence lim_k→∞M(ξ_k,θ_k') = M_ s*(θ). Proof of Lemma <ref> (i) Consider the real-valued function F on X^p×Θ defined by F(z_1,…,z_p;θ) = ([f_θ(z_1),…,f_θ(z_p)])^2. Clearly, F is continuous and hence, by compactness of X^p×Θ, uniformly continuous. Let c_0 = min_θ∈Θmax_z_1,…,z_p∈ XF(z_1,…,z_p;θ). In fact, continuity of θ⟼ F(z_1,…,z_p;θ) for every fixed (z_1,…,z_p)∈ X^p implies lower semi-continuity of the function θ⟼max_z_1,…,z_p∈ XF(z_1,…,z_p;θ), and hence this function attains its minimum on Θ. The function is strictly positive on Θ (by the basic assumption (b4), (i)), hence its minimum value is positive, i.e., c_0>0. By the uniform continuity of F there exists a Δ_0>0 such that max_θ∈Θ|F(z_1,…,z_p;θ)-F(z_1',…,z_p';θ)| <c_0 For any k≥1 and ℓ,m∈{1,…,p}, ℓ<m, consider the particular points (x_n_k+1,…,x_n_k+p) and (z_1',…,z_p') where the latter is given by z_j'=x_n_k+j z_ℓ'=x_n_k+m, and consider the parameter point θ=θ_k. Since the matrix [f_θ_k(z_1'),…,f_θ_k(z_p')] has two identical columns (the ℓ-th and the m-th columns) one has F(z_1'…,z_p';θ_k)=0. By (<ref>) F(x_n_k+1,…,x_n_k+p;θ_k)=max_z_1,…,z_p∈ XF(z_1,…,z_p;θ_k), and hence by (<ref>) F(x_n_k+1,…,x_n_k+p;θ_k)≥ c_0. Together with F(z_1',…,z_p';θ_k)=0 and d_ X(x_n_k+j,z_j')=0 for j≠ℓ, and d_ X(x_n_k+ℓ,z_ℓ')= d_ X(x_n_k+ℓ,x_n_k+m), ones gets from (<ref>) that d_ X(x_n_k+ℓ,x_n_k+m)≥Δ_0, which proves part (i) of the lemma. (ii) The function F from the proof of part (i) may be written as F(z_1,…,z_p;θ) = p^p (p^-1[f_θ(z_1),…,f_θ(z_p)] [f_θ(z_1),…,f_θ(z_p)]^), and the matrix on the r.h.s. 
under the determinant is equal to the information matrix M(ξ[z_1,…,z_p],θ) where ξ[z_1,…,z_p]:=1/p∑_j=1^pδ[z_j]. So (<ref>) rewrites as p^-pc_0 = min_θ∈Θmax_z_1,…,z_p∈ X M(ξ[z_1,…,z_p],θ). For k∈ℕ the design η_k=1/p∑_j=1^pδ[x_n_k+j] has the property M(η_k,θ_k) =max_z_1,…,z_p∈ X M(ξ[z_1,…,z_p],θ_k). Hence, by (<ref>), M(η_k,θ_k) ≥ p^-p c_0 Using the positive finite constant γ=max_(x,θ)∈ X×Θ‖ f_θ(x)‖ it is easily seen that for all designs ξ∈Ξ, all θ∈Θ, and all vectors a∈ℝ^p one has a^ M(ξ,θ) a ≤ γ^2 a^ a, where the identity a^ M(ξ,θ) a = ∑_x∈ supp(ξ)ξ(x) (a^ f_θ(x))^2 is useful. So M(ξ,θ)≤γ^2I_p in the Loewner semi-ordering, where I_p denotes the p× p identity matrix. Hence all the eigenvalues of M(ξ,θ) are less than or equal to γ^2. Together with (<ref>) it follows that λ_ min(M(η_k,θ_k)) ≥ p^-p c_0 γ^-2(p-1) For any vector a∈ℝ^p with ‖ a‖=1 one has a^ M(η_k,θ_k) a ≥λ_ min(M(η_k,θ_k)), and hence a^ M(η_k,θ_k) a ≥ p^-p c_0 γ^-2(p-1), and observing (<ref>) this yields max_1≤ j≤ p(a^ f_θ_k(x_n_k+j))^2 ≥ p^-p c_0 γ^-2(p-1). By (GLM) f_θ(x)=ψ(x,θ) f(x) for all (x,θ)∈ X×Θ where, in particular, ψ is a continuous positive function. So, ψ_ max:=max_(x,θ)∈ X×Θψ(x,θ) is a positive finite constant, and hence by (<ref>), defining ε_0:= p^-p/2 c_0^1/2 γ^-(p-1)ψ_ max^-1 one gets from (<ref>) max_1≤ j≤ p(a^ f(x_n_k+j))^2 ≥ ε_0^2 Clearly, this is the same as η_k({x∈ X : |a^ f(x)|≥ε_0}) ≥ 1/p From (<ref>) and ξ_k=n_1/n_kξ_1+p/n_k∑_j=1^k-1η_j for all k≥2 according to (<ref>), one gets for all a∈ℝ^p, ‖ a‖=1, and all k≥1, ξ_k({x∈ X : |a^ f(x)|≥ε_0}) ≥ p/n_kk-1/p = k-1/n_k, where the inequality is trivial for k=1. §.§ Proofs of the theorems Proof of Theorem <ref> We proceed basically as in the proof of Theorem 1 in <cit.>, with apppropriate modifications. Consider the random variables S_n_k(θ):=∑_i=1^n_k(Y_i-μ(X_i,θ))^2 D_n_k(θ,θ) := ∑_i=1^n_k(μ(X_i,θ)-μ(X_i,θ))^2. for all k∈ℕ and θ∈Θ. The least squares estimator θ_k^(LS) minimizes S_n_k(θ) over θ∈Θ. For ε>0 we denote C(θ,ε):= {θ∈Θ : d_Θ(θ,θ)≥ε}, where d_Θ denotes the distance function in Θ. The proof is divided into three steps. Step 1. Show that for all ε>0 with C(θ,ε)≠∅, | 1/k(inf_θ∈ C(θ,ε) S_n_k(θ) -S_n_k(θ)) - 1/kinf_θ∈ C(θ,ε) D_n_k(θ,θ) | 0 Step 2. Show that for all ε>0 with C(θ,ε)≠∅, lim inf_k→∞(1/k inf_θ∈ C(θ,ε)D_n_k(θ,θ)) > 0 Step 3. Conclude from the results of Step 1 and Step 2 that for all ε>0 with C(θ,ε)≠∅, inf_θ∈ C(θ,ε)S_n_k(θ) - S_n_k(θ) ∞ Then, by (<ref>) and by Lemma 1 of Wu <cit.> one gets θ_k^( LS)θ. Ad Step 1. As in <cit.> one gets, for all k∈ℕ, S_n_k(θ)-S_n_k(θ) = D_n_k(θ,θ) + 2W_n_k(θ,θ), W_n_k(θ,θ) := ∑_i=1^n_k(μ(X_i,θ)-μ(X_i,θ)) e_i. So, for all k≥1, | 1/k(inf_θ∈ C(θ,ε) S_n_k(θ) -S_n_k(θ)) - 1/kinf_θ∈ C(θ,ε) D_n_k(θ,θ) | ≤ 2/k sup_θ∈Θ|W_n_k(θ,θ)|. Introducing the function h(x,θ):=μ(x,θ)-μ(x,θ), (x,θ)∈ X×Θ, we may write W_n_k(θ,θ) = ∑_j=1^k∑_ℓ=1^p h(X_n_j-1+ℓ,θ) e_n_j-1+ℓ, where for simplicity of presentation we assume that n_1=p, and hence |W_n_k(θ,θ)| ≤ ∑_ℓ=1^p|∑_j=1^k h(X_n_j-1+ℓ,θ) e_n_j-1+ℓ|. For each fixed ℓ∈{1,…,p} the sequences of random variables e_n_j-1+ℓ, j∈ℕ, and X_n_j-1+ℓ, j∈ℕ, satisfy the assumptions of Lemma A.1 in <cit.>, and part (iii) of that lemma yields 1/ksup_θ∈Θ|∑_j=1^k h(X_n_j-1+ℓ,θ) e_n_j-1+ℓ| 0 By (<ref>), (<ref>) and (<ref>) the result of Step 1 follows. Ad Step 2 in the case that condition (SI) holds. Consider any path x_i,y_i, i∈ℕ, and θ_k, k∈ℕ of the sequences X_i,Y_i, i∈ℕ, and θ_k, k∈ℕ. Choose Δ_0>0 according to Lemma <ref> (i). 
Consider the subset of X^p given by D = {(z_1,…,z_p)∈ X^p : d_ X(z_ℓ,z_m)≥Δ_0, 1≤ℓ<m≤ p}, which is compact. Let ε>0 be given such that C(θ,ε)≠∅. Consider the (continuous) function on D× C(θ,ε) given by (z_1,…,z_p,θ)⟼∑_ℓ=1^p(μ(z_ℓ,θ)-μ(z_ℓ,θ))^2. By condition (SI) this function is strictly positive on D× C(θ,ε), and by compactness of this set the infimum c(ε) of this function over D× C(θ,ε) is attained and hence c(ε)>0. It follows that ∑_ℓ=1^p(μ(x_n_k+ℓ,θ)-μ(x_n_k+ℓ,θ))^2 ≥ c(ε) From this one gets for all k≥2 and all θ∈ C(θ,ε), 1/kD_n_k(θ,θ) ≥1/k∑_i=n_1+1^n_k(μ(x_i,θ)-μ(x_i,θ))^2 =1/k∑_j=1^k-1∑_ℓ=1^p (μ(x_n_j+ℓ,θ)-μ(x_n_j+ℓ,θ))^2 ≥(k-1)c(ε)/k ≥ c(ε)/2. Hence the result of Step 2 follows. Ad Step 2 in the case that condition (GLM^*) holds. Again, consider any path x_i,y_i, i∈ℕ, and θ_k, k∈ℕ of the sequences X_i,Y_i, i∈ℕ, and θ_k, k∈ℕ. Choose a compact subinterval J⊆ I such that f^(x) θ∈ J for all (x,θ)∈Θ. Then b:=min_u∈ JG'(u) is positive and by the mean value theorem |G(u)-G(v)|≥ b|u-v| for all u,v∈ J. Hence for all i∈ℕ and θ∈Θ, |μ(x_i,θ)-μ(x_i,θ)|= |G(f^(x_i) θ)-G(f^(x_i) θ)| ≥ b|f^(x_i) (θ-θ)|. So, for all k≥1 and θ∈ C(θ,ε), denoting a_θ=(θ-θ)/‖θ-θ‖, D_n_k(θ,θ) = ∑_i=1^n_k(μ(x_i,θ)-μ(x_i,θ))^2 ≥ b^2ε^2∑_i=1^n_k(f^(x_i) a_θ)^2 = b^2ε^2n_k∫_ X(f^(x) a_θ)^2 dξ_k(x). Choose ε_0>0 according to Lemma <ref> (ii). Then for all k≥1 and θ∈ C(θ,ε), ∫_ X(f^(x) a_θ)^2 dξ_k(x) ≥∫_{x∈ X : |f^(x) a_θ|≥ε_0}(f^(x) a_θ)^2 dξ_k(x) ≥ ε_0^2 ξ_k({x∈ X : |f^(x) a_θ|≥ε_0}) ≥ ε_0^2(k-1)/n_k. Together with (<ref>) this yields 1/k inf_θ∈ C(θ,ε)D_n_k(θ,θ) ≥ 1/kb^2ε^2n_kε_0^2k-1/n_k. For all k≥2 the r.h.s. of (<ref>) is greater than or equal to b^2ε^2ε_0^2/2, and the result of Step 2 follows. Ad Step 3. Obviously, this follows from the results of steps 1 and 2. Proof of Theorem <ref> We will appropriately modify the arguments in the proof of Theorem 2 in <cit.>. For simplicity of presentation, we assume n_1=p for the starting design of the algorithm. Choose a compact ball B centered at θ such that B⊆ int(Θ). By the strong consistency of , k∈ℕ, there is a random variable K with values in ℕ∪{∞} such that K<∞ a.s. and θ_k^ (LS)∈B on {K≤ k} for all k∈ℕ. Along the lines in <cit.>, by equating the gradient w.r.t. θ of the sum of squares at θ= to zero, one gets for all k∈ℕ, ∑_i=1^n_ke_i∇μ(X_i,) = ∑_i=1^n_k[μ(X_i,)-μ(X_i,)] ∇μ(X_i,) - ∑_i=1^n_ke_i [∇μ(X_i,)-∇μ(X_i,)]. Concerning the asymtotic behavior of each of the three sums in (<ref>), we show the following. n_k^-1/2σ^-1() ^-1/2()∑_i=1^n_ke_i∇μ(X_i,) N(0,I_p) n_k^-1/2∑_i=1^n_k[μ(X_i,)-μ(X_i,)] ∇μ(X_i,) = [M(ξ_k,) + A_k] [n_k^1/2(-)], n_k^-1/2∑_i=1^n_ke_i[∇μ(X_i,)-∇μ(X_i,)] = B_k [n_k^1/2(-)], Ad (<ref>). ∑_i=1^n_ke_i∇μ(X_i,) = ∑_j=1^k∑_ℓ=1^pe_n_j-1+ℓ∇μ(X_n_j-1+ℓ,) = ∑_j=1^kG(X_j) e_j, where G denotes the ℝ^p× p-valued function on X^p given by G(z)= [∇μ(z_1,) , … , ∇μ(z_p,)] for all z=(z_1,…,z_p)∈ X^p. Let any v∈ℝ^p with ‖ v‖=1 be given. Then, n_k^-1/2σ^-1() v^^-1/2()∑_i=1^n_ke_i∇μ(X_i,) = k^-1/2∑_j=1^ke_j, e_j=Z_j^e_j Z_j = p^-1/2σ^-1() G^(X_j) ^-1/2() v. Note that n_k=kp has been used. The sequence of p-dimensional random variables Z_j, j∈ℕ, is uniformly bounded, that is, sup_j∈ℕ‖Z_j‖ ≤ c for some finite constant c, and Z_j is F_j-1-measurable. From this it is easily seen that ∑_j=1^ke_j, k∈ℕ, together with F_k, k∈ℕ∪{0}, constitutes a martingale. According to Corollary 3.1 of Hall and Heyde <cit.>, the distributional convergence k^-1/2∑_j=1^ke_j N(0,1) is ensured by the following conditions (α) and (β). 
(α) 1/k∑_j=1^k E(e_j^2| F_j-1) 1, (β) 1/k∑_j=1^k E(e_j^2(|e_j|>ε√(k)) | F_j-1) 0. To verify (α), we write e_j^2=Z_j^e_je_j^Z_j, hence E(e_j^2| F_j-1) = Z_j^ E(e_je_j^| F_j-1) Z_j. By (AH) and the uniform boundedness of the sequence Z_j, j∈ℕ, max_1≤ j≤ k|Z_j^ E(e_je_j^| F_j-1) Z_j - σ^2() Z_j^Z_j| 0 and hence |1/k∑_j=1^kZ_j^ E(e_je_j^| F_j-1) Z_j - σ^2()/k∑_j=1^kZ_j^Z_j| 0. By (<ref>), σ^2()/k∑_j=1^kZ_j^Z_j = v^^-1/2(1/kp∑_j=1^kG(X_j) G^(X_j)) ^-1/2() v. Since G(X_j) G^(X_j)= ∑_ℓ=1^p∇μ(X_n_k-1+ℓ,) ∇^μ(X_n_k-1+ℓ,), and by (b5), 1/kp∑_j=1^kG(X_j) G^(X_j) = M(ξ_k,). By Corollary <ref>, M(ξ_k,)() and hence, together with (<ref>), (<ref>), (<ref>), and (<ref>), condition (α) follows. To verify (β), we observe that e_j^2=(Z_j^e_j)^2≤ c^2‖e_j‖^2 and hence E(e_j^2(|e_j|>ε√(k)) | F_j-1) ≤ c^2 E(‖e_j‖^2(‖e_j‖>(ε/c)√(k)) | F_j-1). So (β) follows from (L). We have thus shown that k^-1/2∑_j=1^ke_j N(0,1), and together with (<ref>) and the Cramér-Wold device, the convergence (<ref>) follows. Ad (<ref>). As in <cit.>, one obtains n_k^-1/2∑_i=1^n_k[μ(X_i,)-μ(X_i,)] ∇μ(X_i,) = [M(ξ_k,)+A_k] [n_k^1/2(-)], A_k = 1/n_k∑_i=1^n_k∇μ(X_i,) [ ∇μ(X_i,θ_i,k)-∇μ(X_i,)]^, and where θ_i,k, 1≤ i≤ n_k, are appropriate random points on the line segment joining and . Along the lines in <cit.>, p. 11, one concludes A_k 0. Ad (<ref>). Let any v∈ℝ^p be given. As in <cit.> one calculates v^(n_k^-1/2∑_i=1^n_ke_i[∇μ(X_i,)-∇μ(X_i,)]) = b_k^(v) [n_k^1/2(-)], b_k(v) = 1/n_k∑_i=1^n_ke_i∇^2μ(X_i,θ_i,k) v, with appropriate random points θ_i,k, 1≤ i≤ n_k, on the line segment joining and , and with the Hessians ∇^2μ(x,θ), (x,θ)∈ X× int(Θ), according to assumption (b5). We decompose b_k(v) = b_k^(1)(v) + b_k^(2)(v), b_k^(1)(v) = 1/n_k∑_i=1^n_ke_i∇^2μ(X_i,) v b_k^(2)(v) = 1/n_k∑_i=1^n_ke_i [∇^2μ(X_i,θ_i,k) v - ∇^2μ(X_i,) v]. Consider b_k^(1)(v), which can be written as b_k^(1)(v) = 1/p∑_ℓ=1^p(1/k∑_j=1^ke_n_j-1+ℓ∇^2μ(X_n_j-1+ℓ,) v). For fixed ℓ∈{1,…,p}, each component of the inner sum on the r.h.s. of (<ref>) satisfies the assumptions of Lemma A.1 in <cit.> and hence, by part (iii) of that lemma, converges almost surely to zero (as k→∞). By (<ref>) we conclude that b_k^(1)(v)0. Consider b_k^(2)(v). By the uniform continuity of (x,θ)↦∇^2μ(x,θ) v on X×B according to (b5), and by max_1≤ i≤ n_k‖θ_i,k-‖ ≤ ‖-‖ 0, one gets max_1≤ i≤ n_k‖∇^2μ(X_i,θ_i,k) v - ∇^2μ(X_i,) v‖ 0. Since ‖ b_k^(2)(v)‖ ≤ (max_1≤ i≤ n_k‖∇^2μ(X_i,θ_i,k) v - ∇^2μ(X_i,) v‖) 1/n_k∑_i=1^n_k|e_i|, the concergence b_k^(2)(v) 0 will follow from lim sup_k→∞1/n_k∑_i=1^n_k|e_i| <∞ a.s. In fact, 1/n_k∑_i=1^n_k|e_i| = 1/p∑_ℓ=1^p(1/k∑_j=1^k|e_n_j-1+ℓ|), and for each fixed ℓ∈{1,…,p} an application of Lemma A.1, part (i) in <cit.> yields lim sup_k→∞1/k∑_j=1^k|e_n_j-1+ℓ| <∞ a.s., and hence also lim sup_k→∞1/n_k∑_i=1^n_k|e_i| <∞ a.s. We have thus shown that b_k(v) 0. Specializing v to the elementary unit vectors v^(ℓ) of ℝ^p, 1≤ℓ≤ p, and forming the p× p matrix B_k with rows b_k^(v^(1)),…,b_k^(v^(ℓ)) one has B_k 0, and (<ref>) follows from (<ref>). From (<ref>), (<ref>), (<ref>) and (<ref>), σ^-1() ^-1/2() [M(ξ_k,) +A_k-B_k] [√(kp) (-)] N(0,I_p), and A_k0, B_k0. By Corollary <ref>, M(ξ_k,)(). Using standard properties of convergence in distribution, the result follows. Proof of Theorem <ref> The error variables in model (a1), (a2'), (a3') are given by e_i = Y_i - G(f^(X_i) ), i∈ℕ, and we consider the error vectors e_k = (e_n_k-1+1,…,e_n_k)^, k∈ℕ. 
From (a2') and from general properties of an exponential family one concludes in particular, that the fourth conditional moments of the error vectors are bounded by some finite constant C_4, E(‖e_k‖^4| F_k-1) ≤ C_4 Along the lines of the proof of Theorem 3.3 in <cit.> one obtains, for all θ∈Θ and all k∈ℕ, L_n_k(θ)-L_n_k(θ) ≥ ∑_i=1^n_k(τ_i-τ_i(θ))e_i + 1/2β_0β_1^2∑_i=1^n[f^(X_i) θ - f^(X_i) θ ]^2 with some positive real constants β_0 and β_1. According to Wu <cit.>, Lemma 1, for strong consistency of θ_k^ (ML) it is sufficient to show that, for every ε>0 such that the parameter subset C(θ,ε)={θ∈Θ : ‖θ-θ‖≥ε} is nonempty, one has lim inf_k→∞(L_n_k(θ)-sup_θ∈ C(θ,δ) L_n_k(θ)) >0 In fact, the lim inf turns out to be equal to infinity almost surely, since we show that lim inf_k→∞1/k(L_n_k(θ)-sup_θ∈ C(θ,ε) L_n_k(θ)) >0 From (<ref>) one concludes 1/k(L_n_k(θ)-sup_θ∈ C(θ,ε) L_n_k(θ)) ≥ -1/ksup_θ∈Θ|∑_i=1^n_k(τ_i-τ_i(θ))e_i| + 1/2β_0β_1^2 1/kinf_θ∈ C(θ,ε)∑_i=1^n_k[f^(X_i) θ - f^(X_i) θ ]^2. Introduce the function h(x,θ) := (b')^-1(f^(x) θ) - (b')^-1(f^(x) θ), (x,θ)∈ X×Θ. From the definition of τ_i and τ_i(θ) one has τ_i-τ_i(θ)=h(X_i,θ) for all i∈ℕ and θ∈Θ. For convenience, we now assume n_1=p. Then |∑_i=1^n_kh(X_i,θ) e_i| = |∑_ℓ=1^p∑_j=1^kh(X_n_j-1+ℓ,θ) e_n_j-1+ℓ| ≤ ∑_ℓ=1^p|∑_j=1^kh(X_n_j-1+ℓ,θ) e_n_j-1+ℓ|. By (a1), (a2') and an application of Lemma A.1, part (iii) of <cit.>, one gets for each ℓ=1,…,p, 1/ksup_θ∈Θ|∑_j=1^kh(X_n_j-1+ℓ,θ) e_n_j-1+ℓ| 0 and hence by (<ref>) 1/ksup_θ∈Θ|∑_i=1^n_kh(X_i,θ) e_i| ≤ ∑_ℓ=1^p ( 1/ksup_θ∈Θ|∑_j=1^kh(X_n_j-1+ℓ,θ) e_n_j-1+ℓ| ) 0. In view of (<ref>) and (<ref>), it remains to show that lim inf_k→∞( 1/kinf_θ∈ C(θ,ε)∑_i=1^n_k[f^(X_i) θ - f^(X_i) θ ]^2) > 0 To this end, we consider an arbitrary path of the adaptive process and, in particular, a path x_i, i∈ℕ, of the sequence X_i, i∈ℕ. Since ∫_ X[f^(x) θ - f^(x) θ ]^2 dξ_k(x) = 1/n_k∑_i=1^n_k[f^(x_i) θ - f^(x_i) θ ]^2, (<ref>) will follow from lim inf_k→∞(inf_θ∈ C(θ,ε)∫_ X[f^(x) (θ - θ) ]^2 dξ_k(x) ) >0. In fact, (<ref>) can be seen as follows. By Lemma <ref>, part (ii), there is an ε_0>0 such that for all k∈ℕ and all a∈ℝ^p with ‖ a‖=1 one has ξ_k({x∈ X : |f^(x) a|≥ε_0}) ≥ (k-1)/n_k. In particular, for any θ∈ C(θ,ε) we take a=a_θ=(θ-)/‖θ-‖, and from (<ref>) together with ‖θ-‖≥ε we get ξ_k({x∈ X : |f^(x) (θ-)|≥ε_0ε}) ≥ (k-1)/n_k for all k∈ℕ and all θ∈ C(θ,ε). Hence, using the obvious inequality ∫_ X[f^(x) (θ - θ) ]^2 dξ_k(x) ≥ (ε_0ε)^2 ξ_k({x∈ X : |f^(x) (θ-)|≥ε_0ε}), we obtain inf_θ∈ C(θ,ε)∫_ X[f^(x) (θ - θ) ]^2 dξ_k(x) ≥ (ε_0ε)^2(k-1)/n_k. Since lim_k→∞(k-1)/n_k =1/p, it follows that the lim inf in (<ref>) is greater than or equal to (ε_0ε)^2/p >0. Proof of Theorem <ref> We will appropriately modify the arguments in the proof of Theorem 3.3 in <cit.>. For simplicity of presentation, we assume n_1=p for the starting design of the algorithm. Choose a compact ball B centered at θ such that B⊆ int(Θ). By the strong consistency of according to Theorem <ref>, there is a random variable K with values in ℕ∪{∞} such that K<∞ a.s. and θ_k^ (ML)∈B on {K≤ k} for all k∈ℕ. Along the lines of <cit.>, p. 719, one obtains for the gradients (w.r.t. 
θ) of the log-likelihood, S_n_k(θ)=∇ L_n_k(θ), where θ∈B, S_n_k(θ) = ∑_i=1^n_k(Y_i-G(f^(X_i) θ)) H(f^(X_i) θ) f(X_i), H(u) = G'(u)/b”((b')^-1(G(u))) and one concludes, 1/√(n_k) ^-1/2() S_n_k(θ) = ^-1/2() [M(ξ_k,θ)+ 1/n_kD_k - 1/n_kB_k] [√(n_k)(θ_k^ (ML)-θ)], D_k = ∑_i=1^n_k[φ^2(f^(X_i) θ_i,k)- φ^2(f^(X_i) θ)] f(X_i) f^(X_i) B_k = ∑_i=1^n_k(Y_i-G(f^(X_i) θ_i,n)) H'(f^(X_i) θ_i,k) f(X_i) f^(X_i), and where θ_i,k, 1≤ i≤ n_k, are appropriate random points on the line segment joining and . The asymptotics (as k→∞) of the left-hand side of (<ref>) and of the random matrices D_k and B_k will shown to be as follows. 1/√(n_k) ^-1/2() S_n_k(θ) N(0,I_p); 1/n_kD_k 0 1/n_kB_k 0. Ad (<ref>): According to the Cramér-Wold device choose any v∈ℝ^p with ‖ v‖=1. By (<ref>) with θ=θ and Y_i=G(f^(X_i) θ) +e_i according to (<ref>), we get 1/√(n_k) v^^-1/2() S_n(θ) = 1/√(n_k) v^^-1/2()∑_i=1^n_k e_iH(f^(X_i) θ) f(X_i) = 1/√(kp) v^^-1/2()∑_ℓ=1^p∑_j=1^kG(X_j) e_j, where G : X^p⟶ℝ^p× p is defined by G(z) := [H(f^(z_1) ) f(z_1) , … , H(f^(z_p) ) f(z_p)] z=(z_1,…,z_p)∈ X^p, and the error vectors e_j, j∈ℕ, are given by (<ref>). Introducing the sequence Z_j, j∈ℕ, of ℝ^p-valued random variables, Z_j = p^-1/2 G^(X_j) ^-1/2() v, we can write 1/√(n_k) v^^-1/2() S_n(θ) = 1/√(k) ∑_j=1^kZ_j^e_j. Abbreviate e_j=Z_j^e_j. Since Z_j is F_j-1-measurable for all j∈ℕ, and the sequence Z_j is uniformly bounded, that is, ‖Z_j‖≤ c for all j∈ℕ for some finite constant c, it follows that the sequence of partial sums ∑_j=1^ke_k, k∈ℕ, is a martingale w.r.t. the filtration F_k, k∈ℕ∪{0}. We will verify the following two conditions (1) and (2). (1) 1/k∑_j=1^k E(e_j^2| F_j-1) 1; (2) 1/k∑_j=1^k E(e_j^2(|e_j|>√(k) ε)| F_j-1) 0 for all ε>0. Then, by Corollary 3.1 (p. 58) of Hall and Heyde <cit.>, the convergence 1/√(k) ∑_j=1^ke_j N(0,1) and thus (<ref>) will follow. To verify condition (1), inserting e_j^2=(Z_j^e_j)^2=Z_j^e_je_j^Z_j, one gets E(e_j^2| F_j-1)=Z_j^ E(e_je_j^| F_j-1) Z_j, and hence 1/k∑_j=1^k E(e_k^2| F_k-1) = 1/k∑_j=1^kZ_j^ E(e_je_j^| F_j-1) Z_j. According to (<ref>) and (a2'), E(e_je_j^| F_j-1) = V(X_j), where V(z) := diag(b”((b')^-1(f^(z_ℓ) )) : 1≤ℓ≤ p) for z=(z_1,…,z_p)∈ X^p, and where diag(a_ℓ : 1≤ℓ≤ p), for real numbers a_1,…, a_p, stands for the diagonal p× p matrix with diagonal entries a_1,…, a_p. Inserting according to (<ref>), one gets 1/k∑_j=1^k E(e_j^2| F_j-1) = v^^-1/2()(1/kp∑_j=1^kG(X_j) V(X_j) G^(X_j)) ^-1/2() v. By the definitions of G(z) and V(z), where z=(z_1,…,z_p)∈ X^p, and by (<ref>) and (a3'), one gets G(z) V(z) G^(z) = ∑_ℓ=1^pφ^2(f^(z_ℓ) ) f(z_ℓ) f^(z_ℓ). It follows that 1/kp∑_j=1^kG(X_j) V(X_j) G^(X_j) = 1/n_k∑_i=1^n_kφ^2(f^(X_i) ) f(X_i) f^(X_i) = M(ξ_k,). By Corollary <ref>, M(ξ_k,)() which entails ^-1/2() M(ξ_k,) ^-1/2() I_p, and hence 1/k∑_j=1^k E(e_j^2| F_j-1) = v^^-1/2() M(ξ_k,) ^-1/2() v 1. To verify condition (2), recall (<ref>) showing boundedness of the fourth conditional moments of the error vectors e_j, j∈ℕ, by a finite constant C_4, and recall also the uniform boundedness of the random vectors Z_j, j∈ℕ, by a finite constant c. Using the inequalities e_j^2(|e_j|>√(k)ε)≤1/ε^2ke_j^4 and e_j^4=(Z^_je_j)^4≤‖Z_j‖^4‖e_j‖^4, one gets 1/k∑_j=1^k E(e_j^2(|e_j|>√(k) ε)| F_j-1) ≤ 1/ε^2k^2∑_j=1^k‖Z_j‖^4 E(‖e_j‖^4| F_j-1) ≤c^4C_4/ε^2k → 0 Ad (<ref>): The first convergence D_k/n_k0 is shown as in <cit.>, pp. 720-721. For the second convergence the arguments in <cit.>, p. 721, are modified as follows. 
We split B_k, B_k = B_k^(1) + B_k^(2) + B_k^(3), B_k^(1) = ∑_i=1^n_k[G(f^(X_i) θ) -G(f^(X_i) θ_i,k)] H'(f^(X_i) θ_i,k) f(X_i) f^(X_i), B_k^(2) = ∑_i=1^n_k e_i H'(f^(X_i) θ) f(X_i) f^(X_i), B_k^(3) = ∑_i=1^n_k e_i [H'(f^(X_i) θ_i,k)-H'(f^(X_i) θ)] f(X_i) f^(X_i). The convergence 1/n_kB_k^(1) 0 is obtained as in <cit.>, p. 721. Concerning B_k^(2), fix any pair (r,s), where 1≤ r,s≤ p. Consider the (r,s)-th entry of 1/n_kB_k^(2), which can be written as 1/p∑_ℓ=1^p(1/k∑_j=1^kZ_j,ℓ e_n_j-1+ℓ), Z_j,ℓ := H'(f^(X_n_j-1+ℓ) θ) f_r(X_n_j-1+ℓ) f_s(X_n_j-1+ℓ), j∈ℕ, 1≤ℓ≤ p. For each fixed ℓ∈{1,…,p} an application of Lemma A.1, part (ii), of <cit.> yields 1/k∑_j=1^kZ_j,ℓ e_n_j-1+ℓ0. Hence each entry of 1/n_kB_k^(2) converges to zero almost surely, that is, 1/n_kB_k^(2)0. Concerning B_k^(3), as in <cit.>, p. 721, it is easily seen that the absolute value of each entry of 1/n_kB_k^(3) is bounded above by γ_0^2 (max_1≤ i≤ n_k|H'(f^(X_i) θ_i,k)-H'(f^(X_i) θ)|) 1/n_k∑_i=1^n_k|e_i|, and max_1≤ i≤ n_k|H'(f^(X_i) θ_i,n)-H'(f^(X_i) θ)| 0. Writing 1/n_k∑_i=1^n_k|e_i| = 1/p∑_ℓ=1^p(1/k∑_j=1^k|e_n_j-1+ℓ|), an application of Lemma A.1, part (i), of <cit.> yields for each fixed ℓ∈{1,…,p} that lim sup_k→∞1/k∑_j=1^k|e_n_j-1+ℓ| <∞ Hence, lim sup_k→∞1/n_k∑_i=1^n_k|e_i| <∞ a.s., and thus each entry of 1/n_kB_k^(3) converges to zero almost surely, that is, 1/n_kB_k^(3)0. From (<ref>), (<ref>), (<ref>), together with M(ξ_k,)() and (K≤ k)1, one concludes that ^1/2() [√(kp) (-)] N(0,I_p), √(kp) (-) N(0,^-1()). 22 [AW-2015]Atkinson-Woods Atkinson, A.C.; Woods, D.C. (2015). Designs for generalized linear models. In: Dean, A.; Morris, M.; Stufken, J.; Bingham, D. (eds). Handbook of Design and Analysis of Experiments. CRC Press, 2015, pp. 471-514. [BW-1988]Bates-Watts Bates, D.M.; Watts, D.G. (1988). Nonlinear Regression Analysis and Its Applications. Wiley, New York. [BDZ-2006]Biedermann-Dette-Zhu Biedermann, S.; Dette, H.; Zhu, W. (2006). Optimal designs for dose-response models with restricted design spaces. Journal of the American Statistical Association, 101, 747-759. [BL-1959]Box-Lucas Box, G.E.P.; Lucas, H,L. (1959). Design of experiments in non-linear situations. Biometrika 46, 77-90. [CHY-1999]Chen-Hu-Ying Chen, K.; Hu, I.; Ying, Z. (1999). Strong consistency of maximum quasi-likelihood estimators in generalized linear models with fixed and adaptive designs. Ann. Statist 27, 1155-1163. [FK-1985]Fahrmeir-Kaufmann Fahrmeir, L.; Kaufmann, H. (1985). Consistency and asymptotic normality of the maximum likelihood estimator in generalized linear models. Ann. Statist. 13, 342-368. [FTW-1992]Ford-Torsney-Wu Ford, I.; Torsney, B.; Wu, C.F.J. (1992). The use of a canonical form in the construction of locally optimal designs for non-linear problems. J. R. Statist. Soc. B, 569-583. [FGS-2021]FF-NG-RS-18 Freise, F.; Gaffke, N.; Schwabe, R. (2021). The adaptive Wynn-algorithm in generalized linear models with univariate response. Ann. Statist. 49, 702-722. [FGS-2021a]FF-NG-RS-19 Freise, F.; Gaffke, N.; Schwabe, R. (2021). Convergence of least squares estimators in the adaptive Wynn algorithm for some classes of nonlinear regression models. Metrika 84, 851-874. [HH-1980]Hall-Heyde Hall, P.; Heyde, C.C. (1980). Martingale Limit Theory and Its Application. Academic Press, New York. [L-1994]Lai Lai, T.L. (1994). Asymptotic properties of nonlinear least squares estimates in stochastic regression models. Ann. Statist 22, 1917-1930. [LW-1982]Lai-Wei Lai, T.L.; Wei, C.Z. (1982). 
Least squares estimates in stochastic regression models with applications to identification and control of dynamic systems. Ann. Statist 10, 154-166. [MP-1992]Mueller-Poetscher Müller, W.G.; Pötscher, B.M. (1992). Batch sequential design for a nonlinear estimation problem. In: Model Oriented Data-Analysis. V.V. Fedorov, T.N. Vuchkov (Eds.) Physika-Verlag, Heidelberg. pp. 77-87. [P-2010]Pronzato Pronzato, L. (2010). One-step ahead adaptive D-optimal design on a finite design space is asymptotically optimal. Metrika 71, 219-238. [RWLE-2009]Russell-et-al Russell, K.G.; Woods, D.C.; Lewis, S.M.; Eccleston, J.A. (2009). D-optimal designs for Poisson regression models. Statistica Sinica 19, 721-730. [Wu-1981]Wu Wu, C-F. (1981). Asymptotic theory of nonlinear least-squares estimation. Ann. Statist. 9, 501-513. [W-1970]Wynn Wynn, H. (1970). The sequential generation of D-optimum experimental designs. Ann. Math. Statist. 5, 1655-1664.
http://arxiv.org/abs/2307.02851v1
20230706083253
Generalized Lotka-Volterra Systems with Time Correlated Stochastic Interactions
[ "Samir Suweis", "Francesco Ferraro", "Sandro Azaele", "Amos Maritan" ]
q-bio.PE
[ "q-bio.PE", "cond-mat.stat-mech", "physics.bio-ph" ]
APS/123-QED ^1Laboratory of Interdisciplinary Physics, Department of Physics and Astronomy “G. Galilei”, University of Padova, Padova, Italy ^2 INFN, Sezione di Padova, via Marzolo 8, Padova, Italy - 35131 ^3 Padova Neuroscience Center, University of Padova, Padova, Italy ^4 DARE Foundation, - Digital Lifelong Prevention, Bologna, Italy ^5National Biodiversity Future Center, Piazza Marina 61, 90133 Palermo, Italy The dynamics of species communities are typically modelled considering fixed parameters for species interactions. The problem of over-parameterization that ensues when considering large communities has been overcome by sampling species interactions from a probability distribution. However, species interactions are not fixed in time, but they can change on a time scale comparable to population dynamics. Here we investigate the impact of time-dependent species interactions using the generalized Lotka-Volterra model, which serves as a paradigmatic theoretical framework in several scientific fields. In this work we model species interactions as stochastic colored noise. Assuming a large number of species and a steady state, we obtain analytical predictions for the species abundance distribution, which matches well empirical observations. In particular, our results suggest the absence of extinctions, unlike scenarios with fixed species interactions. Generalized Lotka-Volterra Systems with Time Correlated Stochastic Interactions Amos Maritan^1,2,5 August 1, 2023 ================================================================================ § INTRODUCTION In the last years, the approach of statistical physics has been decisively contributing to our understanding of ecological processes by providing powerful theoretical tools and innovative steps towards the comprehension and the synthesis of broad empirical evidence <cit.>, especially in microbial ecology <cit.>. In fact, the advent of next-generation sequencing techniques is generating an increasing volume of ecologically relevant data, characterizing several microbial communities in different environments and involving a large number of species <cit.>. For this reason, the theoretical approach has shifted from the traditional dynamical systems theory, which is effectively used for a small number of species <cit.>, to statistical physics-based methods that are better suited for large systems <cit.>. In particular, several recent works have proposed to model species interactions through Generalized Lotka-Volterra equations with quenched random disorder (QGLV), and this approach has yielded a number of interesting results <cit.>. The phase diagram of these models describe a system which transitions from a single stationary state to an unbounded one, passing through a multiple-attractors state as the variance of the interactions between species exceeds a critical value <cit.>. Furthermore, with the addition of demographic noise to the QGLV new phases are also found such as a Gardner phase <cit.>. The QGLV does not typically display properties observed by real ecosystems. It suffers from the stability-diversity paradox <cit.>, which for QGLV emerges as a result of a dynamical process where the system reduces the number of coexisting species so to remain marginally stable <cit.>. 
Moreover, the species abundance distribution (SAD), as obtained in the limit of a large number of species within the dynamical mean field theory (DMFT) or cavity method, is a truncated Gaussian <cit.>, very different from the heavy tail SAD observed in empirical microbial <cit.> or in forest communities <cit.>. Another limitation of the QGLV model is its underlying assumption that species interactions remain constant over time. However, ecological systems in reality are characterized by temporal fluctuations in species interactions, influenced by variations in environmental conditions, resource availability, and other factors which operate on a timescale that is comparable to the population dynamics <cit.>. In the present study, we explicitly consider time-dependent species interactions as an alternative approach. Specifically, we adopt the hypothesis that these interactions can be modeled as stochastic colored noises within the framework of the GLV model, which we will call annealed GLV (AGLV). Strikingly, the introduction of temporal stochastic fluctuations in the strengths of species interactions yields results that are remarkably rich and ecologically relevant. By employing the DMFT in the limit of a large number of species, we analytically determine the stationary state, which is in excellent agreement with numerical simulations of the AGLV. In the case of white noise limit, the resulting SAD is a Gamma distribution, in agreement with observed distributions in plant and microbial communities <cit.>. Moreover, this also justifies a recently introduced phenomenological model <cit.> that has been shown to reproduce several macro-ecological laws, thus providing further validation and support for our approach. Eventually, we study the AGLV phase diagram which reveals two distinct phases: one where the system reaches a stationary state and all species coexist; another one where unbounded growth is observed. Let us consider x_i(t), the population at time t of the species i. Then the dynamics of the AGLV system with colored noise for S interacting species are given by ẋ_i(t)=r_i x_i(t)[1-x_i(t)/K_i+∑_j≠ iα_ij(t)x_j(t) +h_i(t)], with i=1,...,S, and where α_ij(t)=μ/S+σ z_ij(t)/√(S) for i≠ j and {z_ij(t):t>0} are independent Gaussian random variables with z_ij(t)=0, z_ij(t)z_ij(t')=P(Δ t|τ)=1+2τ/τ_0/2τe^-Δ t/τ, where Δ t=|t-t'|; h_i(t) is a possible time dependent external field. For simplicity, we set τ_0, r_i, K_i=1 for i=1,...,S and we work with dimensionless variables/parameters. From this general annealed formulation with colored noise, the limit τ→ 0 corresponds to the white noise (AWN) dynamics. In Fig. 1 we show the effect of time correlated noise in the species abundances evolution. Our proposed model presents a distinct characteristic, where species populations undergo recurring quasi-cycles of both high and low abundances, whose average frequency depends on the value of τ. This cyclic behavior is instrumental in promoting the coexistence of multiple species within the ecosystem (as also noted in <cit.>), and it is present for all ranges of τ, including the limit τ→ 0. The DMFT for the general AGLV eq. (<ref>) is given by (see Supplementary Methods) ẋ(t)=x(t)[1-x(t)+μ M(t)+ση(t)+h(t)], where M(t)=E(x(t)) and in the following we set h(t)=0. The self-consistent Gaussian noise η(t) is such that E(η(t))=0 and E(η(t)η(t'))=P(Δ t|τ)E(x(t)x(t')). From Fig. 
(<ref>) we can see that at stationarity, the (connected) auto-correlation function of x(t) decays exponentially E(x(t)x(t'))-E^2(x)=(E(x^2)-E^2(x)) e^-|Δ t|/τ_x, and exploiting eq. (<ref>) we can simplify the self consistency for η as E(η(t)η(t'))=P(Δ t|τ̅)E(x^2), at least in the relevant regime |t-t'|≪τ_x, with the new effective time scale τ̅=1/[1/τ+(1- E^2(x)/E(x^2))/τ_x] (see Supplementary Methods for further details). With this simplification we can now use the Unified Colored Noise Approximation (UCNA) <cit.> on eq. (<ref>), which leads to the stationary SAD P_τ^*(x)=x^-1+δ_τ/Z(1/τ̅+x)e^-x/D-τ̅/2D(x-x̅)^2, where x>0, Z is the normalization constant, that can be computed analytically, δ_τ=(1+μ M^*)/D(τ), D(τ)=σ^2 E^*(x^2)(1+2τ)τ̅(τ)/2τ and x̅=1+μ M^* (E^*(·) denotes the average with the distribution P_τ^* and M^*=E^*(x)). As anticipated, we thus find that for all finite τ no extinction occurs as also confirmed by numerically integrating eq. (<ref>) for 30 species (see Fig. <ref>). Notice that P_τ^*(x) is basically an interpolation between a truncated Gaussian ( peaked at x>0) and a Gamma distribution. The former is known to be the solution for the SAD of the DMFT in the case of random quenched interactions in the single equilibrium phase <cit.>, while the latter we will show is the exact solution of the AWN case, corresponding to the limit τ̅∼τ→ 0 of eq. (<ref>). In the AWN limit, in fact, the DMFT equation is the same of eq. (<ref>), but in this case with E(η(t)η(t'))=Σ^2(t)δ(t-t'), Σ^2(t)=E(x^2(t)), and the multiplicative noise term x(t)η(t) should be interpreted in the Stratonovich sense <cit.>. At stationarity, the self-consistency imposes M^*=E^*(x) and Σ^*^2=E^*(x^2). The exact stationary distribution P_0^* can be derived from the Fokker-Planck Equation corresponding to eq. (<ref>) and it reads P_0^*(x)=β^δ/Γ(δ)x^-1+δe^-β x, and it coincides with the limits of P_τ^*(x) when τ→ 0. We also have that lim_τ→ 0δ_τ=2(1+μ M^*)/(σ^2Σ^*^2)≡δ with M^*=1/(1-μ) and Σ^*^2=δ(δ+1)/β^2. In this way we can write explicitly the SAD's parameters as a function of μ and σ as (see Supplementary Methods): β=σ^2/2δ(δ+1); δ=2/σ^2(1-μ)-1. The histograms in Figure <ref> show the probability distributions for the stationary species abundances obtained by simulating the full AGLV equations given by eq. (<ref>) for the same parameters used in Fig. <ref>. The predicted SADs by the DMFT are plotted as continuous lines and given, respectively, by eq. (<ref>) and in (A) also by eq. (<ref>), denoted by the dark blue dashed line. In the latter case, the distribution parameters are directly calculated from eq. (<ref>) as a function of μ and σ. For eq. (<ref>) instead, the parameters are obtained by first fitting the distribution and then checking the agreement with the self-consistent equations (error below 5%, see Supplementary Methods). Using the chosen value of the correlation time, τ, the parameters D(τ) and τ̅(τ) given by our analytical framework, we can deduce the value of τ_x in eq. (<ref>). The red line in Figure <ref> shows that indeed the predicted value of τ_x is consistent with the decay of E(x(t))(x(t')) obtained by simulating the full AGLV system given by eq. (<ref>) (error below 5%, see Supplementary Methods). Since δ>0 and E(x)>0, in order for the stationary solution to exist, we have the conditions σ<√(2(1-μ)) and μ≤1, leading to a lower bound for the unbounded growth phase of the AGLV as shown in Fig. <ref>. However, by solving numerically the self-consistent eq. 
(<ref>) (see Supplementary Methods) and also performing the numerical simulation of the entire GLV systems, we find that below this bound, even though a stationary solution exists, it may not be reached. In particular, in the red region of Fig. <ref>, independently of the initial condition for x(t=0), there is a singularity at finite times, leading to the explosion of the species population. In the green region instead, if we start close to the predicted stationary solution P^*(x), then we always find that the stationary solution is reached and it coincides with the one predicted by the DMFT eq. (<ref>). However, there is a set of initial conditions (for sufficiently large x(t=0)) for which x(t) may diverge for finite t. Such divergent trajectories are also confirmed when we simulate the full eq. (<ref>) for a large enough number of species (see Supplementary Methods). However, understanding the divergence at finite times due to the non-linearity of the Fokker-Planck equation and its dependence on initial conditions is left for future works[One possible solution to this problem is to change the dynamics of the population in the GLV so that the total population is conserved, i.e. M ≡ E(x) is independent of time. By implementing this modification, the resulting phase diagram in the (M, σ) plane mirrors the diagram depicted in Fig. <ref>, with the association M=1/(1-μ) and the absence of any divergences.] In this study, we have undertaken an investigation into the GLV equations with annealed disorder, incorporating finite correlation time, and have determined the corresponding dynamical mean field for a large number of species. The inclusion of temporal stochastic fluctuations in the strengths of species interactions has resulted in a remarkably diverse range of phenomena and ecologically significant outcomes. Firstly, the introduction of annealed disorder in the GLV equations, for any finite correlation time, has exerted a substantial positive influence on the biodiversity of the system. Specifically, when the dynamics of the system converge to the stationary distribution, we observe the coexistence of all species without extinctions. This is not the case in conventional quenched GLV models where extinctions are observed <cit.>. The facilitation of coexistence arises from the existence of temporal periods during which species abundances alternate between high and low values. This phenomenon engenders favorable conditions for the persistence of species and prevents their local extinctions. Second, in the white noise limit, the DMFT leads to the stochastic logistic model, a phenomenological model that proved to be consistent with several macro-ecological laws in microbial ecosystems <cit.>. In particular, the analytical species abundance distribution derived from the DMFT follows the Gamma distribution, a widely utilized probability distribution in macro-ecology <cit.>. Furthermore, we have successfully obtained the phase diagram for the case of annealed white noise (AWN), and numerical simulations have revealed the potential for unbounded growth when the initial conditions possess large values, despite the existence of an analytically stationary solution. In other words, due to the non-linear nature of the corresponding Fokker-Planck equation, the dynamics may not converge to the stationary solution, leading to divergent trajectories. Our future work will focus on conducting more detailed analyses to explore the nature of these finite time singularities. 
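The white-noise results summarized above lend themselves to a direct numerical check: with the self-consistent values M^*=1/(1-μ) and Σ^*^2=δ(δ+1)/β^2 inserted, the DMFT equation reduces to a stochastic logistic equation whose stationary histogram can be compared with the Gamma SAD P_0^*(x). The sketch below is ours, with illustrative parameters μ=0.3, σ=0.7 chosen inside the region σ<√(2(1-μ)); it reads the SAD parameters as δ=2(1-μ)/σ^2-1 and β=(σ^2/2)δ(δ+1), the combination consistent with E^*(x)=δ/β=1/(1-μ), and it integrates the log-abundance y=ln x, for which the Stratonovich multiplicative noise becomes additive so that plain Euler-Maruyama applies.

```python
# Minimal sketch (illustrative, not the authors' code): white-noise DMFT /
# stochastic logistic model integrated in y = ln x, compared with the Gamma SAD.
import numpy as np
from scipy.special import gammainc          # regularized lower incomplete gamma

rng = np.random.default_rng(0)
mu, sigma = 0.3, 0.7                        # illustrative, sigma < sqrt(2(1-mu))
delta = 2.0 * (1.0 - mu) / sigma**2 - 1.0   # Gamma shape
beta = 0.5 * sigma**2 * delta * (delta + 1.0)    # Gamma rate
M_star = 1.0 / (1.0 - mu)                   # self-consistent E*(x)
Sigma2_star = delta * (delta + 1.0) / beta**2    # self-consistent E*(x^2)

a = 1.0 + mu * M_star                       # effective growth term 1 + mu M*
s = sigma * np.sqrt(Sigma2_star)            # effective noise amplitude sigma Sigma*

# dx = x(a - x)dt + s x o dW (Stratonovich)  <=>  dy = (a - e^y)dt + s dW
N, dt, n_steps, burn = 2000, 1e-3, 60_000, 20_000
y = np.zeros(N)                             # N independent DMFT trajectories
samples = []
for step in range(n_steps):
    y += (a - np.exp(y)) * dt + s * np.sqrt(dt) * rng.standard_normal(N)
    if step >= burn and step % 500 == 0:
        samples.append(np.exp(y))
x = np.concatenate(samples)

print(f"E(x):   simulated {x.mean():.3f}   predicted {M_star:.3f}")
print(f"E(x^2): simulated {(x * x).mean():.3f}   predicted {Sigma2_star:.3f}")
for q in (0.5, 1.0, 2.0, 4.0):              # cumulative probabilities vs Gamma CDF
    print(f"P(x < {q}): simulated {(x < q).mean():.3f}"
          f"   Gamma prediction {gammainc(delta, beta * q):.3f}")
```

Up to sampling error the printed moments and cumulative probabilities agree with the Gamma prediction; a check of the full S-species system with Ornstein-Uhlenbeck processes z_ij(t) at finite τ can be set up along the same lines.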
Moreover, we propose various other avenues of research, including the integration of quenched as well as annealed disorder and the correlations between pairs of interacting species <cit.> or more complex hierarchical correlation structures <cit.> More generally, the methodology presented here can be exploited to study the effect of annealed disorder also in other ecological dynamics. The exploration of such directions within our framework holds significant promise for advancing the modelling of large-scale ecosystem dynamics, understanding emergent macro-ecological patterns observed in empirical data, and investigating the influence of environmental fluctuations on species coexistence. We wish to acknowledge Jacopo Grilli, Davide Bernardi and Christian Grilletta for critical reading of the manuscript and useful discussions. S.S. acknowledges Iniziativa PNC0000002-DARE - Digital Lifelong Prevention. F.F. wishes to thank Matteo Guardiani and the Information Field Theory group at the Max-Planck-Institute for Astrophysics for the hospitality and helpful comments. S.A., F.F. and A.M. also acknowledge the support of the NBFC to the University of Padova, funded by the Italian Ministry of University and Research, PNRR, Missione 4, Componente 2, “Dalla ricerca all’impresa”, Investimento 1.4, Project CN00000033. 38 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Azaele et al.(2016)Azaele, Suweis, Grilli, Volkov, Banavar, and Maritan]azaele2016statistical author author S. Azaele, author S. Suweis, author J. Grilli, author I. Volkov, author J. R. Banavar, and author A. Maritan, title title Statistical mechanics of ecological systems: Neutral theory and beyond, @noop journal journal Reviews of Modern Physics volume 88, pages 035003 (year 2016)NoStop [Quéméner et al.(2014)Quéméner, Bouchez et al.]quemener2014thermodynamic author author D.-L. Quéméner, author T. Bouchez, et al., title title A thermodynamic theory of microbial growth, @noop journal journal The ISME journal volume 8, pages 1747 (year 2014)NoStop [Xiao et al.(2017)Xiao, Angulo, Friedman, Waldor, Weiss, and Liu]xiao2017mapping author author Y. Xiao, author M. T. Angulo, author J. Friedman, author M. K. Waldor, author S. T. Weiss, and author Y.-Y. Liu, title title Mapping the ecological networks of microbial communities, @noop journal journal Nature communications volume 8, pages 2042 (year 2017)NoStop [Grilli(2020)]grilli2020macroecological author author J. Grilli, title title Macroecological laws describe variation and diversity in microbial communities, @noop journal journal Nature communications volume 11, pages 4743 (year 2020)NoStop [Hu et al.(2022)Hu, Amor, Barbier, Bunin, and Gore]hu2022emergent author author J. Hu, author D. R. Amor, author M. Barbier, author G. Bunin, and author J. Gore, title title Emergent phases of ecological diversity and dynamics mapped in microcosms, @noop journal journal Science volume 378, pages 85 (year 2022)NoStop [The Integrative HMP (iHMP) Research Network Consortium(2019)]integrative2019integrative author author The Integrative HMP (iHMP) Research Network Consortium, title title The integrative human microbiome project, @noop journal journal Nature volume 569, pages 641 (year 2019)NoStop [Gilbert and Lynch(2019)]gilbert2019community author author J. A. Gilbert and author S. V. 
Lynch, title title Community ecology as a framework for human microbiome research, @noop journal journal Nature medicine volume 25, pages 884 (year 2019)NoStop [Reichenbach et al.(2007)Reichenbach, Mobilia, and Frey]reichenbach2007mobility author author T. Reichenbach, author M. Mobilia, and author E. Frey, title title Mobility promotes and jeopardizes biodiversity in rock–paper–scissors games, @noop journal journal Nature volume 448, pages 1046 (year 2007)NoStop [Friedman et al.(2017)Friedman, Higgins, and Gore]friedman2017community author author J. Friedman, author L. M. Higgins, and author J. Gore, title title Community structure follows simple assembly rules in microbial microcosms, @noop journal journal Nature ecology & evolution volume 1, pages 0109 (year 2017)NoStop [Tikhonov and Monasson(2017)]tikhonov2017collective author author M. Tikhonov and author R. Monasson, title title Collective phase in resource competition in a highly diverse ecosystem, @noop journal journal Physical review letters volume 118, pages 048103 (year 2017)NoStop [Tu et al.(2017)Tu, Grilli, Schuessler, and Suweis]tu2017collapse author author C. Tu, author J. Grilli, author F. Schuessler, and author S. Suweis, title title Collapse of resilience patterns in generalized lotka-volterra dynamics and beyond, @noop journal journal Physical Review E volume 95, pages 062307 (year 2017)NoStop [Marsland et al.(2020)Marsland, Cui, and Mehta]marsland2020minimal author author R. Marsland, author W. Cui, and author P. Mehta, title title A minimal model for microbial biodiversity can reproduce experimentally observed ecological patterns, @noop journal journal Scientific reports volume 10, pages 1 (year 2020)NoStop [Batista-Tomás et al.(2021)Batista-Tomás, De Martino, and Mulet]batista2021path author author A. Batista-Tomás, author A. De Martino, and author R. Mulet, title title Path-integral solution of macarthur’s resource-competition model for large ecosystems with random species-resources couplings, @noop journal journal Chaos: An Interdisciplinary Journal of Nonlinear Science volume 31, pages 103113 (year 2021)NoStop [Gupta et al.(2021)Gupta, Garlaschi, Suweis, Azaele, and Maritan]gupta2021effective author author D. Gupta, author S. Garlaschi, author S. Suweis, author S. Azaele, and author A. Maritan, title title Effective resource competition model for species coexistence, @noop journal journal Physical review letters volume 127, pages 208101 (year 2021)NoStop [Barbier et al.(2018)Barbier, Arnoldi, Bunin, and Loreau]barbier2018generic author author M. Barbier, author J.-F. Arnoldi, author G. Bunin, and author M. Loreau, title title Generic assembly patterns in complex ecological communities, @noop journal journal Proceedings of the National Academy of Sciences volume 115, pages 2156 (year 2018)NoStop [Biroli et al.(2018)Biroli, Bunin, and Cammarota]biroli2018marginally author author G. Biroli, author G. Bunin, and author C. Cammarota, title title Marginally stable equilibria in critical ecosystems, @noop journal journal New Journal of Physics volume 20, pages 083051 (year 2018)NoStop [Galla(2018)]galla2018dynamically author author T. Galla, title title Dynamically evolved community size and stability of random lotka-volterra ecosystems (a), @noop journal journal Europhysics Letters volume 123, pages 48004 (year 2018)NoStop [Pearce et al.(2020)Pearce, Agarwala, and Fisher]pearce2020stabilization author author M. T. Pearce, author A. Agarwala, and author D. S. 
Fisher, title title Stabilization of extensive fine-scale diversity by ecologically driven spatiotemporal chaos, @noop journal journal Proceedings of the National Academy of Sciences volume 117, pages 14572 (year 2020)NoStop [Bunin(2017)]bunin2017ecological author author G. Bunin, title title Ecological communities with lotka-volterra dynamics, @noop journal journal Physical Review E volume 95, pages 042414 (year 2017)NoStop [Altieri et al.(2021)Altieri, Roy, Cammarota, and Biroli]altieri2021properties author author A. Altieri, author F. Roy, author C. Cammarota, and author G. Biroli, title title Properties of equilibria and glassy phases of the random lotka-volterra model with demographic noise, @noop journal journal Physical Review Letters volume 126, pages 258301 (year 2021)NoStop [McCann(2000)]mccann2000diversity author author K. S. McCann, title title The diversity–stability debate, @noop journal journal Nature volume 405, pages 228 (year 2000)NoStop [Allesina and Tang(2015)]allesina2015stability author author S. Allesina and author S. Tang, title title The stability–complexity relationship at age 40: a random matrix perspective, @noop journal journal Population Ecology volume 57, pages 63 (year 2015)NoStop [Gibbs et al.(2018)Gibbs, Grilli, Rogers, and Allesina]gibbs2018effect author author T. Gibbs, author J. Grilli, author T. Rogers, and author S. Allesina, title title Effect of population abundances on the stability of large random ecosystems, @noop journal journal Physical Review E volume 98, pages 022410 (year 2018)NoStop [Sala et al.(2016)Sala, Vitali, Giampieri, do Valle, Remondini, Garagnani, Bersanelli, Mosca, Milanesi, and Castellani]sala2016stochastic author author C. Sala, author S. Vitali, author E. Giampieri, author Ì. F. do Valle, author D. Remondini, author P. Garagnani, author M. Bersanelli, author E. Mosca, author L. Milanesi, and author G. Castellani, title title Stochastic neutral modelling of the gut microbiota’s relative species abundance from next generation sequencing data, @noop journal journal BMC bioinformatics volume 17, pages 179 (year 2016)NoStop [Ser-Giacomi et al.(2018)Ser-Giacomi, Zinger, Malviya, De Vargas, Karsenti, Bowler, and De Monte]ser2018ubiquitous author author E. Ser-Giacomi, author L. Zinger, author S. Malviya, author C. De Vargas, author E. Karsenti, author C. Bowler, and author S. De Monte, title title Ubiquitous abundance distribution of non-dominant plankton across the global ocean, @noop journal journal Nature ecology & evolution volume 2, pages 1243 (year 2018)NoStop [Azaele et al.(2006)Azaele, Pigolotti, Banavar, and Maritan]azaele2006dynamical author author S. Azaele, author S. Pigolotti, author J. R. Banavar, and author A. Maritan, title title Dynamical evolution of ecosystems, @noop journal journal Nature volume 444, pages 926 (year 2006)NoStop [Thompson(1999)]thompson1999evolution author author J. N. Thompson, title title The evolution of species interactions, @noop journal journal Science volume 284, pages 2116 (year 1999)NoStop [Suweis et al.(2013)Suweis, Simini, Banavar, and Maritan]suweis2013emergence author author S. Suweis, author F. Simini, author J. R. Banavar, and author A. Maritan, title title Emergence of structural and dynamical properties of ecological mutualistic networks, @noop journal journal Nature volume 500, pages 449 (year 2013)NoStop [Fiegna et al.(2015)Fiegna, Moreno-Letelier, Bell, and Barraclough]fiegna2015evolution author author F. Fiegna, author A. Moreno-Letelier, author T. Bell, and author T. G. 
Barraclough, title title Evolution of species interactions determines microbial community productivity in new environments, @noop journal journal The ISME journal volume 9, pages 1235 (year 2015)NoStop [Pacciani-Mori et al.(2021)Pacciani-Mori, Suweis, Maritan, and Giometto]pacciani2021constrained author author L. Pacciani-Mori, author S. Suweis, author A. Maritan, and author A. Giometto, title title Constrained proteome allocation affects coexistence in models of competitive microbial communities, @noop journal journal The ISME Journal volume 15, pages 1458 (year 2021)NoStop [Descheemaeker and De Buyl(2020)]descheemaeker2020stochastic author author L. Descheemaeker and author S. De Buyl, title title Stochastic logistic models reproduce experimental time series of microbial communities, @noop journal journal Elife volume 9, pages e55650 (year 2020)NoStop [Descheemaeker et al.(2021)Descheemaeker, Grilli, and de Buyl]descheemaeker2021heavy author author L. Descheemaeker, author J. Grilli, and author S. de Buyl, title title Heavy-tailed abundance distributions from stochastic lotka-volterra models, @noop journal journal Physical Review E volume 104, pages 034404 (year 2021)NoStop [Roy et al.(2020)Roy, Barbier, Biroli, and Bunin]roy2020complex author author F. Roy, author M. Barbier, author G. Biroli, and author G. Bunin, title title Complex interactions can create persistent fluctuations in high-diversity ecosystems, @noop journal journal PLoS computational biology volume 16, pages e1007827 (year 2020)NoStop [Jung and Hänggi(1987)]jung1987 author author P. Jung and author P. Hänggi, title title Dynamical systems: a unified colored-noise approximation, @noop journal journal Physical review A volume 35, pages 4464 (year 1987)NoStop [Kupferman et al.(2004)Kupferman, Pavliotis, and Stuart]kupferman2004ito author author R. Kupferman, author G. A. Pavliotis, and author A. M. Stuart, title title Itô versus stratonovich white-noise limits for systems with inertia and colored multiplicative noise, @noop journal journal Physical Review E volume 70, pages 036120 (year 2004)NoStop [Note1()]Note1 note One possible solution to this problem is to change the dynamics of the population in the GLV so that the total population is conserved, i.e. M ≡ E(x) is independent of time. By implementing this modification, the resulting phase diagram in the (M, σ ) plane mirrors the diagram depicted in Fig. <ref>, with the association M=1/(1-μ ) and the absence of any divergences.Stop [Serván et al.(2018)Serván, Capitán, Grilli, Morrison, and Allesina]servan2018coexistence author author C. A. Serván, author J. A. Capitán, author J. Grilli, author K. E. Morrison, and author S. Allesina, title title Coexistence of many species in random ecosystems, @noop journal journal Nature ecology & evolution volume 2, pages 1237 (year 2018)NoStop [Poley et al.(2023)Poley, Baron, and Galla]poley2023generalized author author L. Poley, author J. W. Baron, and author T. Galla, title title Generalized lotka-volterra model with hierarchical interactions, @noop journal journal Physical Review E volume 107, pages 024313 (year 2023)NoStop
http://arxiv.org/abs/2307.01935v1
20230704214625
Relative Equilibria of Dumbbells Orbiting in a Planar Newtonian Gravitational System
[ "Jodin Morey" ]
math.CA
[ "math.CA", "astro-ph.EP", "math.DS" ]
=0.3in empty A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL OF THE UNIVERSITY OF MINNESOTA BY IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY Richard Moeckel ==8pt gobble 0pt © ALL RIGHTS RESERVED gobble CHAPTER: ACKNOWLEDGMENTS tocchapterAcknowledgments roman I am grateful to my wife, family, and friends for their patience as I dedicated my time to the writing of this thesis. I also thank Richard Moeckel, my advisor, who provided support, expertise, and encouragement when needed. CHAPTER: ABSTRACT tocchapterAbstract In the cosmos, any two bodies share a gravitational attraction. When in proximity to one another in empty space, their motions can be modeled by Newtonian gravity. Newton found their orbits when the two bodies are infinitely small, the so-called two-body problem. The general situation in which the bodies have varying shapes and sizes, called the full two-body problem, remains open. We find relative equilibria (RE) and their stability for an approximation of the full two-body problem, where each body is restricted to a plane and consists of two point masses connected by a massless rod, a dumbbell. In particular, we find symmetric RE in which the bodies are arranged colinearly, perpendicularly, or trapezoidally. When the masses of the dumbbells are pairwise equal, we find asymmetric RE bifurcating from the symmetric RE. And while we find that only the colinear RE have nonlinear/energetic stability (for sufficiently large radii), we also find that the perpendicular and trapezoid configurations have radial intervals of linear stability. We also provide a geometric restriction on the location of RE for a dumbbell body and any number of planar rigid bodies in planar orbit (an extension of the Conley Perpendicular Bisector Theorem). 1.1 tocchapterList of FigurestocchapterList of Tables 1.28 fancy [R]page arabic CHAPTER: INTRODUCTION We will explore the dynamics of a planar Newtonian two-body system. That is, a system in which two bodies interact by way of Newtonian gravity, and whose orbits are restricted to a plane. The Two-Body Problem (2BP), where each body is modeled as a point mass, was solved by Newton in the Principia in 1687 <cit.>. He also proved Shell Theorem, which states that spherically symmetric rigid bodies (which also lack tidal forces) affect “external objects gravitationally as though all of its mass were concentrated at a point at its center.” However, when one considers the Full Two-Body Problem (F2BP) in which the bodies have nonspherical shapes, nonuniform densities, or tidal forces (like planets or asteroids); general solutions to the equations of motion (differential equations describing the motions of the system) are intractable. Despite the difficulty in finding solutions, information about the dynamics of some of these types of systems has been found. In this paper, we will use an approximation to the F2BP, where each body is represented as a dumbbell (a dumbbell consists of two point masses connected by a massless rod). § RELATIVE EQUILIBRIA (RE) As a step in understanding these more complicated systems, we would like to locate some kind of equilibria. While we don't expect literal equilibria from orbiting bodies, we can identify so-called relative equilibria. A relative equilibrium (RE) of a dynamical system is a solution which becomes an equilibrium in some uniformly rotating coordinate system. 
Put another way, a RE is an equilibrium point for a dynamical system which has been reduced through the quotienting out of some variable (a rotation angle in our case), as was shown in <cit.>. Put yet another way, for a two-body (or n-body) problem in a uniformly rotating reference frame, RE configurations of the parameters (initial positions, velocities, mass distributions) are those for which the bodies are static. That is, the distance between the bodies is constant, and neither body rotates relative to the other (they are tidally locked). An observer in an inertial reference frame however, would see the two bodies rotating in circles about the system's axis of angular momentum (which includes the center of mass) in a rigid fashion (constant radius and such that the face that each body reveals to the other remains constant). For point mass bodies, note that only the radius requirement is relevant, as a point mass has no defined state of rotation. Lastly, RE can be characterized as the critical points of an “amended potential,” which we will discuss in Section <ref>. Motivating our interest in RE, we note that Pluto and Charon are in a near RE <cit.>. Many known bodies also exist in quasi-RE (the less massive of the two bodies is tidally locked, but the more massive is not). Examples include recently discovered orbiting binary asteroids <cit.>, Jupiter and its Galilean moons (the four largest)<cit.>, and closer to home, the Moon and Earth. Additionally, as we send satellites out to orbit comets and asteroids, we will wish to have knowledge of configurations allowing these types of static orbits (particularly those which are stable). Outline for the paper * In Chapter <ref> of this paper, we will introduce the amended potential and discuss its role in finding and characterizing the stability of RE. * In Chapter <ref>, we will re-solve Newton's 2BP using the amended potential technique as a way of introducing the methods we will use in our dumbbell models. * In Chapter <ref>, we examine the dumbbell/point mass problem. We expand on the work of Beletskii and Ponomareva who explored this model in 1990 <cit.>, finding colinear and perpendicular (isosceles) RE. However, our work will differ as we will use the amended potential method. We use this technique and others to identify RE and examine stability (both nonlinear/energetic and linear). Unlike Beletskii and Ponomareva (who found Lyapunov stability), we find stable RE as minimal energy states of the Hamiltonian. * In Chapter <ref>, we will use similar techniques to look for RE and their stability in the two-dumbbell problem. Here, there are many more families of RE, and classifying all of them is not as simple as in the dumbbell/point mass problem. In addition to symmetric RE (colinear, perpendicular, trapezoid), we apply bifurcation analyses to identify families of asymmetric RE bifurcating from the symmetric ones. Results * The most general result is an extension of Conley's Perpendicular Bisector Theorem <cit.>, applied to a system containing a dumbbell and other planar rigid bodies in planar orbit. Our theorem gives a geometric restriction on the location of the bodies while in RE. * For the dumbbell/point mass problem, we expand the results of Beletskii and Ponomareva <cit.> to include energetically stable colinear RE in the radially overlapped region. We also identify regions of the parameter space that vary in the number of RE by using angular momentum as a bifurcation parameter. 
* For the two-dumbbell problem, we locate symmetric RE (colinear, perpendicular, trapezoid), and families of asymmetric RE bifurcating from the symmetric ones. We identify regions of the parameter space that vary in the number of RE by using angular momentum as a bifurcation parameter. And while the asymmetric RE are found to be unstable, we find the colinear RE to be energetically stable for sufficiently large radii. We also identify radial intervals of linear stability for perpendicular and trapezoid RE. § HISTORICAL EXAMPLES The first orbital two-body RE discovered were in Newton's 2BP solutions. If one assumes the radius to be constant in his solutions, they produce RE consisting of circular planar orbits (which we will show below). Then, while studying the Earth-Moon-Sun system, Euler (in 1766) discovered RE solutions to the Circular Restricted Three-Body Problem (CR3PB), where two bodies have masses significantly larger than the third and the motions of the larger two are restricted to circular orbits around their center of mass <cit.>. His RE consisted of colinear configurations with the Moon taking positions L1, L2, or L3 in Figure <ref>. In the following year, he found these colinear RE for the general three-body problem where the three bodies were free to take on any mass <cit.>. Then, in 1772, Lagrange found the two remaining “Lagrange Points” (L4, L5) for the general three-body problem. Bodies orbiting at these points form equilateral triangles with the other two bodies. Sometime later (1859) while studying the rings of Saturn, Sir James Clerk Maxwell showed the existence of RE for an (n+1)-body problem <cit.>. The configuration has n orbiting bodies of equal mass positioned regularly on a circular orbit (forming a regular n-gon) orbiting the “+1” central mass. Maxwell found the rings were stable if the central mass was sufficiently large relative to the ring's mass. Richard Moeckel later identified a minor error in the original calculations that put a lower bound of 7 on the number of bodies necessary to achieve stability (1994) <cit.>. However, when one moves from point masses to extended rigid bodies (ERBs), the complexity of the problem for two bodies may preclude comprehensive classification of all RE configurations. Nonetheless, a number of papers have explored the possible configurations for and stability of RE when dealing with non-spherical ERBs. Wang and Maddocks (1992) proved the existence of non-Lagrangian RE (where the orbits of the two bodies exist in distinct parallel planes) <cit.>. And Maciejewski (1995) proved the existence of at least 36 non-Lagrangian RE in the limit as the distances between the bodies go (and angular momentum goes) to infinity <cit.>. Scheeres (2006) subsequently discovered necessary and sufficient conditions for RE of a system consisting of a point mass and an ERB. He also analyzed the linear (eigenvalues of the linearized equations of motion) and nonlinear energetic stability (whether the RE are at strict minima of an energy function) of these RE <cit.>. In another paper (2009), he located RE and determined the stability properties for a system (with an approximate potential) where both bodies were non-spherical but limited to planar motion <cit.>. A few years later (2012), he generalized by allowing (in each body) internal interactions (like tidal forces) that can cause dissipation of energy. Then, using an energy function for the system, he identified minimum energy configurations at fixed values of angular momenta <cit.>. 
Recently (2018), my advisor Richard Moeckel gave lower bounds on the number of RE for the F2BP where the angular momentum (and radius) of the system is large, but finite <cit.>. For this paper, instead of looking at continuous ERBs, we will build an approximating structure out of a finite number of point masses, connected by massless rods. We build on work conducted by Beletskii and Ponomareva (1990) <cit.>. This point mass approximation has the benefit of massively simplifying the potential function in our equations. CHAPTER: AMENDED POTENTIAL The process to discover equilibria often begins with writing down the Lagrangian of a system ℒ = K - U, where K is the kinetic energy and U is the potential energy (with conserved energy given by the Hamiltonian ℋ = K + U). One then uses the Euler Lagrange equations d/dt∂ℒ/∂q̇_i-∂ℒ/∂ q_i = 0, (where the q_i's are the system's variables) to generate equations of motion. The equilibria are the solutions to these equations when the time derivatives of the variables are set to zero. However, a technique used by several of the above mentioned papers, and what we will use in our analyses, is the amended (effective) potential for finding RE. The technique uses symmetries (conserved quantities) in the system to reduce the equations of motion by removing symmetry related velocities. The resulting amended potential V is essentially the regular potential U plus contributions we get from being in a rotating reference frame. This technique is useful for at least three reasons. First, you form an amended potential by using conserved quantities/symmetries to eliminate velocity variables and therefore unnecessary complexity from your system. Second, Smale <cit.> showed that the RE of the original system are identical to the critical points of the amended potential. So once you form your amended potential, you can use standard tools for identifying critical points and therefore the RE. Third, if you can identify a critical point as a strict minimum of this amended potential, Smale showed that this critical point is also a strict minimum of the original energy function, and therefore the system is energetically stable for the associated RE by the Dirichlet-Lagrange stability theorem <cit.>. 2.5em0pt The Dirichlet-Lagrange Stability Theorem: An equilibrium point (velocity v⃗=0) at position r⃗=r⃗_0 of Newton’s equations for a particle of mass m, moving under the influence of a potential V, which has a local strict minimum at r⃗_0, is stable. The general difficulty in finding energetic stability in dynamical systems makes this technique very attractive. Energetic stability means that if your trajectory is sufficiently close to the minimum of the energy function, for small enough perturbations, the resulting trajectory remains close to that minimum. So how does one calculate the amended potential? First, we reduce our system by using an inertial reference frame moving with the same constant velocity as the system's center of mass, which we place at the origin. We then form a Lagrangian for our dynamical system ℒ = K - U and use the Euler Lagrange equations to form equations of motion. Next, we note that the dynamics are invariant under a change of the rotation ϕ of the reference frame. As a result, ϕ does not appear in our Lagrangian (it is a cyclic variable). So, our Lagrangian possesses a symmetry, and we can perform a reduction of the system. By Noether <cit.>, we then have the equation: ∂ℒ/∂ϕ̇ =L:=|L⃗|. 
The angular momentum L⃗ is a conserved quantity (first integral). Equation (<ref>) allows us to eliminate our velocity variable ϕ̇ by solving for it explicitly in terms of the other variables, and then substituting this back into our equations of motion. Once eliminated, the resulting reduced Lagrangian ℒ_red is still a Lagrangian (K_red-V), and includes the amended potential V. Essentially, you are quotienting out a symmetry group (a circle in our case), and the resulting system exists on a quotient manifold with one fewer degrees of freedom. And the critical points of V are RE of our original system. Now we will apply these techniques to our first model, Newton's 2BP. CHAPTER: RE OF NEWTONIAN POINT MASS TWO-BODY PROBLEM (2BP) Newton used geometric arguments in his solution to the 2BP for point masses. In order to familiarize ourselves with the methods described in the previous section, let us use them to calculate RE for the 2BP. The 2BP (through a simple change of variables shown below), can be modeled as a restricted 2BP (Kepler Problem), or one in which the central body is assumed to be motionless (approximating the case when the mass of the central body far exceeds the mass of the orbiting body). This also meets the definition of a body in a central force. A central force is one which acts on masses M_i such that: * The force F⃗_i on M_i is always directed toward, or away from a fixed point O; and * The magnitude of the force |F⃗_i| only depends on the distance r between M_i and O. Characterizing our system in this way will simplify our calculations of the equations of motion. Also, we show that even though the bodies exist in three dimensional space, their motion is restricted to a plane, and therefore we can simplify our analysis by examining the motion in just two dimensions. § CENTRAL FORCE MOTION IS PLANAR To solve the Kepler problem, we assume the fixed body is located at the origin of our system, and without loss of generality we set M_2 as that fixed body. Observe that the initial position r⃗_0 and velocity vector v⃗_0 for the moving mass M_1 define a plane. We will show that r⃗ and v⃗ remain in this plane. Observe that the dot product of the angular momentum L⃗=r⃗× M_1v⃗ with the mass' position vector r⃗ is zero: r⃗·L⃗=r⃗· (r⃗× M_1v⃗)=M_1v⃗·(r⃗×r⃗)=0. Therefore the position r⃗ (and similarly the velocity v⃗=dr/dt) at each moment lies in a plane perpendicular to L⃗. And so if L⃗ is constant, then r⃗ and v⃗ remain in the plane perpendicular to L⃗. Recall that the time derivative of angular momentum is: dL⃗/dt =d/dt(r⃗× M_1v⃗)=(v⃗× M_1v⃗)+(r⃗× M_1d/dtv⃗)=r⃗×F⃗ = net torque. Since F⃗ points in the opposite direction as r⃗ (F⃗ is central), we have r⃗×F⃗ =0. Therefore, L⃗ is constant, and our central force motion is planar. With this information, let us calculate the equations of motion for the Kepler problem. § KEPLER PROBLEM: EQUATIONS OF MOTION We calculate the equations of motion using the Lagrangian: ℒ=T-U. Recall that force can be defined as the negative of the vector gradient of a potential field: F⃗(r⃗):=-∇ U=-d/dr⃗ U(r) (where r:=|r⃗|). And furthermore, from Newton's universal law of gravitation, the force between two masses M_1 and M_2 is: F⃗(r⃗)=-GM_1 M_2/|r⃗|^3r⃗. So, integrating we find: U(r)=-∫_r⃗^∞F⃗(s⃗)· ds⃗=-∫_r⃗^∞-GM_1 M_2/|s⃗|^3s⃗· ds⃗=-GM_1 M_2/r. And since our motion is restricted to two dimensions, we have kinetic energy T=1/2M_1(ṙ^2+r^2ϕ̇ ^2). So the Lagrangian is: ℒ=1/2M_1(ṙ^2+r^2ϕ̇ ^2)+GM_1 M_2/r. 
Applying the Euler-Lagrange equations, we have d/dt∂ℒ/∂q̇_i-∂ℒ/∂ q_i=0, where the q_i's in our case are r and ϕ. Taking these derivatives we find two coupled equations of motion: fleqntrue M_1r̈+M_1 rϕ̇^2-GM_1 M_2/r^2=0, and d/dt(M_1r^2ϕ̇)=0. Now we can either note that (<ref>) upon integrating gives us a conserved quantity L:=M_1 r^2ϕ̇, or we can appeal to Noether and the existence of the cyclic variable ϕ, and calculate ∂ℒ/∂ϕ̇=L. Either way, we then perform the reduction from Section <ref> using our conserved quantity L. Solving for the rotation speed from the equation above, we have: ϕ̇ =L/M_1 r^2. Substituting ϕ̇ into (<ref>), we have: M_1r̈+L^2/M_1r^3-GM_1 M_2/r^2=0. To find our reduced Lagrangian ℒ_red and the amended potential, note that the kinetic and potential energy necessary for d/dt∂ℒ_red/∂ṙ-∂ℒ_red/∂ r=0 to produce the previous equation would be: ℒ_red=T_red-V, where T_red=1/2M_1ṙ^2 and V=L^2/2M_1 r^2-GM_1 M_2/r. So, we have eliminated the velocity variable ϕ̇, decoupled the system, and reduced to a single equation of motion: r̈=∂_rV/M_1=GM_2/r^2-L^2/M_1^2 r^3. To find RE of our system, one option is to look for fixed points of this reduced system. Note that a constant radius implies a circular orbit. Additionally, since point masses have no meaningful sense of rotation, all two-body circular orbits of point masses meet the definition of being RE. So, from (<ref>), we use the fact that RE occur when the time derivatives are zero (except rotational ϕ̇), and set r̈ =0. Solving for the radius in what remains, we conclude: r=L^2/GM_1 M_2^2. Observe the one-to-one relationship between radii and positive L in this equation. From (<ref>), we can determine the rotation speed: ϕ̇ =√(GM_1/r)=GM_1 M_2/L. Again, a one-to-one relationship between ϕ̇ and L. So, angular momentum uniquely determines both the ϕ̇ and r for RE. Additionally, for each fixed L, note that the circular orbits are critical points (minimums) of the amended potential: ∂_rV=-L^2/M_1 r^3+GM_1 M_2/r^2=0, when r=L^2/GM_1 M_2^2, as expected (see Figure <ref>a). As noted earlier, the RE of the original system (<ref>) are the same as the equilibria of the reduced system (<ref>). Now that we have found the RE, let us see if they are stable. That is, if something were to perturb the RE orbit (some solar wind, or small asteroid), will the resulting orbit remain close to the RE? §.§.§ Energetic Stability of Kepler As we saw in Section <ref>, to determine energetic stability we verify that the RE are strict minima of the amended potential V. We find positive curvature (∂^2_rV=3L^2/M_1 r^4-2GM_1 M_2/r^3=M_1^4M_2^7G^4/L^6>0), when r=L^2/GM_1 M_2^2. So, these RE are strict minima. So, by Smale, these RE are energetically stable. In the Figure <ref>b, you see the phase portrait for the amended potential. Note that surrounding the equilibrium are nearby periodic orbits. § UNRESTRICTED 2BP Generalizing, what are the RE when both bodies can move freely? So this is not a Kepler problem. First we may wish to reconsider the location of the origin of our inertial reference frame. Due to conservation of linear momentum, we can assume the system's center of mass moves at a constant rate. This allows us to choose an inertial reference frame such that our choice of origin coincides with the system's center of mass. We then denote the positions of the two bodies as r⃗_1,r⃗_2, and the vector between them as r⃗:=r⃗_2-r⃗_1. 
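Before constructing the dynamics of the unrestricted problem, the Kepler results above admit a quick numerical cross-check. The sketch below uses purely illustrative values of G, M_1, M_2, and L (assumptions, not tied to any particular system): it locates the critical point of the amended potential V = L^2/(2M_1 r^2) - GM_1M_2/r and confirms it is a strict minimum. Note that solving ∂_rV = 0 with L := M_1 r^2 ϕ̇ places the square on M_1, i.e. r = L^2/(GM_1^2M_2), and the code compares against that closed form.

```python
# Minimal numerical sketch of the Kepler RE discussion above (illustrative values).
import numpy as np
from scipy.optimize import brentq

G, M1, M2, L = 1.0, 1.0, 3.0, 2.0          # assumed parameters, not from the text

V  = lambda r: L**2 / (2.0 * M1 * r**2) - G * M1 * M2 / r      # amended potential
dV = lambda r: -L**2 / (M1 * r**3) + G * M1 * M2 / r**2        # its r-derivative

# RE radius = critical point of the amended potential (bracket chosen generously)
r_star = brentq(dV, 1e-3, 1e3)
print("numerical RE radius:", r_star, " closed form:", L**2 / (G * M1**2 * M2))

# strict minimum => energetic stability (second derivative by central difference)
h = 1e-5
d2V = (V(r_star + h) - 2.0 * V(r_star) + V(r_star - h)) / h**2
print("d^2V/dr^2 at the RE:", d2V, "(expected positive)")
```

For the values chosen, both radii come out to 4/3 and the curvature is positive, consistent with the energetic stability argument above.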
Constructing the dynamics, recall from Newton's second (F⃗=Ma⃗) and third (F⃗_12=-F⃗_21) laws that M_2 r̈_2=F⃗_21=-F⃗_12=-M_1r̈_1, or equivalently r̈=F⃗_21/M_2-F⃗_12/M_1=(1/M_2+1/M_1)F⃗_21, where F⃗_ij is the force on r⃗_i by r⃗_j. And denoting the “reduced mass” M:=M_1M_2/M_1+M_2, we have: Mr̈=F⃗_21. So, by the form of this equation, we see that we have characterized the motion of the two bodies as equivalent to the motion of a single body at radius r⃗ with mass M, moving under the influence of F⃗_21. And if the motion described here is due to a central force, we can use the results from the Kepler Problem to find our equation of motion. Note that in this system, the center of mass serves as our fixed point, and always lies between r⃗_1 and r⃗_2, so the force is directed toward it. Additionally, since the force GM_1M_2/|r⃗|^3r⃗ only depends on r⃗, it fits the definition of a central force. And so the Kepler problem above gives us our solutions. Shell Theory lets us also apply these solutions to spherically symmetric ERBs with constant density. However, to find the RE when one of the bodies is not spherically symmetric, or with irregular density is complicated. As a step in that direction, our next model alters one of the point mass bodies by splitting it into two point masses connected by a massless rod, i.e. a “dumbbell.” CHAPTER: PLANAR DUMBBELL/POINT MASS PROBLEM It was shown in 2013 <cit.> that the dumbbell/point mass problem (seen above) is non-integrable. Nonetheless, various researchers have identified its RE and their stability (<cit.>,<cit.>,<cit.>,<cit.>). Unlike Newton's model with two point masses, the dumbbell/point mass problem need not have planar orbits. For example, researchers in 2020 <cit.> examined the dumbbell/point mass problem, allowing for three dimensional motion, and identified Lyapunov stability of RE. However, a likely location for RE of our subsequent two-dumbbell problem is in planar motion, so it is in this context we will study the dumbbell/point mass problem. We follow some of the calculations of Beletskii and Ponomareva (1990) <cit.>, who also looked at RE for this planar model. In particular, the authors were interested in how stability (Lyapunov and linear) of RE changes as you alter parameter values (the mass and length of the dumbbell). Below, we extend the results of these previous papers, formulating the problem through the use of an amended potential to identify and characterize the energetic stability of RE via Smale. We also identify linear stability, and use angular momentum as a bifurcation parameter in the equations of motion. This will identify regions of the parameter space that differ in the number of RE and help us identify extrema in the angular momentum, assisting with the energetic stability analysis. As previous authors identified, the bodies of the planar configuration RE are either colinear, or form an isosceles triangle. In Section <ref>, we prove an extension of the Conley Perpendicular Bisector Theorem, which limits the possible RE configurations to only the colinear or isosceles. [Perpendicular Bisector Theorem for RE of a Dumbbell and Rigid Planar Bodies] Let a dumbbell r_1r_2 and one or more planar rigid bodies ℬ_2,...,ℬ_n be in a planar RE. Then, if one of the two open cones determined by the lines through r_1r_2 and its perpendicular bisector contains one or more rigid bodies, the other open cone cannot be empty. 
In particular, applying our theorem shows that any RE must have the point mass body lie on the line defined by the dumbbell's massless rod (colinear configuration), or on the perpendicular bisector of that rod (isosceles configuration). § SET-UP AND NOTATION For the following description, refer to Figure <ref>. Without loss of generality, we denote the mass of the point mass body as M_1, and the total dumbbell's mass as M_2 such that the system mass is M_1+M_2=1. We also denote the mass ratios of the dumbbell masses as x_1 and x_2 such that x_1+x_2=1. We label the origin and center of mass of our system reference frame as O. The masses of the dumbbell are connected by a massless rod of length ℓ which we set to 1 (by scaling the system distances). The dumbbell has a center of mass located at C⃗. We label the “radius” vector connecting the point mass body to C⃗ as r⃗, and r:=|r⃗|. We also label the acute angle between r⃗ and the horizontal axis of our system reference frame as ϕ. We let θ represent the angle between r⃗ and the dumbbell's massless rod, where if θ =0, then x_1 is the mass located on OC. Without loss of generality, we limit θ∈[0,π). Note we can replicate dynamics on the rest of the range by switching the dumbbell masses. The system has three degrees of freedom r,ϕ, and θ. The position vectors for the dumbbell masses are: r⃗_i:=M_1r(cosϕ,sinϕ)+(-1)^ix_k(cos(ϕ+θ),sin(ϕ +θ)), where k≠ i, and the point mass body position vector is r⃗_p:=-M_2r(cosϕ,sinϕ). Note that the mass associated with r⃗_i is M_2 x_i. Also, the distance between r⃗_p and r⃗_i is: d_i(r,θ):=√(r^2+(-1)^i2x_krcosθ+x_k^2). For a table of the notation used in this paper, see Appendix <ref>. § EQUATIONS OF MOTION In order to find the equations of motion, we let v⃗_i,v⃗_p denote the velocities of r⃗_i,r⃗_p. We will calculate the Lagrangian ℒ=T-U, where T=1/2M_1 v⃗_p^2+1/2Σ_i∈{1,2}M_2 x_iv⃗_i^2 is the kinetic energy, and (from Newton) the potential energy of the system is (with G=1, and scaled by 1/M_1 M_2): U(r,θ):=-x_1/d_1-x_2/d_2. To calculate kinetic energy, we write: v⃗_i=d/dtr⃗_i=M_1 ṙ(cosϕ,sinϕ) +M_1 rϕ(-sinϕ,cosϕ)+(-1)^ix_k(ϕ̇+θ̇)(-sin(ϕ+θ),cos(ϕ+θ)), and v⃗_p=-M_2ṙ(cosϕ,sinϕ)-M_2rϕ̇(-sinϕ,cosϕ). So, using some trigonometric identities, we find our Lagrangian (scaled by 1/M_1 M_2) is: ℒ=T-U=1/2(ṙ^2+r^2ϕ̇^2)+1/2B(θ̇+ϕ̇)^2 +(x_1/d_1+x_2/d_2), where B:=x_1 x_2/M_1 is the dumbbell's scaled moment of inertia relative to its center of mass. The equations of motion are given by the Euler-Lagrange equation: d/dt∂ L/∂q̇_i -∂ L/∂ q_i=0, for each degree of freedom q_i∈{r,ϕ,θ}. So, we calculate: fleqntrue r̈-rϕ̇^2+∂ U/∂ r=0, r^2ϕ̈+2ṙϕ̇ r+B(θ̈+ϕ̈)=-∂ U/∂ϕ=0, and B(θ̈+ϕ̈)+∂ U/∂θ=0. § FINDING RE To find RE, we first wish to reduce our system using the amended potential method in Section <ref> by exploiting the conserved angular momentum. As we did in the 2BP, we calculate scalar angular momentum (scaled by 1/M_1M_2): ∂ℒ/∂·φ=r^2·φ+B(·θ+·φ) =:L. Solving for the rotational speed: ·φ =L-B·θ/r^2+B or ·φ =L/r^2+B at RE. We can now eliminate the rotation speed ·φ from our system using this relation. Upon substituting ·φ into (<ref>), and solving for ··φ, we see that (<ref>) is equivalent to ··φ =-2r·r(L-B·θ)+B··θ(r^2+B)/(r^2+B)^2. Substituting ·φ and ··φ into (<ref>) and (<ref>), we determine the reduced Lagrangian ℒ_red for which d/dt∂ℒ_red/∂·r_i-∂ℒ_red/∂ r_i will generate these reduced equations. We find: ℒ_red=T_red-V=1/2(·r^2+ Br^2/r^2+B·θ^2)+BL/r^2+B·θ-(L^2/2(r^2+B)+U). 
So, we have our amended potential (scaled by 1/M_1M_2): V=L^2/2(r^2+B)-x_1/d_1-x_2/ d_2. Recall we can characterize the RE of our system as the critical points of the amended potential V. Taking our derivatives, we find for θ, the angular requirement: ∂_θV=x_1 x_2 rsinθ(1/d_1^3-1/d_2^3)=0. And for r, the radial requirement: ∂_rV=x_1(r-x_2cosθ)/d_1^3+x_2(r+x_1cosθ)/d_2^3 -L^2/(r^2+B)^2r=0. Observe that if we find a θ which satisfies (<ref>), then substitute it into (<ref>), we will have a relationship between r and L for RE. This allows us to parameterize the RE by r. Also note that for RE, if you fix angular momentum L, then for each r, equation (<ref>) provides a unique rotation speed ·φ. A nontrivial dumbbell is one that has positive length, both masses are positive, and whose distance from the system's center of mass is positive i.e., x_i∉{0,1}, and ℓ,r>0. The authors of <cit.> observed that for (<ref>) to hold with a nontrivial dumbbell, there are only two configurations: θ∈{ 0,π} or d_1=d_2. Remarkably, the only configurations allowed correspond to RE with masses being in a colinear alignment or in the shape of an isosceles triangle. Particularly, the isosceles RE exist with any choice of dumbbell masses (they need not be equal). These results seem less mysterious when considered as the only allowed configurations of our Perpendicular Bisector Theorem for RE of a Dumbbell and Rigid Planar Bodies. Before we look at each of these configurations individually, we first develop the tools we will need to analyze the energetic and linear stability. Energetic Stability As we saw in Section <ref>, to determine energetic stability we check if RE are strict minima of the amended potential V. So we calculate the Hessian of V: H:=[ ∂_r^2 V ∂_r,θ^2 V; ∂_r,θ^2 V ∂_θ^2 V ]. Any strict minima will produce real, positive H eigenvalues, making H positive definite. To simplify our calculations, we first recharacterize ∂_rV (equation <ref>) as: (g(r,θ) -L^2) r/(r^2+B)^2, where g(r,θ) :=(r^2+B)^2/r(x_1(r-x_2cosθ)/d_1^3+x_2(r+x_1cosθ)/d_2^3). So at a RE (critical point of ∂_rV), we have: g(r,θ) =L^2. Also, ∂_r^2V=r/(r^2+B)^2∂_rg(r,θ)+(g(r,θ)-L^2)(1-4r^2/B+r^2)1/(r^2+B)^2, which at RE is: ∂_r^2V|_RE=r/(r^2+B)^2∂_rg(r,θ). We can now determine the sign of ∂_r^2V at a RE by looking at ∂_rg, or ∂_rL^2. In other words, with the slopes of graphs like Figure <ref> (positive slopes indicate ∂_r^2V>0). Next, for θ, we calculate: ∂_θ^2V=x_1 x_2 rcosθ(1/d_1^3-1/d_2^3) -3x_1x_2 r^2sin ^2θ(x_2/d_1^5+x_1/d_2^5). And lastly: ∂_θ ,rV=∂_r,θV =x_1 x_2sinθ(1/d_1^3-1/d_2^3)+3x_1x_2 rsinθ(r+x_1cosθ/(r^2+2x_1rcosθ +x_1^2)^5/2-r-x_2cosθ/(r^2-2x_2rcosθ +x_2^2)^5/2). For a nontrivial (r≠ 0) critical point this becomes: ∂_θ,rV|_RE = 3x_1 x_2 rsinθ(r+x_1cosθ/(r^2+2x_1rcosθ +x_1^2)^5/2 -r-x_2cosθ/(r^2-2x_2rcosθ +x_2^2)^5/2). In the configurations below, we will use these in the second partial derivative test to determine whether H is positive definite, and therefore energetic stability of our RE. Linear Stability We determine linear stability by linearizing our equations of motion (<ref>) as ·v =Av⃗, and requiring purely imaginary A eigenvalues at RE. We start by rewriting the reduced Lagrangian in terms of the amended potential. So: ℒ_red=1/2(·r^2+2BL/r^2+B·θ+Br^2/r^2+B·θ^2) -V(r,θ), where V(r,θ) :=L^2/2(r^2+B)+U(r,θ) is the amended potential. Applying the Euler Lagrange equation gives us equations of motion: fleqntrue ··r=Br·θ(B·θ-2L)/(r^2+B)^2-∂_rV, ··θ=2L-B·θ/r(r+B^2)·r-r^2+B/Br^2∂_θV. 
For RE, we have ·r_RE=·θ_RE= ··r_RE=··θ_RE=0, giving us: fleqntrue ··r_RE=-∂_rV=0, ··θ_RE=-r^2+B/Br^2∂_θV=0, from which we can see our previous requirements for RE that ∂_rV=∂_θV=0. Letting v⃗:=[rθ·r·θ], we linearize our system (<ref>) as ·v =Av⃗, where A:=[ 0 I; A_3 A_4 ], A_3=[ B·θ(B·θ-2L)/(r^2+B)^2-4rBr·θ(B·θ-2L)/(r^2+B)^3-∂_r^2V -∂_r,θ^2V; 2(B·θ-L)(2r+B^2)/r^2(r+B^2)^2·r+(2r^2+B/Br^3-2/Br)∂_θV-r^2+B/Br^2∂_r,θ^2V -r^2+B/Br^2∂_θ^2V ], A_4=[ 0 Br(B·θ-2L)+B^2r·θ/(r^2+B)^2; 2L-B·θ/r(r+B^2) -2B/r(r+B^2)·r ], and I,0 are the 3× 3 identity and zero matrices, respectively. At RE (· r=·θ=∂_rV=∂_θV=0) we get A_RE=[ 0 I; A_3_RE A_4_RE ], where A_3_RE=[ -∂_r^2V -∂_r,θ^2V; -r^2+B/Br^2∂_r,θ^2V -r^2+B/Br^2∂_θ^2V ] and A_4_RE=[ 0 -2BLr/(r^2+B)^2; 2L-B·θ/r(r+B^2) 0 ]. Calculating the characteristic polynomial of A_RE, we find: z^4+c_1z^2+c_0, where c_1=∂_r^2V+4BL^2/(r^2+B)^3+r^2+B/Br^2∂_θ^2V and c_0=r^2+B/Br^2(∂_θ^2V∂_r^2V-(∂_r,θ^2V)^2). Note for linear stability we need imaginary eigenvalues. So our linear stability conditions are: c_1^2≥ 4c_0,c_1≥ 0, and c_0≥ 0. We will use these conditions in the configurations below to establish linear stability. §.§ Colinear Configuration: r>0 and θ =0 Let us first look at the simpler of the two configurations, when the masses are colinear. Note, there is a singularity when r=x_2, so this configuration naturally divides up into r>x_2 and r<x_2. We will not address the situation where r=x_2, as a model like ours, which uses point masses and Newton's interaction potential, does not accurately model any real situation when bodies are this close. For work that attempts to model these scenarios, see Scheeres <cit.> where he studies the case when the orbiting bodies come in contact with each other. Note also that the case r<x_2 is physically impossible due to the “massless rod” colliding with the r⃗_1 mass. So this model is explored more as a mathematical curiosity. From the radial requirement (<ref>) we find: x_1/(r-x_2)| r-x_2|+x_2/(r+x_1)^2 =L^2/(r^2+B)^2r. So since r>0, our bifurcation equation is: L^2(r)=(r^2+B)^2/r( x_1/(r-x_2)| r-x_2|+x_2/(r+x_1)^2). Graphing L^2 for x_2<x_1 with (x_1,M_1) =(6/10,1/2), and x_1<x_2 with (x_1,M_1) =(1/10,1/2) we have Figure <ref>. Given x_1,M_1, one can vary a horizontal line (representing the square of the angular momentum) in such a graph, using it as a bifurcation parameter. For x_2<x_1 (Figure <ref>), there appear to be RE only in the non-overlapped (r>x_2) part of the domain. So no RE for sufficiently low angular momenta L^2. However, for sufficiently large L^2, the line intersects the graph at two r values of different colinear RE. For x_1<x_2 (in Figure <ref>), we see RE in the overlapped part of the domain as well. This particular graph suggests there is only one RE for every L^2 in the overlap. However, we will see below that the overlapped region is a bit more complicated. §.§.§ Subcase 1: Non-Overlapped: r>x_2 To examine bifurcations when the dumbbell is not overlapping, we look at r>x_2. So let R:=r-x_2 or r=R+x_2. With this substitution, the dumbbells do not overlap for R∈(0,∞). Let a dumbbell and point mass be in a colinear non-overlapped RE. There are no RE for sufficiently low angular momenta, and two RE for angular momenta greater than some value L_b>0. We will check that in the non-overlap region, the graph of L^2 is qualitatively similar to Figure <ref>, that is, concave up ( (L^2)^ ''(R)>0) with one positive minimum. 
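Before carrying out that analysis, the claimed shape of the bifurcation curve is easy to probe numerically. The sketch below is only illustrative: it evaluates the colinear bifurcation equation above on the non-overlapped interval r > x_2 for the figure values (x_1, M_1) = (6/10, 1/2), locates the minimum L_b^2, and confirms that a value of L^2 slightly above it is attained at exactly two radii (the grid range and the 5% offset are arbitrary choices).

```python
# Numerical sketch of the colinear bifurcation curve on the non-overlapped region.
import numpy as np

x1, M1 = 0.6, 0.5                      # figure values quoted above
x2 = 1.0 - x1
B  = x1 * x2 / M1                      # scaled moment of inertia of the dumbbell

def L2(r):                             # squared angular momentum required for a RE
    return (r**2 + B)**2 / r * (x1 / (r - x2)**2 + x2 / (r + x1)**2)

r = np.linspace(x2 + 1e-3, 10.0, 200000)      # r > x_2 (non-overlap)
y = L2(r)
i = np.argmin(y)
print("minimum of L^2 ~", y[i], "at r ~", r[i])

# slightly above the minimum, a horizontal line should cut the curve exactly twice
target = 1.05 * y[i]
print("RE count for L^2 = 1.05 L_b^2:",
      np.sum(np.diff(np.sign(y - target)) != 0))      # expect 2
```

Raising or lowering `target` reproduces the two-RE/no-RE dichotomy described above.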
For our analysis, it is helpful to do the following change of variable: let x_1=u/1+u and x_2=1/1+u. Note that we still have x_1+x_2=1, but now we have characterized x_1,x_2 with one variable 0<u<∞. Making these changes in (L^2)^ ''(R), and multiplying by the positive expression h(R):=1/2M_1^2R^4(R+1)^4(u+1)^4(1+R+uR)^3, we find: h(R)L^2=M_1(3M_1+2)u(u+1)^5R^8+8M_1u(u+1)^4(3M_1+u+1)R^7 +6u(u+1)^3(M_1^2(3u+14)+2M_1(u^2+5u+1)+u) R^6 +4u(u+1)^2(3M_1^2(u^2+8u+14)+2M_1(u^3+11u^2+22u+1)+3u(2u+1))R^5 +u(u+1)[3M_1^2(u^3+19u^2+71u+70)+2M_1(u^4+27u^3+123u^2+137u+1) +3u(12u^2+23u+2)] R^4 +12u(M_1^2(u^3+9u^2+21u+14)+M_1u(u^3+11u^2+29u+21)+u^2(2u^2+8u+7)) R^3 +2u(M_1^2(9u^2+42u+42)+2M_1u(6u^2+32u+35)+u^2(3u^2+22u+28))R^2 +4u(3M_1^2(u+2)+M_1u(5u+11)+u^2(2u+5)) R+3u(M_1+u)^2. Observe (by inspection) the expression is always positive, so L^2 is concave up, giving at most one bifurcation. We also check that there is a critical point. We observe (L^2)^ '(R)=0 when: 0=M_1R^6+3M_1R^5+(3M_1(x_1^2+x_1 x_2+x_2^2) -3x_1x_2) R^4+(M_1(x_1^3-2x_1x_2(5x_2-1)+x_2^3)-3x_1x_2(2x_1+1))R^3-3x_1x_2(3+2x_2)(M_1x_2+x_1)R^2 -3x_1x_2(2x_2+1)(M_1x_2+x_1)R-2x_1x_2^2(M_1x_2+x_1). Note that this expression is continuous and takes negative (when R=0 for instance), and positive (as R→∞) values. Therefore, by the intermediate value theorem (L^2)^ '(R) has a zero. So L^2 must have a minimum, giving a bifurcation of the number of RE as L^2 is varied. Lastly, we check that this minimum is positive, giving these RE physical relevance (real angular momenta). Observe by inspection that for r>x_2 we have (using (<ref>)): L^2=1/r(r^2+B)^2(x_1/(r-x_2)^2+x_2/(r+x_1)^2), which is always positive. Therefore, for r>x_2 we see no RE for small angular momenta, and two RE for angular momenta larger than some positive bifurcation value L_b. Now let's look at the overlap case. §.§.§ Subcase 2: Overlap r<x_2 Note that for r<x_2, the bodies are overlapping. When this occurs, r⃗_p is located within the massless rod of the dumbbell. So, obviously this has limited practical application. This model could be used to loosely represent oddly shaped orbiting asteroids, the larger of which is shaped roughly like a bent dumbbell, and the smaller asteroid could be “orbiting” the larger in a RE within the concavity of the dumbbell (see the image above). But mostly, we study this case for mathematical curiosity. So let R:=r/x_2-r or r=x_2R/R+1. With this substitution, the bodies overlap for R∈(0,∞). From (<ref>) we have: L^2(R)=R+1/x_2 R(B+x_2^2R^2/(R+1)^2)^2(x_2/(x_2R/R+1+x_1)^2-x_1/(x_2R/R+1-x_2)^2). Then, multiplying (L^2)^ '(R) by the positive expression M_1^2x_2(R+1)^2(R+x_1)^3/2(M_1x_2 R^2+x_1(R+1)^2), and collecting powers of R we define: k(R):=-2x_1(M_1x_2+x_1)R^6-3x_1(2x_1+1)(M_1x_2+x_1)R^5-3x_1^2(2x_1+3)(M_1x_2+x_1)R^4 +(-10x_1^5-(11M_1+6)x_1^4x_2+3(1-3M_1)x_1^3x_2^2+3(M_1-1)x_1 x_2^4+M_1x_2^5)R^3 -3x_1x_2(x_2-x_1)((2-M_1)(x_1^2+x_1 x_2+x_2^2)+x_1 x_2)R^2 -3x_1(x_2-x_1)(x_1^2+x_1 x_2+x_2^2)R-x_1^2(x_2-x_1)(x_1^2+x_1 x_2+x_2^2). We will learn from analyzing k that we have two qualitatively different configurations; one when x_1<1/2<x_2, and another when x_2<1/2<x_1. Subcase 2a: x_1<1/2<x_2 Let a dumbbell and point mass be in a colinear overlapped RE. For x_1<x_2, with x_1 sufficiently large, we have (L^2)^ ' (R)<0 for all R, and therefore only one RE for each L^2 (Figure <ref>a). However, for x_1≪ x_2, we have (L^2)^ '>0 for one interval, and therefore three RE for some interval of L^2 values, and one RE otherwise (Figure <ref>). 
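Before turning to the proof, the proposition can be illustrated numerically. On the overlapped interval 0 < r < x_2 the colinear requirement reads L^2 = (r^2+B)^2/r (x_2/(r+x_1)^2 - x_1/(x_2-r)^2), equivalent to the L^2(R) expression above. The sketch below uses (x_1, M_1) = (0.01, 0.75) (a Region-B choice, in the terminology introduced below), finds the two interior extrema of this curve, and checks that an L^2 between them is attained at three radii.

```python
# Numerical illustration: three overlapped colinear RE for x1 << x2.
import numpy as np

x1, M1 = 0.01, 0.75                    # a Region-B choice (see below)
x2 = 1.0 - x1
B  = x1 * x2 / M1

def L2(r):                             # colinear requirement written directly in r
    return (r**2 + B)**2 / r * (x2 / (r + x1)**2 - x1 / (x2 - r)**2)

r = np.linspace(1e-4, x2 - 1e-4, 400000)      # overlapped region 0 < r < x_2
y = L2(r)

# interior local extrema of the bifurcation curve
ext = np.where((y[1:-1] - y[:-2]) * (y[2:] - y[1:-1]) < 0)[0] + 1
print("interior extrema of L^2:", y[ext], "at r =", r[ext])

# an L^2 strictly between the two extremal values should be attained three times
target = 0.5 * (y[ext].min() + y[ext].max())
print("RE count for this L^2:",
      np.sum(np.diff(np.sign(y - target)) != 0))      # expect 3
```

The two extrema found here correspond to (the squares of) the two bifurcation angular momenta quoted below for this parameter choice.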
In order to visually cover the entire range R∈(0,∞), here and in other graphs in this paper, we graph the horizontal axis as R=z/2-z for z∈(0,2). We look for zeros of (L^2)^ ' (R). Observe that in k(R), every term is negative except perhaps the R^3 term. Also, every term has an x_1 factor except the R^3 term. So, as x_1→ 0, every term except R^3 gets arbitrarily small. The R^3 term can remain large, since there is a x_2^5 in the coefficient which remains (and increases) as x_1→ 0. Note in particular, that the sign of k(R) will change since the positive R^3 term will survive, and the others (while being negative) will shrink to zero. So, for fixed R and x_1≪ x_2 we find positive slope (L^2)^ '(R), and as you then increase R, larger powers of R overtake the R^3 term, and the slope (L^2)^ '(R) changes from positive back to negative (Figure <ref>). To ensure that there is only one interval in which (L^2)^ '>0, we bound the number of zeros of (L^2)^ '(R) using Descartes' rule of signs on -k(R). Observe that when x_1<x_2, all of the coefficients of -k(R) except the coefficient of R^3 are trivially positive. And as we observed above, the coefficient of R^3 can change sign depending on the size of x_1. So, using the rule of signs, we observe that the sign changes twice or not at all, and we conclude that the number of (positive) roots is either two or zero. We also observe that as R→ 0 the constant term gives us x_1^2(x_1-x_2)(x_1^2+x_1 x_2+x_2^2) <0 (since x_1<x_2). And as R→∞, we see from the R^6 term that -2x_1(M_1x_2+x_1)<0, and the boundary behavior is what we suspect from the graphs above. So, with only two potential roots for (L^2)^ ', we see L^2 can have at most one increasing interval. So if the overlapped dumbbell mass x_1 is very small, for the right choice of angular momentum, the two bodies find themselves in a RE at one of three (overlapped) radii. Below, we graph k=d/dRk=0 (representing points in parameter space at which the extrema and inflection points of graphs like <ref> collide) in Figure <ref>. It shows regions of the x_1M_1-plane in which we have qualitatively different graphs of (L^2)(R). Region A has only one RE ((L^2)^ '<0 for all R, Figure <ref>), and B has one or three RE depending upon L ((L^2)^ '>0 on one interval, Figure <ref>). Below, we depict the orbital configurations of RE for a particular choice of (x_1,M_1) in Region B. Specifically, we depict the locations of the two bifurcation points which occur for (x_1,M_1) =(0.01, 0.75) at L≈ 0.6536 andL≈ 0.05795, respectively. Subcase 2b: x_2<1/2<x_1 Figure <ref> implies that in the overlap (r<x_2) there are no RE when x_2<x_1. To verify this, we show L^2<0, which would correspond to L with nonzero imaginary part, which are not physically realizable angular momenta. From (<ref>) we have: L^2=-((x_2^2+B) R^2+(2y+1) B)^2((R+2x_1) x_1R+(x_1^3-x_2^3)) /x_2^3R(R+1)(R+x_1)^2. Observe by inspection that when x_2<x_1, we have L^2 negative. Therefore, there are no physically real angular momenta for x_2<x_1 in the radius r<x_2, and therefore x_2<x_1 has no RE. Having characterized the location of the colinear RE, let us look at their stability. §.§.§ Energetic Stability of Colinear As we saw in Section <ref>, to determine energetic stability we check if the RE are strict minima of the amended potential V. We are looking for a positive definite H, the Hessian of V. Note from (<ref>) ∂_θ,rV|_RE=0 when θ =0, so our Hessian is diagonal. For stability we need ∂^2_rV, ∂^2_θV|_RE>0. 
Recall that we can determine the sign of ∂_r^2V|_RE by the slope of our L^2 curve. Next, calculating ∂_θ^2V|_RE, we have: x_1 x_2 r/(r^2-2x_2r+x_2^2)^3/2-x_1 x_2 r/(r^2+2x_1r+x_1^2)^3/2. Observe this is greater than zero when r>x_2-x_1/2. Note that this is always true when the bodies are not overlapping (i.e., r>x_2). And therefore, when (L^2)^ '>0, we conclude that our critical points are strict minima of the amended potential and energetically stable. Looking back at Figure <ref>, we recall that for sufficiently large angular momenta, in the non-overlap region (r>x_2) we have two RE, and now we see that the one with a greater radius is a strict minimum, and the other a saddle of V. When the bodies are overlapping (r<x_2) and x_1>1/2 we find physically unreal angular momenta (L^2<0). For x_1<1/2 (see Figure <ref>) we find the region r>x_2-x_1/2 (to the right of the solid line) intersects with the region (L^2)^ '>0 (below the dashed line) for low x_1, and therefore we have stability. We can therefore conclude that our critical points are energetically stable in this region. In the overlap case when r<x_2-x_1/2, the critical points are maxima of the amended potential when (L^2)^ '<0, and saddle points otherwise. However, since the kinetic energy in (<ref>) is positive definite, all the points in the overlap with r<x_2-x_1/2 represent saddles in the energy manifold. Note that in the overlap region when (x_1,M_1)=(0.008,1/2) (as in Figure <ref>) and with appropriate L^2, the second of the three RE occurs at a positive slope. In particular, when L^2 = 7/10 we have three RE at r∈(0.0865293, 0.721838, 0.80048) with the second and third one being larger than x_2-x_1/2=0.492. Therefore, the configuration in the Figure <ref> is energetically stable (we have curved the massless rod to make it a bit more physically feasible). §.§.§ Linear Stability of Colinear Mapping the linear stability conditions (<ref>) in the Rx_1-plane, for the non-overlapped colinear configuration, we have Figure <ref>. The dashed curve going through the plane is ∂_r L^2=0. So, numerically it appears the linear and energetic stability coincide. In Figure <ref>, we map the linear stability conditions (<ref>) for the overlapped colinear configuration. We limit our graph to x_1∈(0,1/2) where stability (and physically realizable angular momenta) could be found. Recall that we found energetic stability when r>x_2-x_1/2 and ∂_rL^2>0. That is, the filled in region below the ∂_r L^2=0 dashed curve. So the energetic stability coincides with the linear stability in this region. In addition, we also see some linear stability when r<x_2-x_1/2, but this region is where L^2<0, and therefore represents physically unrealizable RE. Now let us look at the other configuration having RE, when the bodies take the shape of an isosceles triangle. §.§ Isosceles Triangle Configuration: r>0 and d_1=d_2 Observe from (<ref>) that d_1=d_2 implies a relationship (found by Beletskii and Ponomareva) between the radius and the angle θ∈(-π/2,π/2]: cosθ =x_2-x_1/2r. Note that θ =π/2 when x_1=x_2, for any r>0. And when x_1<x_2, we have a one-to-one relationship for r∈[ x_2-x_1/2,∞), where θ∈[ 0,π/2). Similarly, for x_2<x_1, we have the one-to-one relationship for r∈[ x_1-x_2/2,∞), and θ∈(-π/2,0]. So, we have a minimum radius | x_2-x_1|/2. We see from (<ref>) and the radial requirement (<ref>) that when d_1=d_2 there is a simple relationship between rotational speed and the distances between the masses: (·φ)^2=L^2/(r^2+B)^2=1/d_1^3. 
Rearranging this to find bifurcations of our RE with parameter L, we have: L^2=(r^2+B)^2/(r^2+M_1B)^3/2, where B:=x_1 x_2/M_1 is the dumbbell's scaled moment of inertia relative to its center of mass. Graphing for a couple of particular parameters (x_1,M_1), we have Figure <ref>. Let a dumbbell and point mass body be in an isosceles RE. The x_1M_1-plane divides into regions which differ by the possible number of RE. Particularly for M_1<12 x_1 x_2/(x_1 - x_2)^2+16x_1 x_2, and as L^2 increases from zero, we initially have no RE until a bifurcation point at L_b=√(256/27BM_2), where two RE appear at r_b=√(B(3-4M_1)). Subsequently, a RE merges with the origin. As a result, we have have zero, then two, then one RE. For larger M_1, the bifurcation curve has no local minima, but does have an absolute minimum at r=| x_2-x_1|/2. Therefore, we have zero, then one RE as L^2 increases from zero. Note that a requirement for bifurcation is: (L^2)^ '(r)=4r(r^2+B)(r^2+BM_1) -3r(r^2+B)^2/(r^2+BM_1)^5/2 =0, from which we find only one positive critical point r_b:=√(B(3-4M_1)), when M_1<3/4. We must also ensure that r_b is greater than or equal to our previously calculated minimum radius r_min=|x_2-x_1|/2. Comparing these expressions, we find the additional restriction M_1≤12x_1x_2/(x_1-x_2)^2+16x_1x_2. A simple calculation shows the maximum of the right-hand side of this inequality occurs when x_1=1/2 and M_1=3/4. Therefore, making this inequality strict satisfies M_1<3/4. So, if the mass of the point mass body M_1 is too large, the related graph has a RE curve with no bifurcation (as in Figure <ref>). For instance, this would be the case for a planet and an orbiting small (dumbbell shaped) satellite. For smaller M_1 (see Figure <ref>), we compute the (square of the) angular momentum at the point of bifurcation L^2(r_b): L_r_b:=√(256/27BM_2). To ensure our critical point is a minimum, we calculate (L^2)^ ''(r_b)=8√(3)(4M_1-3)/27(M_1-1)√(-B(M_1-1))>0. So there is only one bifurcation and it is located at (r_b,L_r_b). As r→ 0, we find the angular momentum for RE becomes: L_0:=√(B/M_1^3)>0. Or, since our minimum radius is | x_2-x_1|/2, a more relevant initial angular momentum is: L_r_min:=r→| x_2-x_1|/2lim(L)=(x_2-x_1)^2+4B/√(2)((x_2-x_1)^2+4M_1B)^3/4>0. Below, Figure <ref> represents the regions of the x_1M_1-plane in which we have qualitatively different graphs of L^2(r). In the region below the curve, the graph of L^2(r) has a minimum as calculated above (with a graph similar to Figure <ref>), and therefore a bifurcation above which, there are two RE until L^2>L_0, where there exists a unique RE. The region above the curve has no local minimums, and therefore has a unique RE for each L^2>L_r_min (Figure <ref>). What do the two isosceles RE look like in the lower region, and how do they differ? Looking at the example in Figure <ref> (where (x_1,M_1) =(3/4,9/20) with L^2=1.7), we calculate the radii to be r≈ 0.3384 or 1.262. And the related θ are approximately 0.7646π and 0.5634π, respectively. See the orbital configurations below. For the smaller radius RE, the rotational speed is much higher ·φ ≈ 2.45 compared to the larger radius which has ·φ ≈ 0.649. Now that we have located our isosceles RE, let us see if they are stable. Energetic Stability of Isosceles As we saw in Section <ref>, to determine energetic stability we check if the RE are strict minima of the amended potential V. So, we are looking for a positive definite H, the Hessian of V. 
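Before evaluating the Hessian entries, the isosceles formulas above admit a quick numerical check against the worked example quoted above, (x_1, M_1) = (3/4, 9/20) with L^2 = 1.7. The sketch below recovers the two RE radii r ≈ 0.3384 and r ≈ 1.262 and compares L^2 at the bifurcation radius r_b = √(B(3-4M_1)) with the expression √(256BM_2/27) derived above (read here as the value of L^2 at r_b).

```python
# Numerical check of the isosceles bifurcation formulas, using the worked example
# (x1, M1) = (3/4, 9/20), L^2 = 1.7 from the text.
import numpy as np
from scipy.optimize import brentq

x1, M1 = 0.75, 0.45
x2, M2 = 1.0 - x1, 1.0 - M1
B = x1 * x2 / M1

L2 = lambda r: (r**2 + B)**2 / (r**2 + M1 * B)**1.5    # isosceles requirement
r_min = abs(x2 - x1) / 2.0                             # minimum admissible radius
r_b   = np.sqrt(B * (3.0 - 4.0 * M1))                  # bifurcation radius

print("L^2 at r_b:", L2(r_b), " formula:", np.sqrt(256.0 * B * M2 / 27.0))

# the two isosceles RE radii for L^2 = 1.7 (text: ~0.3384 and ~1.262)
f = lambda r: L2(r) - 1.7
print("RE radii:", brentq(f, r_min, r_b), brentq(f, r_b, 10.0))
```

Both radii agree with the values quoted above to the digits given.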
Using the second partial derivative test, with cosθ = x_2-x_1/2r observe from (<ref>) that: ∂_θ^2V=x_1 x_2 rcosθ(1/d_1^3-1/d_2^3)-3x_1x_2 r^2sin^2θ(x_2/d_1^5+x_1/d_2^5)=-3x_1x_2 r^2sin^2θ/d_1^5<0. So, the only possibilities are maxima or saddles. Evaluating the eigenvalues, one finds that these are maxima of V. So we will have no stable RE for the isosceles configuration. If we wish to classify what type of critical points these are on the energy manifold ℋ, note that the kinetic energy in (<ref>) is positive definite, so that the RE found in the isosceles configuration are all saddle points in the energy manifold. Linear Stability of Isosceles Although we lack energetic stability, when we map the linear stability conditions (<ref>) for the isosceles configuration (See Figure <ref>), for M_1<12 x_1 x_2/(x_1 - x_2)^2+16x_1 x_2≤3/4 we see that for each x_1 we have linear stability for some interval(s) of (small) r. We graph r_min as a light gray curve and ∂_r L^2=0 as a dashed curve. With respect to x_1, numerically we find linear stability to be bounded above and below by M_1<12 x_1 x_2/(x_1 - x_2)^2+16x_1 x_2≤3/4 (causing the horizontal region of linear stability to narrow as M_1 increases). Radially, we find linear stability bounded below by r_min (where the radial eigenvalue of the amended potential's Hessian turns negative) and above by ∂_r L^2=0 (where the radial eigenvalue turns positive). Since ∂_r L^2>0 for all radii when M_1>3/4, we see the reason for the M_1 restriction. Since there is no linear stability for M_1≥12 x_1 x_2/(x_1 - x_2)^2+16x_1 x_2, that means there is no linear stability in the upper region of Figure <ref>, or for bifurcation curves like Figure <ref>. Physically, this means that for linear stability we need the dumbbell body to be a significant portion of the overall mass of the system. This is an unreasonable scenario for a large astronomical object and a dumbbell shaped artificial satellite (since the mass differential would be too great), but this could certainly be accomplished by two natural objects, or a small asteroid and an artificial satellite. If instead, we consider the point mass body as modeling a somewhat spherical shaped artificial satellite with M_1 very small, and the dumbbell modeling a massive oblong asteroid or moon, we can find linear stability. As a next step up in complexity of our model, we examine the RE of a planar system with two dumbbells. CHAPTER: PLANAR TWO-DUMBBELL PROBLEM Recall how the possible configurations for the planar dumbbell/point mass problem (colinear and isosceles) were easily determined from the angular requirements. For the planar two-dumbbell problem, however, finding the complete set of solutions to the angular requirements is not possible. Instead, we consider simplifications which are likely to lead to RE. Intuitively, and from our experience with the dumbbell/point mass problem, we suspect having dumbbells in symmetric configurations (colinear, perpendicular, parallel) may lead to RE. Another option involves symmetry of masses on the dumbbells (equal masses). In what follows, you will see this process play out as we find RE with these symmetric qualities. Of course, this process begs the question: “Are there any asymmetric RE?” Yes. In addition to the symmetric RE, below we locate families of RE with asymmetric angular rotations bifurcating from the symmetric RE, but having some symmetry with respect to the dumbbell mass values. 
We have also numerically located RE where all of the parameters and rotation angles are asymmetric. In Section <ref> of this paper, we prove an extension of the Conley Perpendicular Bisector Theorem which puts geometric restrictions on the location of the dumbbell bodies for these RE. [Perpendicular Bisector Theorem for RE of a Dumbbell and Rigid Planar Bodies] Let a dumbbell r_1r_2 and one or more planar rigid bodies ℬ_2,...,ℬ_n be in a planar RE. Then, if one of the two open cones determined by the lines through r_1r_2 and its perpendicular bisector contains one or more rigid bodies, the other open cone cannot be empty. In particular, from that theorem we can conclude in advance that the dumbbells cannot be wholly contained in an open quadrant defined by the other dumbbell's rod and that rod's perpendicular bisector. In particular, we might suspect a colinear or a perpendicular configuration could be RE. Additionally, our theorem allows for RE when one of the dumbbell's masses is in an open quadrant, and the other mass is in a neighboring quadrant. The RE we locate in this chapter do indeed obey the restrictions identified by our theorem. § SET-UP AND NOTATION We will follow the notation conventions used in the dumbbell/point mass problem. For the following description, refer to Figure <ref>. Let the origin and center of mass of our system reference frame be O. We denote the mass ratios of the dumbbell bodies as M_i such that the total system mass is scaled to M_1+M_2=1. Each dumbbell body has a center of mass denoted C⃗_i. Each dumbbell body consists of two masses, we denote their mass ratios as x_i1 and x_i2 such that x_i1+x_i2=1. The masses are located at r⃗_i1 and r⃗_i2 (with respect to the system frame), which are connected by a massless rod of length ℓ_i. We scale the system distances to let ℓ_1+ℓ_2=1. We label the “radius” vector C_1C_2 as r⃗, and r:=|r⃗|. We let φ be the acute angle between the system's horizontal axis and r⃗, so C⃗_i=(-1)^iM_nr(cosφ,sinφ), where n≠ i. And we let θ_i represent the angle between r⃗ and each dumbbell's rod, where if θ_i=0, then x_in is the mass lying on C_1,C_2, where n≠ i. We see that we have four degrees of freedom: r, φ, θ_1, and θ_2. And the system position vectors for the masses are: r⃗_ij=(-1)^iM_nr(cosφ ,sinφ)+(-1)^jx_ikℓ_i(cos(φ +θ_i) , sin(φ+θ_i)), where n≠ i and k≠ j. Note, the mass associated with each r⃗_ij is M_ix_ij. Also, we denote the distance between r⃗_1u and r⃗_2v as: d_uv(r,θ_1,θ_2) := √(r^2-(-1)^u2x_1u ℓ_1rcosθ_1+(-1)^v2x_2v ℓ_2rcosθ_2-(-1)^u+v2x_1u x_2v ℓ_1ℓ_2cos(θ_1-θ_2)+x_1u ^2ℓ_1^2+x_2v ^2ℓ_2^2), where u≠u and v≠v . § EQUATIONS OF MOTION Unless otherwise stated, we assume both dumbbells are nontrivial (as we have already considered the dumbbell/point mass problem in the previous chapter). In order to find the equations of motion, we let v⃗_ij denote the velocities of r⃗_ij. We calculate the Lagrangian ℒ =T -U , where T =1/2∑_i,j∈{1,2}M_ix_ijv⃗_ij^2 is the kinetic energy, and the potential energy of the system (with G=1, and scaled by 1/M_1M_2) is U =-∑_u,v ∈{1,2}^x_1ux_2v/d_uv. To calculate kinetic energy, we first calculate the velocity vectors: v⃗_ij=(-1)^iM_n·r(cosφ,sinφ) + (-1)^iM_nr·φ(-sinφ,cosφ) +(-1)^jx_ikℓ_i(·φ+·θ_i)(-sin(φ+θ_i),cos(φ+θ_i)). Recalling some trigonometric identities, we find: v⃗_ij^2= M_n^2(·r^2+r^2·φ^2)+x_ik^2ℓ_i^2(·θ_i+·φ)^2 +(-1)^i+j2M_nx_ikℓ_i(·θ_i+·φ)(r·φcosθ_i-·rsinθ_i), where n≠ i and k≠ j. 
And the kinetic energy is: T =1/2M_1M_2(·r^2+r^2·φ^2)+1/2x_11x_12M_1ℓ_1^2(·θ_1+·φ)^2+1/2x_21x_22M_2ℓ_2^2(·θ_2+·φ)^2. So, our Lagrangian (scaled by 1/M_1M_2) is: ℒ=T -U =1/2(·r^2+r^2·φ^2)+1/2B_1(·θ_1+·φ)^2+1/2B_2(·θ_2+·φ)^2-U, where B_i:=x_i1x_i2/M_nℓ_i^2 (with n≠ i) are the scaled moments of inertia relative to the centers of mass for each of the dumbbells. The equations of motion are given by the Euler-Lagrange equation: d/dt∂ℒ/∂·q_i-∂ℒ/∂ q_i=0, for each degree of freedom q_i∈{ r,φ ,θ_1,θ_2}. We calculate: fleqntrue ··r-r·φ^2=-∂ U/∂ r, r^2··φ+2r·r·φ+B_1(··θ_1+··φ)+B_2(··θ_2+··φ) =-∂ U/∂φ=0, B_i(··θ_i+··φ) =-∂ U/∂θ_i, for i∈{ 1,2}. § FINDING RE We now wish to reduce our system using the amended potential method in Section <ref> by exploiting the conserved angular momentum. So, as in the previous models, we calculate angular momentum (scaled by 1/M_1M_2) as: ∂ℒ/∂·φ=r^2·φ+B_1(·θ_1+·φ)+B_2(·θ_2+·φ) =:L. Or, solving for the rotational speed: ·φ =L-B_1·θ_1-B_2·θ_2/r^2+B_1+B_2 or ·φ =L/r^2+B_1+B_2 at RE. We can now eliminate velocity ·φ from our system using (<ref>). Upon substituting ·φ into (<ref>), and solving for ··φ, we see it is equivalent to ··φ =-2r·rL-B_1·θ_1-B_2·θ_2/(r^2+B_1+B_2)^2-B_1··θ_1+B_2··θ_2/r^2+B_1+B_2. And substituting ·φ and ··φ into (<ref>) and (<ref>), we determine the reduced Lagrangian ℒ_red for which d/dt∂ℒ_red/∂·r_i-∂ℒ_red/∂ r_i will generate these reduced equations. We find: ℒ_red=T_red-V =1/2(·r^2+B_1r^2+B_2/r^2+B_1+B_2·θ_1^2+B_2r^2+B_1/r^2+B_1+B_2·θ_2^2)+L(B_1·θ_1+B_2·θ_2)-B_1B_2·θ_1·θ_2/r^2+B_1+B_2 -(L^2/2(r^2+B_1+B_2)+U). So, we have our amended potential (scaled by 1/M_1M_2): V:=L^2/2(r^2+B_1+B_2)+U. Recall we can characterize the RE of our system as the critical points of the amended potential V. Taking the r derivative of V, we find the radial requirement: ∂_rV=x_11(x_21(r+x_12ℓ_1cosθ_1-x_22ℓ_2cosθ_2)/d_11^3+x_22(r+x_12ℓ_1cosθ_1+x_21ℓ_2cosθ_2)/d_12^3) +x_12(x_21(r-x_11ℓ_1cosθ_1-x_22ℓ_2cosθ_2)/d_21^3+x_22(r-x_11ℓ_1cosθ_1+x_21ℓ_2cosθ_2)/d_22^3)-rL^2/(r^2+B_1+B_2)^2=0. In the cases pursued below, we will wish to examine bifurcations of the related L^2 graph in order to identify qualitatively different parts of our parameter space as it relates to the quantity and stability of the RE. To this end, we rearrange (<ref>) as: L^2=(r^2+B_1+B_2)^2/rx_11(x_21(r+x_12ℓ_1cosθ_1-x_22ℓ_2cosθ_2)/d_11^3+x_22(r+x_12ℓ_1cosθ_1+x_21ℓ_2cosθ_2)/d_12^3) +(r^2+B_1+B_2)^2/rx_12(x_21(r-x_11ℓ_1cosθ_1-x_22ℓ_2cosθ_2)/d_21^3+x_22(r-x_11ℓ_1cosθ_1+x_21ℓ_2cosθ_2)/d_22^3). Taking θ_i derivatives of V, we find angular requirements: ∂_θ_1V= x_11x_12ℓ_1(x_21(x_22ℓ_2sin(θ_1-θ_2)-rsinθ_1)(1/d_11^3-1/d_21^3)) +x_11x_12ℓ_1(x_22(x_21ℓ_2sin(θ_1-θ_2)+rsinθ_1)(1/d_22^3-1/d_12^3)) =0, ∂_θ_2V= x_21x_22ℓ_2(x_11( x_12ℓ_1sin(θ_1-θ_2) -rsinθ_2)(1/d_12^3-1/d_11^3)) +x_21x_22ℓ_2(x_12(x_11ℓ_1sin(θ_1-θ_2)+rsinθ_2)(1/d_21^3-1/d_22^3)) =0. Simplifying we have: 0=x_21(x_22ℓ_2sin(θ_1-θ_2)-rsinθ_1)(1/d_11^3-1/d_21^3) +x_22(x_21ℓ_2sin(θ_1-θ_2)+rsinθ_1)(1/d_22^3-1/d_12^3), 0=x_11(x_12ℓ_1sin(θ_1-θ_2)-rsinθ_2)(1/d_11^3-1/d_12^3) -x_12(x_11ℓ_1sin(θ_1-θ_2)+rsinθ_2)(1/d_21^3-1/d_22^3). Note that similar to Beletskii and Ponomareva's work on the dumbbell/point mass problem, if we can solve the angular requirements above for θ_1,θ_2, we can substitute these into the radial requirement and find r,L, pairs, which are then associated with a unique rotational speed ·φ given by (<ref>). However, finding the complete set of RE solutions to the angular requirements is nontrivial. 
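In practice, RE can also be hunted numerically as critical points of the amended potential, which is how the asymmetric solutions mentioned above were located. The following is only a sketch: the mass and length ratios, the angular momentum L, and the initial guess are assumptions chosen for illustration, and the geometry follows the position vectors r⃗_ij defined above with φ = 0.

```python
# Sketch: locate a RE of the two-dumbbell problem as a critical point of the
# amended potential V(r, theta_1, theta_2).  All parameter values are assumed.
import numpy as np
from scipy.optimize import fsolve

x11, x21, M1, l1, L = 0.5, 0.5, 0.5, 0.5, 2.0            # illustrative choices
x12, x22, M2, l2 = 1 - x11, 1 - x21, 1 - M1, 1 - l1
B1, B2 = x11 * x12 * l1**2 / M2, x21 * x22 * l2**2 / M1  # scaled moments of inertia

def masses(r, t1, t2):
    """Mass ratios and positions of the four point masses (system frame, phi = 0)."""
    e1, e2 = np.array([np.cos(t1), np.sin(t1)]), np.array([np.cos(t2), np.sin(t2)])
    c1, c2 = np.array([-M2 * r, 0.0]), np.array([M1 * r, 0.0])   # dumbbell centres
    return [(x11, c1 - x12 * l1 * e1), (x12, c1 + x11 * l1 * e1),
            (x21, c2 - x22 * l2 * e2), (x22, c2 + x21 * l2 * e2)]

def V(q):
    """Amended potential, scaled by 1/(M1 M2) as in the text."""
    r, t1, t2 = q
    m = masses(r, t1, t2)
    U = -sum(xa * xb / np.linalg.norm(pa - pb)        # cross terms between bodies
             for xa, pa in m[:2] for xb, pb in m[2:])
    return L**2 / (2.0 * (r**2 + B1 + B2)) + U

def grad(q, h=1e-5):
    """Central-difference gradient of V; its zeros are RE."""
    return np.array([(V(q + h * e) - V(q - h * e)) / (2 * h) for e in np.eye(3)])

q0 = np.array([3.5, 0.1, -0.1])          # guess near the (outer) colinear family
q_re = fsolve(grad, q0, epsfcn=1e-6)
print("critical point (r, th1, th2):", q_re, " |grad| =", np.linalg.norm(grad(q_re)))
```

For this symmetric choice and initial guess the solver is expected to land on the colinear family treated below; other guesses (e.g. near θ_i = π/2) can be used to probe the perpendicular configuration.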
To proceed beyond this impasse, we will consider symmetric configurations allowed by the Perpendicular Bisector Theorem for RE of a Dumbbell and Rigid Planar Bodies Theorem, for which one might suspect RE. Energetic Stability Once we find RE, we will want to analyze their stability. Recall in the dumbbell/point mass problem that we had energetic stability for the colinear case only. So we hope to find some energetic stability for a colinear configuration here as well. As we saw in Section <ref>, to determine energetic stability we check if the RE are strict minima of the amended potential V. To this end, we calculate the Hessian H of V: H=[ ∂^2_rV ∂^2_r,θ_1V ∂^2_r,θ_2V; ∂^2_θ_1,rV ∂^2_θ_1V ∂^2_θ_1θ_2V; ∂^2_θ_2,rV ∂^2_θ_2,θ_1V ∂^2_θ_2V ]. For several of the RE configurations examined below, we find H becomes block diagonal (∂^2_r,θ_iV=0). So let us recharacterize ∂_r V in a way that helps us determine the sign of ∂^2_r V, and therefore the sign of H's radial eigenvalue. Calculating ∂_r V, we see we can recharacterize it as: ∂_rV=(g(r,θ_1,θ_2) -L^2)r/(r^2+B_1+B_2)^2, where: g(r,θ_1,θ_2) :=(r^2+B_1+B_2)^2/rx_11(x_21(r+x_12ℓ_1cosθ_1-x_22ℓ_2cosθ_2)/d_11^3+x_22(r+x_12ℓ_1cosθ_1+x_21ℓ_2cosθ_2)/d_12^3) 65pt+(r^2+B_1+B_2)^2/rx_12(x_21(r-x_11ℓ_1cosθ_1-x_22ℓ_2cosθ_2)/d_21^3+x_22(r-x_11ℓ_1cosθ_1+x_21ℓ_2cosθ_2)/d_22^3). This is done so that at a critical point of ∂_r V, we have: g=L^2. ∂_r^2V is then: r/(r^2+B_1+B_2)^2∂_rg+(g-L^2)(1-4r^2/r^2+B_1+B_2) 1/(r^2+B_1+B_2)^2, which at a critical point of ∂_r V becomes: ∂_r^2V|_RE=r/(r^2+B_1+B_2)^2∂_rg. So, we can determine the sign of ∂_r^2V with ∂_rg, or equivalently ∂_rL^2. ∂_rL^2>0∂_r^2V>0. In other words, we can determine the sign of ∂^2_r V with the slopes of graphs like Figure <ref> below. Linear Stability We will determine linear stability for the two-dumbbell problem in the same way as we did with the dumbbell/point mass problem, by first rewriting the reduced Lagrangian (<ref>) in terms of the amended potential. So: ℒ_red=1/2·r^2+1/2(r^2+B_1+B_2)(B_1(r^2+B_2)·θ_1^2+B_2(r^2+B_1)·θ_2^2+2L(B_1·θ_1+B_2·θ_2)-B_1B_2·θ_1·θ_2) -V(r,θ_i), where V(r,θ_i):=L^2/2(r^2+B_1+B_2)+U(r,θ_i) is the amended potential. Applying the Euler Lagrange equation and solving for acceleration gives us equations of motion: fleqntrue ··r =r(B_1·θ_1+B_2·θ_2)(B_1·θ_1+B_2·θ_2-2L)/(r^2+B_1+B_2)^2-∂_rV, ··θ_i=-2·r(B_1·θ_1+B_2·θ_2-L)/r(r^2+B_1+B_2)-(1/B_i+1/r^2)∂_θ_iV-1/r^2∂_θ_nV, for i∈{1,2} and n≠ i. For RE, we have ·r_RE =·θ_iRE=··r_RE=··θ_iRE=0, giving us: fleqntrue ··r_RE=-∂_rV=0, ··θ_iRE=-1/r^2(r^2+B_i/B_i∂_θ_iV+∂_θ_nV)=0, for i∈{1,2} and n≠ i, from which we can see our previous requirements for RE that ∂_rV=∂_θ_iV=0. Letting v:=[rθ_1θ_2·r·θ_1·θ_2], we linearize our system (<ref>) as ·v =Av, which at RE (·r=·θ_i=∂_rV=∂_θ_iV=0) has: A_RE :=[ 0 I; A_3 A_4; ]_ RE, where A_3=[ -∂_r^2V -∂_r,θ_1^2V -∂_r,θ_2^2V; -1/r^2(∂_rθ_2^2V+r^2+B_1/B_1∂_rθ_1^2V) -1/r^2(∂_θ_1θ_2^2V+r^2+B_1/B_1∂_θ_1^2V) -1/r^2(∂_θ_2^2V+r^2+B_1/B_1∂_θ_1θ_2^2V); -1/r^2(∂_rθ_1^2V+r^2+B_2/B_2∂_rθ_2^2V) -1/r^2(∂_θ_1^2V+r^2+B_2/B_2∂_θ_1θ_2^2V) -1/r^2(∂_θ_1θ_2^2 V+r^2+B_2/B_2∂_θ_2^2V) ], A_4=[ 0 -2B_1Lr/(r^2+B_1+B_2)^2 -2B_2Lr/(r^2+B_1+B_2)^2; 2L/r(r^2+B_1+B_2) 0 0; 2L/r(r^2+B_1+B_2) 0 0 ], and I,0 are respectively the 3×3 identity and zero matrices. For linear stability, we need the eigenvalues of A_RE to be purely imaginary. 
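That criterion can be checked numerically before the characteristic polynomial is computed explicitly. The sketch below continues the previous one (it assumes V, grad, q_re, B1, B2, and L from that sketch are still in scope, and the finite-difference step is an arbitrary choice): it assembles A_RE from finite-difference second derivatives of the amended potential, using the block formulas just given, and inspects the real parts of its eigenvalues.

```python
# Continuing the previous sketch: numerical linear-stability check at q_re.
import numpy as np

def hessV(q, h=1e-4):
    """Central-difference Hessian of the amended potential V at q."""
    n = len(q); H = np.zeros((n, n)); I = np.eye(n)
    for a in range(n):
        for b in range(n):
            H[a, b] = (V(q + h*I[a] + h*I[b]) - V(q + h*I[a] - h*I[b])
                       - V(q - h*I[a] + h*I[b]) + V(q - h*I[a] - h*I[b])) / (4*h*h)
    return H

r = q_re[0]
H = hessV(q_re)                          # entries V_rr, V_rth1, ..., V_th2th2
Vrr, Vr1, Vr2 = H[0]
_,   V11, V12 = H[1]
_,   _,   V22 = H[2]
k1, k2 = (r**2 + B1) / B1, (r**2 + B2) / B2

A3 = np.array([
    [-Vrr, -Vr1, -Vr2],
    [-(Vr2 + k1 * Vr1) / r**2, -(V12 + k1 * V11) / r**2, -(V22 + k1 * V12) / r**2],
    [-(Vr1 + k2 * Vr2) / r**2, -(V11 + k2 * V12) / r**2, -(V12 + k2 * V22) / r**2]])
A4 = np.array([
    [0.0, -2*B1*L*r/(r**2 + B1 + B2)**2, -2*B2*L*r/(r**2 + B1 + B2)**2],
    [2*L/(r*(r**2 + B1 + B2)), 0.0, 0.0],
    [2*L/(r*(r**2 + B1 + B2)), 0.0, 0.0]])
A_RE = np.block([[np.zeros((3, 3)), np.eye(3)], [A3, A4]])

eig = np.linalg.eigvals(A_RE)
# small (of the order of the finite-difference error) when the RE is linearly stable
print("max |Re(eigenvalue)| of A_RE:", np.abs(eig.real).max())
```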
Calculating the characteristic polynomial of A_RE, we find: p(z):=z^6+c_2z^4+c_1z^2+c_0, where c_2=4 L^2 (B_1+B_2)/(r^2+B_1+B_2)^3+ r^2+B_1/B_1 r^2∂^2_θ_1V+r^2+B_2/B_2 r^2∂^2_θ_2V+2/r^2∂^2_θ_1θ_2V+∂^2_rV, c_1=4 L^2/B_1 B_2 (r^2+B_1+B_2)^3(B_1^2 ∂^2_θ_2V+B_2^2 ∂^2_θ_1V-2 B_1 B_2 ∂^2_θ_1θ_2V) +r^2+B_1+B_2/B_1 B_2 r^2(∂^2_θ_1V ∂^2_θ_2V-∂^2_θ_1θ_2V^2)+r^2+B_1/B_1 r^2(∂^2_rV ∂^2_θ_1V-(∂^2_θ_1V)^2) +r^2+B_2/B_2 r^2(∂^2_rV ∂^2_θ_2V-(∂^2_θ_2V)^2)+2/r^2(∂^2_rV ∂^2_θ_1θ_2V-∂^2_θ_1V ∂^2_θ_2V), c_0=-r^2+B_1+B_2/B_1 B_2 r^2(∂^2_θ_1V ((∂^2_θ_2V)^2-∂^2_rV ∂^2_θ_2V)+∂^2_rV ∂^2_θ_1θ_2V^2) -r^2+B_1+B_2/B_1 B_2 r^2((∂^2_θ_1V)^2 ∂^2_θ_2V-2 ∂^2_θ_1V ∂^2_θ_2V ∂^2_θ_1θ_2V). Considering the polynomial p(z^2) =(z^2)^3+c_2(z^2)^2+c_1z^2+c_0 as a cubic in z^2, our requirement is that z^2 roots are real and negative. For a cubic to have all real roots we need the discriminant Δ≥0, where Δ=-27c_0^2-4c_1^2+18c_0c_1c_2+c_1^2c_2^2-4c_0c_2^3. This becomes our first criteria for linear stability. To ensure that the roots are negative, we can employ the Routh-Hurwitz stability criteria <cit.>. These criteria require the coefficients to be real, which we have. For a cubic, the criteria are: c_i>0. So our criteria for z^2 negative roots become: Δ≥ 0 and c_i>0. Roots matching these criteria give us z purely imaginary, and also linear stability. We will use these criteria for the configurations examined below. For our first configuration, as with the dumbbell/point mass problem, we find RE when the dumbbells are colinear. §.§ Colinear Configuration: θ_1,θ_2 =0 Note that this configuration (θ_1,θ_2 =0) immediately satisfies the angular requirements (<ref>). In terms of L^2, from the radial requirement (<ref>) we find: L^2(r)=(r^2+B_1+B_2)^2/rx_11(x_21/(r+x_12ℓ_1-x_22ℓ_2)|r+x_12ℓ_1-x_22ℓ_2|+x_22/(r+x_12ℓ_1+x_21ℓ_2)^2) +(r^2+B_1+B_2)^2/rx_12(x_21/(r-x_11ℓ_1-x_22ℓ_2)|r-x_11ℓ_1-x_22ℓ_2|+x_22/(r-x_11ℓ_1+x_21ℓ_2)|r-x_11ℓ_1+x_21ℓ_2|). Observe that we have singularities (collisions of the masses) when d_uv=0, or equivalently when r∈{r_4,r_3,r_2,r_1}:={-x_21ℓ_2-x_12ℓ_1, -x_21ℓ_2+x_11ℓ_1, x_22ℓ_2-x_12ℓ_1, x_22ℓ_2+x_11ℓ_1}. For this configuration, not only do we have singularities, but also “overlap.” That is, for sufficiently small radii, the inner mass of each body is located within the massless rod of the other body. We see five intervals around these singularities: -∞<r_4< r_3,r_2 <r_1<∞. We can ignore the interval r<r_4 (non-overlap, but negative radius), as this case is the same as r>r_1 with the x_ij and ℓ_i permuted. Indeed, when r<r_2,r_3, we may have negative radius, but we can disregard these radially negative cases for the same reason. However, the behavior of our system has the most physical relevance when r>r_1, the non-overlapped case. We will restrict our attention to this case. §.§.§ Case: r>r_1, Non-Overlap To examine this interval of most interest, let: R:=r-x_22ℓ_2-x_11ℓ_1 or r=R+x_11ℓ_1+x_22ℓ_2. So for R∈(0,∞), we have r>r_1. Substituting this into (<ref>): L^2(R)= 1/R+x_11ℓ_1+x_22ℓ_2(x_11x_21/(R+ℓ_1)^2+x_12x_22/(R+ℓ_2)^2)(x_11x_12/M_2ℓ_1^2+x_21x_22/M_1ℓ_2^2+(R+x_11ℓ_1+x_22ℓ_2)^2)^2 +1/R+x_11ℓ_1+x_22ℓ_2(x_11x_22/(R+1)^2+x_12x_21/R^2)(x_11x_12/M_2ℓ_1^2+x_21x_22/M_1ℓ_2^2+(R+x_11ℓ_1+x_22ℓ_2)^2)^2. Now we will prove the shape of the non-overlap L^2 graphs. In Figure <ref>, we see the RE bifurcate for a particular value of L^2. Below this value, there are no RE, and above it there are two RE. In the planar colinear non-overlapped two-dumbbell problem, and for sufficiently low angular momenta L, there are no RE. 
However, for some angular momentum L_b>0, and at some radius r_b, two RE bifurcate. For all angular momenta greater than L_b, there are two RE. By inspection, observe that (<ref>) is always positive, and L^2(R)→∞ as R→{0,∞}. To show that we have only one bifurcation as we vary L^2, we first show that L^2 has positive curvature. We do this by showing that (L^2)^''(R) is always positive. Multiplying (L^2)^''(R) by the positive expression h(R):=(R+x_11ℓ_1+x_22ℓ_2)^3, we get: h(R)(L^2)^''(R)= -0.7cm0cm 6(R+x_11ℓ_1+x_22ℓ_2)^2(x_12x_22/(R+ℓ_2)^4+x_12x_21/R^4+x_11x_21/(R+ℓ_1)^4+x_11x_22/(R+1)^4)((R+x_11ℓ_1+x_22ℓ_2)^2+x_11x_12/M_2ℓ_1^2+x_21x_22/M_1ℓ_2^2)^2 -16(R+x_11ℓ_1+x_22ℓ_2)^3(x_12x_22/(R+ℓ_2)^3+x_12x_21/R^3+x_11x_21/(R+ℓ_1)^3+x_11x_22/(R+1)^3)((R+x_11ℓ_1+x_22ℓ_2)^2+x_11x_12/M_2ℓ_1^2+x_21x_22/M_1ℓ_2^2) +4( R+x_11ℓ_1+x_22ℓ_2)(x_12x_22/(R+ℓ_2)^3+x_12x_21/R^3+x_11x_21/(R+ℓ_1)^3+x_11x_22/(R+1)^3)((R+x_11ℓ_1+x_22ℓ_2)^2+x_11x_12/M_2ℓ_1^2+x_21x_22/M_1ℓ_2^2)^2 +2(x_12x_22/(R+ℓ_2)^2+x_12x_21/R^2+x_11x_21/(R+ℓ_1)^2+x_11x_22/(R+1)^2)(( R+x_11ℓ_1+x_22ℓ_2)^2+x_11x_12/M_2ℓ_1^2+x_21x_22/M_1ℓ_2^2)^2 -4(R+x_11ℓ_1+x_22ℓ_2)^2(x_12x_22/(R+ℓ_2)^2+x_12x_21/R^2+x_11x_21/(R+ℓ_1)^2+x_11x_22/(R+1)^2)((R+x_11ℓ_1+x_22ℓ_2)^2+x_11x_12/M_2ℓ_1^2+x_21x_22/M_1ℓ_2^2) +8(R+x_11ℓ_1+x_22ℓ_2)^4(x_12x_22/(R+ℓ_2)^2+x_12x_21/R^2+x_11x_21/(R+ℓ_1)^2+x_11x_22/(R+1)^2). To simplify our analysis, we make the following changes of parameter: x_11→u_1/1+u_1,x_12→1/1+u_1,x_21→u_2/1+u_2,x_22→1/1+u_2, M_1→m/1+m,M_2→1/1+m, ℓ_1→ℓ/1+ℓ, and ℓ_2→1/1+ℓ. Note that we still have x_i1+x_i2=M_1+ M_2=ℓ_1+ℓ_2=1, but now we have characterized these 8 parameters as only 4 parameters 0<u_i,m,ℓ<∞. Upon substitution into (<ref>), the resulting expanded expression has 37,144 terms. However, since all of our parameters are defined to be positive, and the terms are all added together, the result is positive. Therefore the graph is concave up, giving at most one bifurcation. We calculate that (L^2)^ '(R)=0 when: 0 =4(R+ℓ_1x_11+ℓ_2x_22)^2((R+ℓ_1x_11+ℓ_2x_22)^2+x_11x_12/M_2ℓ_1^2+x_21x_22/M_1ℓ_2^2)(x_12x_21/R^2+x_11x_22/(R+1)^2+x_11x_21/(R+ℓ_1)^2+x_12x_22/(R+ℓ_2)^2) -2(R+ℓ_1x_11+ℓ_2x_22)((R+ℓ_1x_11+ℓ_2x_22)^2+x_11x_12/M_2ℓ_1^2+x_21x_22/M_1ℓ_2^2)^2(x_12x_21/R^3+x_11x_22/(R+1)^3+x_11x_21/(R+ℓ_1)^3+x_12x_22/(R+ℓ_2)^3) -((R+ℓ_1x_11+ℓ_2x_22)^2+x_11x_12/M_2ℓ_1^2+x_21x_22/M_1ℓ_2^2)^2(x_12x_21/R^2+x_11x_22/(R+1)^2+x_11x_21/(R+ℓ_1)^2+x_12x_22/(R+ℓ_2)^2). We note that (on R>0) this expression is continuous and takes negative (as R→ 0 for instance), and positive (as R→∞) values. Therefore, by the intermediate value theorem (L^2)^ '(R) has a zero. So L^2 must have a minimum, giving us a bifurcation of the number of RE as L^2 is varied. As noted above, L^2 is always positive, so this minimum is positive, giving these RE physical relevance (real angular momenta). Now that we know the location of the colinear RE, let us determine their stability. Energetic Stability of Colinear RE of the colinear, non-overlapped two-dumbbell problem are stable when ∂_rL^2>0. As we saw in Section <ref>, to determine energetic stability we will need to check if the RE are strict minima of the amended potential V. We find that since θ_1=θ_2=0, we have ∂_R,θ_1V=∂_R,θ_2V=∂_θ_1,RV=∂_θ_2,RV=0. So our Hessian (<ref>) becomes block diagonal: H=[ ∂^2_RV 0 0; 0 ∂^2_θ_1V ∂^2_θ_1θ_2V; 0 ∂^2_θ_2,θ_1V ∂^2_θ_2V ]. 
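The radial block is the simpler one: by the recharacterization above, its sign is the sign of the slope of the L^2 graph. A minimal numerical sketch, with arbitrary example parameters, evaluates the non-overlap L^2(R) and locates its single interior minimum (R_b, L_b^2); to the right of that minimum the slope, and hence ∂_R^2V at the corresponding RE, is positive.
\begin{verbatim}
# Evaluate the colinear, non-overlap L^2(R) for example parameters and locate
# its single interior minimum; the parameter values are arbitrary examples.
import numpy as np
from scipy.optimize import minimize_scalar

x11, x21, l1, M1 = 0.4, 0.3, 0.6, 0.5
x12, x22, l2, M2 = 1 - x11, 1 - x21, 1 - l1, 1 - M1
B1, B2 = x11*x12*l1**2/M2, x21*x22*l2**2/M1
c = x11*l1 + x22*l2                      # r = R + c in the non-overlap region

def L2(R):
    S = (x11*x21/(R + l1)**2 + x12*x22/(R + l2)**2
         + x11*x22/(R + 1.0)**2 + x12*x21/R**2)
    return S*(B1 + B2 + (R + c)**2)**2/(R + c)

opt = minimize_scalar(L2, bounds=(1e-6, 50.0), method="bounded")
print(f"R_b ~ {opt.x:.4f}   r_b ~ {opt.x + c:.4f}   L_b^2 ~ {opt.fun:.4f}")
\end{verbatim}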
Therefore, stability of our RE will be determined by the sign of ∂^2_RV (which we learned from (<ref>) was the sign of the slope of our L^2 graphs) and the sub-Hessian H_s=[ ∂_θ_1,θ_1V ∂_θ_1,θ_2V; ∂_θ_2,θ_1V ∂_θ_2,θ_2V ]. Determining whether this matrix is positive definite is nontrivial due to the complexity of the component expressions. In an effort to simplify things, we make the same substitutions (<ref>) as we did with (<ref>). Then, the diagonal components as well as the determinant are long expressions of u_1,u_2,m, and ℓ. And since the parameters are defined to be positive, and all of the terms of these expressions are added together, we find the expressions to be positive, the sub-Hessian to be positive definite, and the RE to be stable when ∂_rL^2>0. If you note that the colinear two-dumbbell problem becomes the colinear dumbbell/point mass problem as ℓ_1 → 0, we should expect the stability results for the dumbbell/point mass problem to be consistent with this limit of the two-dumbbell problem. Figure <ref>b suggests stability converging on the dumbbell/point mass stability results shown in Figure <ref>. Linear Stability of Colinear Mapping the linear stability criteria (<ref>) in the Rx_21-plane for the colinear configuration with no overlap, we find graphs: In order to visually cover the entire R range, note that we have used R=z/2-z as the horizontal axis in our graphs with z∈(0,2). The dashed curve going through the plane is ∂_r L^2=0. So we see that the linear stability boundary coincides with the energetic stability boundary calculated in Section <ref>. Figure <ref>a is the graph for our equal mass case, and we see that for each x_21, there is some radius R≤ 1 below which RE are unstable (as well as linearly unstable), and above which RE are stable (consistent with Theorem <ref>). Figure <ref>b has parameters ℓ_1=0.0001 and x_11=0.9999, approaching the colinear dumbbell/point mass problem. Observe that the graph is indistinguishable from Figure <ref> in Section <ref>. We chose M_1=1/2 for these graphs, but qualitatively the shapes of these graphs do not change as you vary M_1. The main difference is that the radius where stability begins is smaller for M_1 small, and larger for M_1 large. Now let us look at another symmetric configuration, where the bodies are perpendicular. §.§ Perpendicular Configuration: (θ_1,θ_2) =(π/2,0) We examine (θ_1,θ_2) =(π/2,0), but similar results are found for (θ_1,θ_2) =(0,π/2). For this perpendicular configuration, the distances between our masses (<ref>) become: d_11=√(r^2-2x_22ℓ_2r+x_12^2ℓ_1^2+x_22^2ℓ_2^2),d_12=√(r^2+2x_21ℓ_2r+x_12^2ℓ_1^2+x_21^2ℓ_2^2), d_21=√(r^2-2x_22ℓ_2r+x_11^2ℓ_1^2+x_22^2ℓ_2^2),d_22=√(r^2+2x_21ℓ_2r+x_11^2ℓ_1^2+x_21^2ℓ_2^2). And our angular requirements (<ref>) become: fleqntrue 0=x_21(x_22ℓ_2-r)(1/d_11^3-1/d_21^3)+x_22(x_21ℓ_2+r)(1/d_22^3-1/d_12^3), 0=x_11x_12ℓ_1(1/d_12^3-1/d_11^3+1/d_21^3-1/d_22^3). Unlike the colinear configuration, simply being perpendicular is insufficient to guarantee a RE. Rather, the following theorem gives further restrictions on the shape and mass values. For the perpendicular (θ_1,θ_2) =(π/2,0) configuration of the two-dumbbell problem, there is one family of isosceles RE (d_11=d_21 and d_12=d_22) where the masses on the vertical body are equal. A simple calculation reveals that the requirement (<ref>), for nontrivial dumbbells, reduces to: 1/d_12^3+1/d_21^3=1/d_22^3+1/d_11^3. Let us examine when (<ref>) is satisfied as we set various distances equal. 
Rhombus When all of distances are equal, the dumbbells form a rhombus and (<ref>) and (<ref>) are satisfied. Observe that d_11=d_21 implies x_11=x_12=1/2. Also, d_11=d_12, implies: r=x_22-x_21/2ℓ_2 This also requires that r<ℓ_2/2 (otherwise (<ref>) implies x_21≤0). x_21 must also be less than 1/2, otherwise the equation implies negative radius. Therefore r<ℓ_2/2<ℓ_2 x_22 (since x_21<1/2 x_22>1/2), so the radius is in the overlap region. From (<ref>) you can choose various x_2j masses, and the solution adjusts the radius and angular momentum necessary to maintain the rhombus shape. Isosceles Another way we can set distances equal to each other is to satisfy the requirement (<ref>) with d_11=d_21 and d_22=d_12 (which also implies x_11=x_12=1/2). Observe that these also satisfy (<ref>) giving isosceles triangles for any radius. Of course, the rhombus RE also satisfy this requirement, and are therefore a subset of the isosceles RE. Rhombus Again The last way we could satisfy (<ref>) with equal distances is in the case when d_11=d_12 and d_22=d_21. However, upon substituting these into (<ref>), we find we must also have d_11=d_21, which gives us the rhombus configuration again. Unequal distances The last situation to examine is when all of the distances are different, but still somehow manage to satisfy (<ref>). Rearranging (<ref>), we have: 0=x_21ℓ_2(1/d_11^3-1/d_21^3+1/d_22^3-1/d_12^3) + r(1/d_22^3-1/d_12^3-1/d_11^3 +1/d_21^3). Notice that if we assume (<ref>) is satisfied, this allows us to eliminate the first term in (<ref>). And since our radius is greater than zero, (<ref>) becomes: 1/d_22^3+1/d_21^3=1/d_11^3 +1/d_12^3. Solving for 1/d_22^3 in (<ref>) and substituting into (<ref>) reduces to 1/d_11^3=1/d_21^3, which implies d_11=d_21. But we assumed the distances were all different, so we have a contradiction. Therefore, the only possible RE for the perpendicular configuration is the isosceles family (with a subset of rhombus RE). We restrict our analysis to the area of most interest, the non-overlap radii. §.§.§ Perpendicular - Isosceles Substituting the isosceles mass and distance restrictions (d_11=d_21, d_22=d_12, and x_11=1/2) into our radial requirement (<ref>), we find the angular momentum: L^2=(r^2+B_1+B_2)^2/r( x_21(r-x_22ℓ_2)+x_21(r-x_22ℓ_2)/d_11^3+x_22(r+x_21ℓ_2)+x_22(r+x_21ℓ_2)/d_12^3) =2(r^2+B_1+B_2)^2/r(x_21r-x_22ℓ_2/d_11^3+x_22r+x_21ℓ_2/d_12^3) . Let us explore the shape of the curve to see how the RE bifurcate. Note that overlap of the dumbbell masses occurs when r<ℓ_2x_22. So we make the substitution r→ R+ℓ_2x_22, and our distances become: d_11=d_21=√(R^2+1/4ℓ_1^2), and d_12=d_22=√((R+ℓ_2)^2+1/4ℓ_1^2). Our angular momentum becomes: L^2=2((R+ℓ_2x_22)^2+B_1+B_2)^2/R+ℓ_2x_22(x_21R/d_11^3+x_22R+ℓ_2/d_12^3), where we see that L^2>0, and therefore the angular momentum is real for the non-overlap region. As R→ 0 we have: L^2→2(x_22^2ℓ_2^2+x_21x_22/M_1ℓ_2^2 +1/4M_2ℓ_1^2)^2/√(ℓ_2^2+1/4ℓ_1^2)^3, and as R→∞ we have L^2→∞. However, the L^2 bifurcation curve is qualitatively different depending upon (ℓ_1,M_1,x_21). Below, we graph three of the most common L^2 shapes. As bifurcation parameter L^2 increases, the number of RE found in the graphs above are A: (0→1→3→1), B: (0→1), and C: (0→2→1). And while these account for the vast majority of shapes, we have also found (0→2→4→2→1), (0→2→1→3→1), and (0→2→4→3→1). So no simple relationship between the shapes and parameter values was found. 
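The classification can at least be probed numerically. The sketch below samples the isosceles L^2(R) for a few example triples (ℓ_1, M_1, x_21) and counts interior critical points on a grid, which is what separates the qualitatively different diagrams; the triples are arbitrary choices and the counts are only indicative.
\begin{verbatim}
# Sample the isosceles L^2(R) and count interior critical points on a grid.
# The parameter triples are arbitrary examples and the count is only indicative.
import numpy as np

def L2(R, l1, M1, x21):
    l2, M2, x22 = 1 - l1, 1 - M1, 1 - x21
    B1, B2 = 0.25*l1**2/M2, x21*x22*l2**2/M1      # x11 = x12 = 1/2
    d11 = np.sqrt(R**2 + 0.25*l1**2)
    d12 = np.sqrt((R + l2)**2 + 0.25*l1**2)
    return (2*((R + l2*x22)**2 + B1 + B2)**2/(R + l2*x22)
            *(x21*R/d11**3 + x22*(R + l2)/d12**3))

R = np.linspace(1e-3, 20.0, 20001)
for l1, M1, x21 in [(0.75, 0.50, 0.5), (0.75, 0.90, 0.5), (0.25, 0.95, 0.2)]:
    slope_sign = np.sign(np.diff(L2(R, l1, M1, x21)))
    n_crit = int(np.sum(np.diff(slope_sign) != 0))
    print(f"l1={l1}, M1={M1}, x21={x21}: about {n_crit} interior critical points")
\end{verbatim}
Across such scans the number of critical points shifts with the parameters without any evident pattern, consistent with the remark above.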
However, for all these curves, we start out with no RE for low angular momenta, and a single RE for sufficiently large angular momenta. Energetic Stability of Isosceles As we saw in Section <ref>, to determine energetic stability we check if the RE are strict minima of the amended potential V. For (θ_1,θ_2)=(π/2,0) and x_11=x_12=1/2, the resulting Hessian is block diagonal: H=[ ∂_r^2V 0 0; 0 ∂_θ_1^2V ∂_θ_1,θ_2^2V; 0 ∂_θ_2,θ_1^2V ∂_θ_2^2V ]. Recall from (<ref>) that ∂_r^2V is positive when the slope of our L^2 graph is positive. However, we also note by inspection that ∂_θ_1^2V is strictly negative: ∂_θ_1^2V=-24ℓ_1^2(x_21(r-x_22ℓ_2)^2/(4(r-x_22ℓ_2)^2+ℓ_1^2)^5/2+x_22(r+x_21ℓ_2)^2/(4(r+x_21ℓ_2)^2+ℓ_1^2)^5/2). Therefore, whenever the determinant of the first minor of H is positive, the determinant of the second minor is negative. So, we have no strict minima or stability. Observe that we also lacked stability in <ref> for the isosceles dumbbell/point mass problem. Note that the isosceles two-dumbbell problem above approaches the equal mass isosceles dumbbell/point mass problem as ℓ_2 → 0, so it's not surprising that in the limit we should also find no stability. Linear Stability of Isosceles Despite the lack of energetic stability, when we map the linear stability criteria (<ref>) in the rM_1-plane, we see linear stability for large M_1 on some non-overlapping radial intervals (with starting radius r_s>ℓ_2x_22). We observe that varying x_21 does not significantly affect the location of this region. As seen in the figures below, the region starts at higher M_1 when ℓ_1 low and the radial intervals cease at the dashed line ∂_r L^2=0. Similar graphs for (θ_1,θ_2) =(0,π/2) exist with linear stability for small M_1. So for a sufficiently massive vertical body, a radially oriented satellite can find linear stability. R0.15 justification=centering 0pt[-1.6] < g r a p h i c s > Equal Mass Configuration §.§ (Pairwise) Equal Mass Configuration: x_11=x_12 and x_21=x_22 Another obvious rotational symmetry is when the dumbbells are parallel, a trapezoid configuration. Applying θ_1=θ_2=π/2 to angular requirements (<ref>): 0 = -x_21r(1/d_11^3-1/d_21^3)+x_22r(1/d_22^3-1/d_12^3), and 0 = x_11r(1/d_11^3-1/d_12^3)+x_12(1/d_21^3-1/d_22^3). Finding all possible solutions to the system is nontrivial. However, working with the system one finds d_11=d_22 and d_12=d_21 to be the most straightforward conditions solving it. These conditions further imply either the relationship ℓ_1/ℓ_2 =x_12-x_11/x_22-x_21 between the parameters, or pairwise equal masses. Pairwise equal masses means the two dumbbells may have different masses (M_1 need not equal M_2), but the mass ratios on each dumbbell are equal: x_i1=x_i2=1/2. For this paper, we will focus on this (pairwise) equal mass configuration. In fact, this section will focus more generally on it (not assuming a trapezoid). This will allow us to not only study the equal mass trapezoid configuration, but it provides a larger context in which to study asymmetric RE which bifurcate from the symmetric configurations we've studied so far. Substituting equal masses into our angular requirements (<ref>), we find: 0=(ℓ_2sin(θ_1-θ_2) -2rsinθ_1)(1/d_11^3-1/d_21^3)+(ℓ_2sin(θ_1-θ_2)+2rsinθ_1)(1/d_22^3-1/d_12^3), and 0=(ℓ_1sin(θ_1-θ_2) -2rsinθ_2)(1/d_11^3-1/d_12^3)-(ℓ_1sin(θ_1-θ_2)+2rsinθ_2)(1/d_21^3-1/d_22^3). 
Or equivalently: 0=2rsinθ_1(d_11^3d_12^3(d_21^3+d_22^3)-d_21^3d_22^3(d_11^3+d_12^3)) +ℓ_2sin(θ_1-θ_2)(d_11^3d_12^3(d_21^3-d_22^3)-d_21^3d_22^3(d_11^3-d_12^3)), and 0=2rsinθ_2(d_11^3d_12^3(d_22^3-d_21^3)-d_21^3d_22^3(d_11^3-d_12^3)) +ℓ_1sin(θ_1-θ_2)(d_11^3d_12^3(d_21^3-d_22^3)-d_21^3d_22^3(d_11^3-d_12^3)), where d_uv=√(r^2-(-1)^uℓ_1rcosθ_1+(-1)^vℓ_2rcosθ_2-(-1)^u+v1/2ℓ_1ℓ_2cos(θ_1-θ_2)+1/4(ℓ_1^2+ℓ_2^2)). Our radial requirement becomes: L^2=(r^2+B_1+B_2)^2/8r(2r+ℓ_1cosθ_1-ℓ_2cosθ_2/d_11^3+2r+ℓ_1cosθ_1+ℓ_2cosθ_2/d_12^3) +(r^2+B_1+B_2)^2/8r(2r-ℓ_1cosθ_1-ℓ_2cosθ_2/d_21^3+2r-ℓ_1cosθ_1+ℓ_2cosθ_2/d_22^3), where B_i:=ℓ_i^2/4M_j and n i. So let us find some solutions for the angular requirements by looking at symmetric configurations and taking advantage of their accompanying simplifications. We saw earlier that the colinear angular requirements are satisfied for any choice of masses, and the perpendicular angular requirements are met when the vertical masses are equal, so these configurations' requirements are certainly satisfied in the equal mass configuration. In an attempt to locate even more RE, we set the body rotation angles equal to each other, but not equal to zero. §.§.§ Case: θ_1 =θ_2 0 0.5cm0cm The distances (<ref>) become: d_11=√(r^2+(ℓ_1-ℓ_2)rcosθ_1+1/4(ℓ_1-ℓ_2)^2), d_12=√(r^2+rcosθ_1+1/4), d_21=√(r^2-rcosθ_1+1/4), and d_22=√(r^2-(ℓ_1-ℓ_2)rcosθ_1+1/4(ℓ_1-ℓ_2)^2). Our angular requirements (<ref>) become: 0=d_11^3d_12^3(d_22^3+d_21^3)-d_21^3d_22^3(d_11^3+d_12^3), and 0=d_11^3d_12^3(d_22^3-d_21^3)-d_21^3d_22^3(d_11^3-d_12^3). We find two obvious conditions satisfying these equations. The first is when d_11=d_22 and d_12=d_21. Examining the distance formulas, we see this requires θ_1=θ_2=π/2, and we recapture the trapezoid configuration. The second condition is when d_22=d_12 and d_11=d_21, or d_22=d_21 and d_11=d_12. Setting the distances equal (from (<ref>)) requires: 2ℓ_1rcosθ_1=-1/4+1/4(ℓ_1-ℓ_2)^2 and 2ℓ_1rcosθ_1=+1/4-1/4(ℓ_1-ℓ_2)^2. Setting the right hand sides of these equations equal gives us 1=(ℓ_1-ℓ_2)^2, which requires a trivial dumbbell ℓ_1∈{0,1}. Similarly, requiring d_11=d_12 and d_22=d_21 (left pointing isosceles triangles) also requires ℓ_2∈{0,1}. But these imply the dumbbell/point mass problem which we examined in Chapter 4. So the RE found through these simplifications are symmetric configurations (colinear, perpendicular, trapezoid). But are there any asymmetric RE? §.§.§ Pitchfork Bifurcations In an attempt to find asymmetric solutions not found above, we perform a bifurcation analysis of the symmetric solutions (colinear, perpendicular, trapezoid), using radius as our bifurcation parameter. We hope to locate asymmetrical solutions bifurcating from the symmetric ones. Encouragingly, when we plot the solution curves to the equal mass angular requirements using Mathematica, we see what appears to be pitchfork bifurcations coming from our symmetric RE. Below, we determine quadratic approximations of these bifurcation curves. Consider a real system f(z⃗;r) ,g(z⃗;r) =0 of two equations in two variables z⃗:=(z_1,z_2) and depending upon a bifurcation parameter r. Our goal is to find a local approximation of solutions to the system near a pitchfork bifurcation. To that end, let us first obtain some properties of this type of pitchfork. Assuming our bifurcation occurs at z⃗=0⃗, in order to be considered a pitchfork, f,g must be odd functions of z⃗. In particular, we would then have 0⃗ as a solution for all r. 
Therefore, it is possible to write the equations as f=z_1f_1+z_2f_2 and g=z_1g_1+z_2g_2. From these, we can write the Jacobian D(f,g)(with respect to z⃗) at the origin as: D(f,g) _(0⃗;r)=[ f_1 f_2; g_1 g_2 ]|_(0⃗;r), with eigenvalues μ_1(r), μ_2(r). Since we are presupposing a codimension-1 bifurcation, without loss of generality, assume at r=0 we have μ _1(0) =0, μ_2(0) =:μ≠ 0. Also, to be a pitchfork, we have dμ _1/dr=:k≠ 0, where the zero eigenvalue is crossing the imaginary axis (transversality). By performing a Jordan normal form decomposition, we can find a linear change of coordinates Pz⃗=:u⃗. From this we define: { f(u⃗;r),g(u⃗;r)} :=P^-1{f(P^-1u⃗;r) ,g(P^-1u⃗;r) }. Written this way, f,g are odd in (u_1,u_2), and can be expressed as: fleqntrue f=u_1f_1+u_2f_2, and g=u_1g_1+u_2g_2. Making this change brings the benefit that at (u⃗;r) =(0⃗;0) we have D(f,g)_(0⃗;0)=[ f_1 f_2; g_1 g_2 ]|_(0⃗;0)=[ 0 0; 0 μ ]. Also, f_1r=dμ _1/dr=k; where f_1r denotes the partial derivative of f_1 with respect to r. Note that the functions f_i,g_i are even functions of u⃗ (since f,g are odd functions of u_1,u_2). So at u⃗=(0,0) and for all r, their partial derivatives vanish: f_iu_1=f_iu_2=g_iu_1=g_iu_2=0. We will use this in a Taylor expansion below. We will now show that under the assumption that f_1u_1u_1(0⃗;0) =:l≠ 0, we can find a curve of solutions of the form: u_2=α(u_1) and r=β(u_1) near u_1=0, with: α(0)=β(0)=α^'(0)=β^'(0)=α ^''(0)=0, and β ^''(0)≠ 0. We wish to continue our bifurcation point of (<ref>) into a curve of solutions. So the Implicit Function Theorem (IFT) will be helpful, but first we need to do a change of variable to obtain a nonzero Jacobian determinant. We will take advantage of the nonzero values g_2=μ and f_1r=k. Observe that we obtain g_2 from (<ref>) if we take a derivative with respect to u_2, and (if we can first get rid of the u_1 coefficient) we can obtain f_1r from (<ref>) upon taking a derivative with respect to r. Therefore, let us make the change of variable: u_2=u_1z. Upon substituting (<ref>) into (<ref>), we then define (after canceling a factor of u_1): F(u_1,z;r):=f/u_1=f_1(u_1,u_1z;r)+zf_2(u_1,u_1z;r) =0, G(u_1,z;r):=g/u_1=g_1(u_1,u_1z;r)+zg_2(u_1,u_1z;r) =0. The Jacobian ∂ (F,G)/(z,r) evaluated at our bifurcation point is: [ f_2 f_1r; g_2 g_1r ]|_(0;0)=[ 0 k; μ g_1r ]. And since μ ,k are nonzero, the determinant is nonzero, and IFT guarantees solutions of the form: z=γ (u_1), r=β (u_1), with γ (0)=β (0)=0. Recapturing u_2 from z, we find from (<ref>),(<ref>) that u_2=u_1γ(u_1)=:α (u_1), with α (0)=α ^'(0)=0. We then discover our curve by calculating the derivatives of α(u_1), β (u_1) using implicit differentiation on the equations: F(u_1,u_1γ (u_1),β (u_1)) =f_1(u_1,u_1γ (u_1),β (u_1))+γ (u_1)f_2(u_1,u_1γ (u_1),β (u_1)) =0, G(u_1,u_1γ (u_1),β (u_1)) =g_1(u_1,u_1γ (u_1),β (u_1))+γ (u_1)g_2(u_1,u_1γ (u_1),β (u_1)) =0. The first derivatives are: f_1u_1+(u_1γ^'+γ)f_1u_2+β^'f_1r+γ^'f_2+γ(f_2u_1+(u_1γ^'+γ)f_2u_2+f_2r) =0, g_1u_1+(u_1γ^'+γ)g_1u_2+β^'g_1r+γ^'g_2+γ(g_2u_1+(u_1γ^'+γ)g_2u_1+g_2r) =0, where the arguments have been suppressed. At (u⃗;r) =(0⃗;0) we get: kβ ^'(0)=g_1rβ ^'(0)+μγ^'(0)=0. So β ^'(0)=γ ^'(0)=0. Since γ ^'(0)=0, then from (<ref>) we find α ^''(0)=(γ +u_1γ ^')^'|_0=(γ ^'+γ ^'+u_1γ ^'')|_0=0. Differentiating the first equation of (<ref>), and ignoring terms which we determined above vanish at (0⃗;0) gives: f_1u_1u_1+f_1rβ ^''(0)=l+kβ^''(0)=0 ⇒ β ^''(0)=-l/k. 
So if f_1u_1u_1(0⃗;0) =l≠ 0, we can find our curve of solutions up to second order as: fleqntrue u_2=0+𝒪(u_1^3), and r= β^''(0)/2!u_1^2+𝒪(u_1^3)=-l/2ku_1^2+𝒪(u_1^3). Next, we show that if P=P^-1, then at the bifurcation point, l=f_1u_1u_1 in (<ref>) is equal to 1/3f_u_1u_1u_1. We point this out since, in practice, it is easier to calculate f_u_1u_1u_1 than to calculate f_1u_1u_1 directly. Recall: { f(u⃗;r),g(u⃗;r)} :=P^-1{f(P^-1u⃗;r) ,g( P^-1u⃗;r) }, where f=z_1f_1+z_2f_2 and g=z_1g_1+z_2g_2. Then assuming: P=P^-1= [ P_11 P_12; P_12 -P_11 ], we have: f=u_1(P_11(P_11f_1+P_12f_2)+P_12(P_11g_1+P_12g_2)) +u_2(P_11(P_12f_1-P_11f_2)+P_12(P_12g_1-P_11g_2)) =:u_1f_1+u_2f_2. Note that ∂_u_1^3f |_u_2=0=∂ _u_1^2(f_1+u_1f_1u_1) =∂_u_1(2f_1u_1+u_1f_1u_1u_1) =3f_1u_1u_1+u_1f_1u_1u_1u_1. So, we do indeed find that f_u_1u_1u_1 |_u⃗=0=3f_1u_1u_1. Now let us apply these bifurcation results to our equal mass configuration. Bifurcation Analysis for Equal Mass Configuration When looking for RE numerically, depending on ℓ_1, there are several 1D families of RE curving through the equal mass configuration space (r,θ_1,θ_2). In particular, there are symmetric families which consist of the colinear configuration ℛ_C where (θ_1,θ_2) =(0,0), the perpendicular configurations ℛ_P_1,ℛ_P_2 where (θ_1,θ_2) ∈{(π/2,0) ,(0,π/2) }, and the trapezoid configuration ℛ_T where (θ_1,θ_2) =(π/2,π/2). Their existence is independent of r,ℓ_1. We also find asymmetric solutions which bifurcate from these symmetric ones. After setting r, the configuration space consists of a (θ_1,θ_2) torus. If we graph the angular RE requirements (V_θ_i=0), the resulting RE appear at their intersections. In Figure <ref> you see an example of such a graph (r=1000, ℓ_1=3/4). Observe that ℛ_T is at the center of the graph, and ℛ_C,ℛ_P_1,ℛ_P_2 are on the boundary. Note that the edges are identified (this is a torus), so in total we have only four RE visualized here (not nine). Bifurcation curves for ℓ_1≠1/2. We now describe bifurcations shown in Figures <ref>,...,<ref>. We include hatched regions in the figures to be used later when examining stability. * As r increases from 0, we see ℛ_C bifurcating (Figure <ref>a) at radius r_1 (whose value depends upon ℓ_1) into three RE, including two new RE branches which we label ℬ_CC^± (a bifurcation from colinear, later merging back with colinear) where ℬ_CC^+ has θ_1 increasing, and ℬ_CC^- has θ_1 decreasing. Then at some r_8 (Figure <ref>), ℬ_CC^± merge back with ℛ_C. * At some r_2 (Figure <ref>), ℛ_T bifurcates into three RE, including two new RE branches which we label ℬ_TP^± (bifurcation going from trapezoid to perpendicular). Then at some r_7 (Figure <ref>), ℬ_TP^± merge with ℛ_P_1. * At some r_3 (Figure <ref>), ℛ_P_2 bifurcates into three RE, including two new RE branches which we label ℬ_PC^± (going from perpendicular to colinear). Then at some r_5 (Figure <ref>), ℬ_PC^± merge with ℛ_C. * At some r_4 (Figure <ref>), ℛ_C bifurcates again into two RE which we label ℬ_CP^± (going from colinear to perpendicular). Then at some r_6 (Figure <ref>), ℬ_CP^± merge with ℛ_P_2. To get an idea of the long-term behavior, we graph r=1000: Bifurcation curves for ℓ_1=1/2. We will now describe the bifurcations shown in Figures <ref>,...,<ref> below. For this case when the lengths of the bodies are equal, observe that r=0 and θ_1=θ_2±1/2nπ with n∈ℤ trivially satisfy the angular requirements V_θ_i=0 (<ref>). Therefore, they represent a family ℛ_0 of RE at r=0. 
For small r, the following RE curves can be seen bifurcating from points within ℛ_0 (Figure <ref>), and later merge with the symmetric families at some radii. * Two curves which we denote ℬ_C^±, bifurcate from (θ_1,θ_2)∈{(π/4,3π/4),(3π/4,π/4)}, and later (r_1) merge with ℛ_C (Figure <ref>). * Two curves which we denote ℬ_LP^±, bifurcate from θ_1,θ_2= ^-1√(2) (at collision), and later (r_2) merge with ℛ_P_1/2 (Figure <ref>). And similarly two curves which we label ℬ_RP^±, bifurcate from θ_1,θ_2=cos^-1(-√(2/3)) (at collision), and later (r_2) merge with ℛ_P_1/2 (Figure <ref>). * Two curves which we denote ℬ_T^±, bifurcate from the trapezoid RE θ_1,θ_2=π/2 (at collision), and later (r_3) merge with ℛ_C (Figure <ref>). Below we graph a 2D slice of the configuration space at r=0.01 that reveals these RE families. You'll note the three RE {ℛ_T,ℬ_T^+(r),ℬ_T^-(r)} tightly grouped near (π/2,π/2). However, the RE ℬ_LP^± and ℬ_RP^± are so close together (near (^-1√(2),^-1√(2))) it is difficult to distinguish them in Figure <ref>, so we have included Figure <ref> zoomed in on θ_1,2=^-1√(2)≈ 0.6155, where one is able to distinguish between ℬ_LP^+ and ℬ_LP^-. A similar zoomed in graph exists for ℬ_RP^±. To get an idea of the long-term behavior, we graph r=1000 below: When looking at the numerically found curves through our configuration space, we calculate that the angular momentum is nonphysical (either infinite or complex) for ℬ_CC, ℬ_PC while ℓ≠1/2; and for ℬ_C, ℬ_T while ℓ_1=1/2. Also, the angular momentum becomes unbounded for ℬ_LP/RP as the curves approach their collision branching points at r=0. As a result, we will focus our attention on the bifurcation points located where the angular momentum is physically relevant, namely {ℬ_TP(r_2) ,ℬ_TP(r_7) ,ℬ_CP(r_4) ,ℬ_CP(r_6)} for ℓ≠1/2 and ℬ_LP/RP(r_2) for ℓ =1/2, with r>0. Let us perform the bifurcation analysis described above for these curves to confirm their existence. First, we will do a change of variables (θ_1,θ_2)→(θ_1+θ_1^∗,θ_2+θ_2^∗) so that our bifurcation points are at (θ_1,θ_2) =(0,0). For (θ_1^∗,θ_2^∗) ∈{(0,0) ,(0,π/2) ,(π/2,0) ,(π/2,π/2) }, it is easily confirmed that (f,g)(θ_1+θ_1^∗ ,θ_2+θ_2^∗,r) is odd in (θ_1,θ_2). Next, we will apply our bifurcation analysis to ℓ_1=3/4 for ℬ_TP. This is the curve which bifurcates from the trapezoid configuration ℛ_T at r_2, then the dumbbell bodies rotate until the curve merges with the perpendicular configuration ℛ_P at r_7. ℬ_TP(r_7) Pitchfork for ℓ_1=3/4 When we are at (θ_1^∗,θ_2^∗) =(π/2,0), our system (<ref>) becomes: f(θ⃗;r) :=∂_θ_1V(θ⃗,r) =1/2(1/8sin(θ_1-θ_2)-rsinθ_1)(1/d_11^3-1/d_21^3) +1/2(1/8sin(θ_1-θ_2)+rsinθ_1)(1/d_22^3-1/d_12^3), and g(θ⃗;r) :=∂_θ_2V(θ⃗,r) =1/2(3/8sin(θ_1-θ_2) -rsinθ_2)(1/d_12^3-1/d_11^3) -1/2(3/8sin(θ_1-θ_2)+rsinθ_2)(1/d_21^3-1/d_22^3), where d_wv(θ⃗;r) :=√(r^2-(-1)^w3/4rcosθ_1+(-1)^v1/4rcosθ_2-(-1)^w+v3/32cos(θ_1-θ_2)+5/32). For θ⃗=0⃗, we locate | D(f,g) _(0⃗;r)| =0 for r^∗≈ 0.3893. We then calculate the eigenvalues (μ _1, μ _2) of | D(f,g)_(0⃗;0)|, the eigenvectors, and the transition matrix P= [ P_11 P_12; P_12 -P_11 ] to the Jordan normal form, where (P_11,P_12)≈( 0.1669, -0.9860). This allows us to change variables to align our axes with the tangent plane to the RE curves. Observe that | P| =1, such that P=P^-1. So, let u⃗=Pθ⃗, or θ_1=P_11u_1+P_12u_2 and θ_2=P_12u_1-P_11u_2. Our new functions become: (f(u⃗),g(u⃗)) :=P^-1(f(P^-1u⃗;r) ,g(P^-1u⃗;r)). 
If we then define: [ f_1 f_2; g_1 g_2 ]:=D(f,g) _u⃗, we find that at the bifurcation point we have: [ f_1 f_2; g_1 g_2 ]|_(0⃗;0)=[ 0 0; 0 -1.295 ]. In other words: μ _1(0) =0, μ _2(0) =:μ≈ -1.295≠ 0, k:=dμ _1/dr=f_1r|_(0⃗;0)≈ 0.9213≠ 0, and l:=1/3f_ u_1 u_1 u_1(0;0)≈ 0.05897. And using our conclusions (<ref>) above, we have: fleqntrue u_2=0+𝒪(u_1^3) , r=-l/2ku_1^2+𝒪(u_1^3)≈- 0.05897/2( 0.9213)u_1^2+𝒪(u_1^3)≈ -0.03200 u_1^2+𝒪(u_1^3). To compare (<ref>) to our numerical results, we will reverse our previous change of coordinates. In general we have: r→ r-r^∗, u_1→ P_11(θ_1-θ_1^∗)+P_12(θ_2-θ_2^∗)and u_2→ P_12(θ_1-θ_1^∗) -P_11(θ_2-θ_2^∗). So (<ref>) becomes: θ_2 =θ_2^∗+P_12/P_11(θ_1-θ_1^∗)+𝒪((θ_1-θ_1^∗)^3) r =r^∗-l/2k(P_11(θ_1-θ_1^∗)+P_12(θ_2-θ_2^∗))^2+𝒪((θ_1-θ_1^∗)^3) =r^∗-l/2k(P_11+P_12^2/P_11)^2(θ_1-θ_1^∗)^2+𝒪((θ_1-θ_1^∗)^3) And our parameterized graph in (θ⃗,r) becomes: G(θ_1) :=(θ_1,θ_2^∗+P_12/P_11(θ_1-θ_1^∗) ,r^∗-l/2k(P_11+P_12^2/P_11)^2(θ_1-θ_1^∗)^2). In particular, for ℬ_TP(r_7) we take: r→ r- 0.3893, u_1→ 0.1669θ_1- 0.9860(θ_2-π/2), and u_2→ -0.9860θ_1 -0.1669(θ_2-π/2). Also, P_12/P_11= -0.9860/ 0.1669≈ -5.909, and -l/2k(P_11+P_12^2/P_11)^2 ≈ -0.03200( 35.92) ≈ -1.150. Therefore, we find the parameterized graph of ℬ_TP(r_7) as G=(θ_1, -5.909 θ_1, 0.3893-1.150 (θ_1-π/2)^2). Below, this graph is plotted in the rθ_2-plane. We also include the numerically found RE for comparison. ℬ_TP(r_2) Pitchfork for ℓ_1=3/4 Similarly, for the start of the trapezoid to perpendicular bifurcation ℬ_TP(r_2) we find: G=(θ_1,π/2-6.44012 (θ_1-π/2) ,0.360032-0.0253042(42.4752) (θ_1-π/2)^2). And below we include a plot comparing the numerically found results to this curve. ℬ_CP Pitchforks for ℓ_1=3/4 ℬ_CP is the curve which bifurcates from the colinear configuration, and later merges with the perpendicular configuration. For the two ends of the curve we find (respectively): 20ptG=(θ_1, 19.99θ_1, 0.3686+0.01874(400.8)θ_1^2), and 20ptG=(θ_1,π/2-21.611θ_1, 0.3865-0.01684(468.0) θ_1^2). Below are the plots comparing the numerically found results to these curves. ℬ_LP Pitchforks for ℓ_1=1/2 ℬ_LP is the curve which bifurcates from (θ_1,θ_2)=( ^-1√(2), ^-1√(2)) at r=0 (a collision), and later merges with the perpendicular configuration. For the end of the curve at r_2 we find: G=(θ_1,π/2 -1.686θ_1,0.3377-0.1068(3.842)θ_1^2). Below we compare the numerically found results to this curve. L^2 Bifurcation Analysis Now that we have confirmed the existence of the trapezoid and asymmetric RE curves, let us determine the regions of the parameter space that differ by the number of RE. For the asymmetric curves, we first calculate angular momentum for the numerically located RE families. Recall for ℓ_1≠1/2, we had a curve bifurcating from the trapezoid RE and subsequently merging with the perpendicular RE (see Figure <ref>), and a curve bifurcating from the colinear RE and subsequently merging with the perpendicular RE (see Figure <ref>). And for ℓ_1=1/2, we had one family of solutions in which a curve bifurcated from a collision of the two dumbbells, and subsequently merged with the perpendicular RE (see Figure <ref>). For each of these families, the value of M_1 did not qualitatively change the shape of the angular momentum graphs. Below you see the graphs for these families over the relevant radii. Observe these curves are all strictly decreasing. So for the angular momentum range allowed by these curves, there is only one RE per L. For the trapezoid RE, the L^2 curves look qualitatively similar to the ones below. 
The L^2 equation <ref> for the trapezoid configuration becomes (r^2+B_1+B_2)^2/2(1/d^3_11+1/d^3_12). Note geometrically that when the length of the dumbbells are unequal, and r=0, the distances d_11,d_12 are strictly positive, and by inspection L^2 is finitely positive. When the lengths are equal, the distance d_11 goes to zero, and the expression becomes unbounded (as in Figure <ref>). Taking a derivative, we have: ∂_rL^2=-3/2r(r^2+B_1+B_2)^2(1/d^5_11+1/d^5_12)+2r(r^2+B_1+B_2)(1/d^3_11+1/d^3_12). We can see by inspection that for dumbbells of unequal length, this slope goes to zero as r→0. Additionally, since d_ij→ r as r→∞, the expression goes to 1, and angular momentum is unbounded as r→∞. Looking at the curvature at r=0, we calculate ∂_r^2L|_r=0. However, this gives a very complicated expression, ill-suited to determining the sign. If instead, we make substitutions <ref> in ∂_r^2L|_r=0, then after some simplification we get the following when ℓ_1<ℓ_2: - 3ℓ^7(1 + M)^2 - 3ℓ^6(9 + 14 M + 5 M^2) - 2ℓ^5(51 + 58 M + 15 M^2) - 10ℓ^4(21 + 16 M + 3 M^2) - 5ℓ^3(51 + 22 M + 3 M^2) - 2ℓ^2(93 + 8 M + 3 M^2) +28ℓ(M-3) + 8(M-3). Observe by inspection that each term is negative when M<3, or equivalently when M_1<3/4 (same result occurs for ℓ_2<ℓ_1 when M_2<3/4). So we have negative curvature (solid line in Figure <ref>). Observe that if M>3, the last two terms are positive such that for small ℓ (ℓ_1 small) we have positive curvature (dotted line in Figure <ref>). So the boundary behavior matches what we see in the graphs. However, proving the shape of the graph between the boundaries for all choices of parameters (ℓ_1,M_1) is nontrivial. Numerically, we find that the above graphed results reflect the possibilities. That is, when M_1<3/4 or with ℓ_1 large, we have one minimum, and as L^2 increases, we have zero, then two, then one RE. When M_1>3/4 and ℓ_1 small, we have no minimum, and as L^2 increases, we have zero, then one RE. And when ℓ_1=1/2, we have zero, then two RE. §.§.§ Energetic Stability for Equal Mass Now that we have located RE for the equal mass configuration, let us consider its stability. Since we have already characterized energetic stability for the colinear and perpendicular configurations, we will examine only the trapezoid configuration and the asymmetric bifurcation curves with L^2>0. As we saw in Section <ref>, to determine energetic stability we check if the RE are strict minima of the amended potential V. However, for the equal mass configuration, the amended potential V's Hessian is not (in general) block diagonal. So it is not simple to characterize conditions under which H is positive definite. However, note that for each choice of r, we can consider the two-dimensional trace and determinant of the amended potential V's Hessian (<ref>). We can therefore identify the regions where the RE would be 2D maxima, minima, or saddles. In our effort to identify the 3D minima, being a 2D minimum becomes a necessary criteria, and will help us eliminate unstable RE. After setting r, the configuration space consists of a (θ_1,θ_2) torus. In our analysis below, you can reference the graphs supplied in the equal mass bifurcation section of <ref>. These will provide visual references consistent with the stability we find below. Note that in addition to the RE graphed in these figures, we also hatch marked the positive regions of the two-dimensional V trace and determinant. This way, the RE are stable if they are in a crosshatched region. 
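Because the L^2 term of V does not involve the angles, this two-dimensional angular Hessian is simply the angular Hessian of U, so the trace and determinant behind the hatching can be tabulated with a few lines of finite differencing. The sketch below does so at the four symmetric configurations for one example choice of (r, ℓ_1); the step size is arbitrary and the printed numbers are illustrations rather than results.
\begin{verbatim}
# Tabulate the 2D angular trace/determinant of V (equivalently of U, since the
# L^2 term is angle-independent) at the four symmetric equal-mass configurations.
# The values of r and l1 are one example choice; h is an arbitrary step size.
import numpy as np
from itertools import product

r, l1 = 0.018, 0.75
l2, h = 1 - l1, 1e-4

def U(t1, t2):                       # scaled equal-mass potential, x_ij = 1/2
    tot = 0.0
    for u, v in product((1, 2), (1, 2)):
        d2 = (r**2 - (-1)**u*l1*r*np.cos(t1) + (-1)**v*l2*r*np.cos(t2)
              - (-1)**(u + v)*0.5*l1*l2*np.cos(t1 - t2) + 0.25*(l1**2 + l2**2))
        tot -= 0.25/np.sqrt(d2)
    return tot

def trace_det(t1, t2):
    H11 = (U(t1 + h, t2) - 2*U(t1, t2) + U(t1 - h, t2))/h**2
    H22 = (U(t1, t2 + h) - 2*U(t1, t2) + U(t1, t2 - h))/h**2
    H12 = (U(t1 + h, t2 + h) - U(t1 + h, t2 - h)
           - U(t1 - h, t2 + h) + U(t1 - h, t2 - h))/(4*h**2)
    return H11 + H22, H11*H22 - H12**2

for name, (t1, t2) in [("colinear R_C", (0.0, 0.0)),
                       ("perpendicular R_P1", (np.pi/2, 0.0)),
                       ("perpendicular R_P2", (0.0, np.pi/2)),
                       ("trapezoid R_T", (np.pi/2, np.pi/2))]:
    tr, det = trace_det(t1, t2)
    print(f"{name:20s} trace = {tr:+.4e}   det = {det:+.4e}")
\end{verbatim}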
In Figure <ref> you see an example of such a graph (r=0.018, ℓ_1=3/4). We see that the only RE in the positive 2D determinant region are ℛ_C and ℛ_P_1. And of these, only ℛ_C also has a positive 2D trace. So we may conclude that ℛ_C is a 2D minimum, ℛ_P_1 is a 2D maximum, and ℛ_P_2,ℛ_T are 2D saddles. As noted earlier, we find ℓ_1=1/2 leads to qualitatively different bifurcations than when ℓ_1≠1/2. So let us explore the energetic stability of the RE for these two cases. Case: ℓ_1≠1/2 Recall the description for the ℓ_1≠1/2 curves given in the equal mass bifurcation section of <ref>, where we determined that, besides the colinear and perpendicular which we covered previously, the only curves with physically realizable angular momenta are the trapezoid and bifurcated curves ℬ_TP,ℬ_CP. So we restrict our analysis to these below. Looking at the specific case of ℓ_1=3/4, we calculate bifurcation radii of interest as: (r_1,r_2,r_3,r_4)≈(1/4,0.3600,0.3630,0.3686), and (r_5,r_6,r_7,r_8) ≈(0.3812,0.3865,0.3893,1/2), (consistent with the figures in the equal mass bifurcation section). In the tables below, we map the signs of the full 3D eigenvalues (Sgn(e_r,e_θ_i) ∈(± ,± ,±)) for each curve as r varies through the above mentioned radii. These signs are consistent with the graphs in the equal mass bifurcation section of <ref>. These signs are found by looking at the eigenvalues of the Hessian at the numerically located RE in the configuration space. Since the signs are from the 3D Hessian, the sign of the radial eigenvalue seen in the tables below will change at radii not indicated in our 2D images above. By comparing the 2D to the 3D eigenvalues, we can also discern which is the radial eigenvalue. Trapezoid0.5cm0cmIn addition to the bifurcation seen above at r_2≈ 0.360, for some r_t (which depends upon M_1), the trapezoid radial eigenvalue turns from negative to positive. Below is a table of the signs of H's eigenvalues (<ref>) as r ranges through the relevant radii. Observe we find no stability (Sgn(e_r,e_θ_i)(+,+,+)). table Equal Mass Trapezoid Eigensigns with ℓ_11/2 Below you find visualizations of the dumbbell configuration for a couple of radii in this range. TP Bifurcation 0.5cm0cm We saw bifurcations in Figures <ref>b, <ref>a at (r_2,r_7) ≈(0.3600,0.3893). Below is a table of the signs of H's eigenvalues (<ref>) as r ranges through the relevant radii. Note that the notation 0^{-,+,-} is used below to imply that the eigenvalue is near zero, but takes on positive or negative values depending upon the value of M_1. In particular, 0^{-,+,-} is meant to imply that for smaller values of M_1, the eigenvalue is negative, for middle range values of M_1 the eigenvalue is positive, and for larger values of M_1 the eigenvalue is again negative. Observe we find no stability (Sgn(e_r,e_θ_i)(+,+,+)). table Equal Mass Trapezoid→Perpendicular Eigensigns with ℓ_11/2Below you find visualizations of the dumbbell configuration for various radii in this range. CP Bifurcation 0.5cm0cm We saw bifurcations in Figures <ref>, <ref> at (r_4,r_6) ≈(0.3686,0.3865). Below is a table of the signs of H's eigenvalues (<ref>) as r ranges through these radii. Observe we find no stability (Sgn(e_r,e_θ_i) (+,+,+)). table Equal Mass Colinear→Perpendicular Eigensigns with ℓ_11/2 Below you find visualizations of the dumbbell configuration for various radii in this range. 
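The eigensigns in tables such as these can be generated mechanically: fix a configuration on one of the RE curves, take L^2 from the radial requirement, and read off the signs of the eigenvalues of the 3×3 Hessian of V by finite differences. The sketch below does this for the trapezoid family; ℓ_1=3/4, M_1=1/2 and the two radii are arbitrary example values, and nothing is claimed about the resulting signs beyond what the tables record.
\begin{verbatim}
# Signs of the eigenvalues of the 3x3 Hessian of V at trapezoid RE, obtained by
# finite differences.  l1, M1 and the radii are arbitrary example values.
import numpy as np
from itertools import product

l1, M1 = 0.75, 0.5
l2, M2 = 1 - l1, 1 - M1
B1, B2 = l1**2/(4*M2), l2**2/(4*M1)      # B_i = l_i^2/(4*M_n), n != i

def U(r, t1, t2):                        # scaled equal-mass potential, x_ij = 1/2
    tot = 0.0
    for u, v in product((1, 2), (1, 2)):
        d2 = (r**2 - (-1)**u*l1*r*np.cos(t1) + (-1)**v*l2*r*np.cos(t2)
              - (-1)**(u + v)*0.5*l1*l2*np.cos(t1 - t2) + 0.25*(l1**2 + l2**2))
        tot -= 0.25/np.sqrt(d2)
    return tot

def eigensigns(r0, h=1e-4):
    d11 = np.sqrt(r0**2 + 0.25*(l1 - l2)**2)
    d12 = np.sqrt(r0**2 + 0.25*(l1 + l2)**2)
    L2 = 0.5*(r0**2 + B1 + B2)**2*(d11**-3 + d12**-3)  # trapezoid radial requirement
    V = lambda q: L2/(2*(q[0]**2 + B1 + B2)) + U(q[0], q[1], q[2])
    q0, E = np.array([r0, np.pi/2, np.pi/2]), h*np.eye(3)
    H = np.array([[(V(q0 + E[i] + E[j]) - V(q0 + E[i] - E[j])
                    - V(q0 - E[i] + E[j]) + V(q0 - E[i] - E[j]))/(4*h**2)
                   for j in range(3)] for i in range(3)])
    return np.sign(np.round(np.linalg.eigvalsh(H), 9))

for r0 in (0.3, 0.8):
    print(f"r = {r0}: eigenvalue sign pattern {eigensigns(r0)}")
\end{verbatim}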
Stability Conclusions In our effort to find energetic stability, we were looking for eigenvalues (e_r,e_θ_i) such that Sgn(e_r,e_θ_i) =(+,+,+), which are associated with strict minima. In Section <ref>, we saw that ℛ_C, with sufficiently large radius such that ∂_rL^2>0, is energetically stable. We note from above that as r varies, none of the other RE curves' signs become (+,+,+). For the most part, we find saddles. But for the interval (0,r_7) of ℛ_P_1, for certain M_1 values we have maxima. Similarly we find maxima for ℛ_T on (r_2^+,π/2,π/2). Although, as has been noted previously, the positive definite nature of the kinetic energy in the Hamiltonian makes these RE saddles in the energy manifold. Case: ℓ_1=1/2 Now we will take a look at ℓ_1=1/2, since it has qualitatively different bifurcation curves than ℓ_11/2. Recall the description for the ℓ_1=1/2 curves given in the equal mass bifurcation section of <ref> where we determined that, besides the colinear and perpendicular which we covered previously, the only curves with physically realizable angular momenta are the trapezoid and ℬ_LP/RP. So we restrict our analysis to these below. In the following tables, we look at each of these curves, and map the signs of the full 3D eigenvalues (Sgn(e_r,e_θ_i) ∈(± ,± ,±)) as r varies through the above mentioned radii. These signs are consistent with the graphs in the equal mass bifurcation section of <ref>. These signs are found by looking at the eigenvalues of the Hessian at numerically located RE in the configuration space. Note that the signs of the 3D radial eigenvalue, seen in the tables below, will change at radii not suggested in our 2D images above. Trapezoid 0.5cm0cm In addition to the bifurcation seen in Figure <ref> at r=0, for some r_t (which depends upon M_1), the trapezoid radial eigenvalue turns from negative to positive. Below is a table of the signs of H's eigenvalues (<ref>) as r ranges through the relevant radii. Observe we find no stability (Sgn(e_r,e_θ_i) ≠(+,+,+)). table Equal Mass Trapezoid Eigensigns with ℓ_1=1/2 Below you find visualizations of the dumbbell configuration for a couple of radii in this range. Perpendicular Bifurcated 0.5cm0cm We saw a bifurcation in Figure <ref> at r_2≈ 0.3377. Below is a table of the signs of H's eigenvalues (<ref>) as r ranges through the relevant radii. Observe we find no stability (Sgn(e_r,e_θ_i)≠(+,+,+)). table Equal Mass Perpendicular Bifurcation Eigensigns with ℓ_1=1/2 0^+ r_2^- r_2^+ (-,+,-) (0^-,+,-) N/A Below you find visualizations of the dumbbell configuration for various radii in this range. Stability Conclusions for Equal Mass In our effort to find energetic stability, we were looking for eigenvalues (e_r,e_θ_i) such that Sgn(e_r,e_θ_i) =(+,+,+), which are associated with strict minima and stability. In Section <ref>, we saw that ℛ_C, with sufficiently large radii such that ∂_rL^2>0, is energetically stable. We note from above that as r varies, none of the other RE curves' signs become (+,+,+). For the most part, we find saddles in V. But we do find maxima in V for low radius ℛ_T, which are saddles for the Hamiltonian when considering the positive definite kinetic energy. §.§.§ Linear Stability for Equal Mass Despite finding energetic stability in only the colinear case, when we map the linear stability criteria (<ref>) to the equal mass configuration, we see some linear stability for each of the symmetric, but none of the asymmetric cases. 
As we have already examined linear stability for the colinear and perpendicular configurations, we will restrict ourselves to the trapezoid. Particularly, for small r in the θ_1θ_2-plane (see Figure <ref>) we have linear stability in the trapezoid case when ℓ_1≠1/2. Below are graphs in the θ_1θ_2 and rM_1-planes at r,ℓ_1,M_1 values where linear stability exists. In Figure <ref> we note a lack of stability for the trapezoid configuration when ℓ_1=1/2. We also see that as ℓ _1→ 1, stability appears to converge to those found by Beletskii and Ponomareva <cit.> for the dumbbell/point mass problem in their fig 5. All linear stability occurs at low r. Similar figures for ℓ_1<1/2 exist requiring M_2 large when ℓ_2 large. Physically speaking, linear stability here requires that if one body is long, it is also the massive body. This would require the shorter body to be less massive. This seems to fit well with how mass and size tend to work in real life, and so does not impose a requirement which is difficult to satisfy. Equal Mass Linear Stability Conclusions For the equal mass configuration, in addition to our previous colinear and perpendicular results, we found linear stability for the trapezoid case when r is small and ℓ_1≠1/2, but none for the asymmetric RE curves. As referenced earlier, we now provide a theorem which geometrically restricts the location of planar rigid bodies when in planar RE with a dumbbell. We noted earlier how the RE located in this paper obeyed these restrictions. § PERPENDICULAR BISECTOR THEOREM FOR RE OF A DUMBBELL AND RIGID PLANAR BODIES In 1990, Conley and Moeckel developed the perpendicular bisector theorem which restricts the possible geometries of central configurations <cit.>. For each pair of point masses r⃗_i,r⃗_j, the theorem asks one to consider the four quadrants formed by the line containing r⃗_i,r⃗_j, and its perpendicular bisector. The hourglass shape which is formed from the union of the 1st and 3rd quadrants is called a cone, similarly with the 2nd and 4th quadrants. The term “open cone” refers to a cone minus the axes. Let q = {r⃗_1, r⃗_2,...} be a planar central configuration and let r⃗_i and r⃗_j be any two of its points. Then, if one of the two open cones determined by the line through r⃗_i and r⃗_j and its perpendicular bisector contains points of the configuration, so does the other one. We will prove an extension of this theorem as it relates to a dumbbell and several rigid planar bodies in planar RE. In particular, the theorem will also apply to discretized bodies. A discretized body is one which consists of point masses, all connected by massless rods, with a point mass body being trivially discretized. And of course, a dumbbell is a discretized body. r0.41 1 [subfigure]justification=centering < g r a p h i c s > Dumbbell and Rigid Bodies Initial system rotation chosen such that the dumbbell is parallel to the horizontal axis. For this analysis, reference Figure <ref>. As before, we let r⃗_1 and r⃗_2 be the locations of the dumbbell's masses (with mass ratios x_1,x_2). We denote the body mass of the dumbbell as M_1, the dumbbell body as ℬ_1, and the other rigid bodies as ℬ_2,...,ℬ_n. We assume a reference frame rotating such that a RE configuration will be at equilibrium. Note that the dynamics do not depend upon our choice initial system rotation, so for convenience of calculation, and without loss of generality, we choose this rotation such that the dumbbell is parallel to the horizontal axis, see Figure <ref>. 
Let a dumbbell r_1r_2 and one or more planar rigid bodies ℬ_2,...,ℬ_n be in a planar RE. Then, if one of the two open cones determined by the lines through r_1r_2 and its perpendicular bisector contains one or more rigid bodies, the other open cone cannot be empty. The idea of the proof is to calculate the rotational acceleration θ̈ of the dumbbell, and to show that it will be nonzero if the rigid bodies are contained in just one open cone. Note that each of the point masses on the dumbbell experiences acceleration due to gravitation and centrifugal forces. Our choice of system rotation has positioned our dumbbell such that the rotational acceleration is determined by the second of our vector components (perpendicular to the dumbbell). Therefore, to determine the rotational acceleration of the dumbbell, we subtract the accelerations of the two point masses, and look at the vertical component. §.§ Centrifugal Force Note that the centrifugal forces for our masses are: 20ptF⃗_c_i=M_1x_iv⃗_i^2pt2/|r⃗_i|^2r⃗_i=M_1x_i·φ^2r⃗_i =:M_1x_i·φ^2(r_ix,r_iy). And the accompanying centrifugal accelerations can be written as k(r_ix,r_iy), with k depending upon rotation rate. The centrifugal accelerations perpendicular to the massless rod are kr_iy. Due to the dumbbell being parallel with the horizontal axis, note that the second (vertical) component of r⃗_i does not depend on i. Therefore, when we subtract these components to determine the rotational acceleration due to centrifugal force (θ̈_c=kr_2y-kr_1y), the terms cancel and we find centrifugal force does not contribute to rotational acceleration. §.§ Gravitational Force Observe that the gravitational effect of r⃗_1 on r⃗_2 is exactly canceled out by an equal and opposite force of r⃗_2 on r⃗_1 being transmitted through the massless rod connecting them. Therefore, for each point mass on the dumbbell, gravitationally we need only take into account the force exerted by the rigid bodies. Note that the gravitational force and acceleration on each r_i is -∂_r_iU and -∂_r_iU/M_1x_i, respectively. If we let δ(p⃗) represent the density function, we have: ∂_r_2U/M_1x_2-∂_r_1U/M_1x_1=-1/M_1(∂_r_2/x_2-∂_r_1/x_1)(∑_k=2^n∫_p∈ℬ_k(x_1/|p-r_1| ^2+x_2/|p-r_2|^2)δ(p) dp) =-1/M_1∑_k=2^n( ∫_p∈ℬ_k∂ _r_2( δ( p)/|p-r_2| ^2) -∂ _r_1(δ( p) /|p-r_1| ^2) dp) =2/M_1∑_k=2^n∫_p∈ℬ_k(p-r_2/|p-r_2|^4-p-r_1/|p-r_1|^4)δ(p) dp. Looking at the vertical components, we can calculate the total rotational acceleration for the dumbbell. And, since the dumbbell is horizontal, note that p_y-r_2y=p_y-r_1y, allowing for some simplification in the following calculation: ··θ=2/M_1∑_k=2^n∫_p∈ℬ_k(p_y-r_2y/|p-r_2|^4-p_y-r_1y/|p-r_1| ^4) δ( p) dp. =2/M_1∑_k=2^n∫_p∈ℬ_k(p_y-r_1y)(1/|p-r_2|^4-1/|p-r_1| ^4) δ( p) dp. Now consider the quadrants determined by the line through the dumbbell's rod and that rod's perpendicular bisector (see Figure <ref>). Since density is always positive, we see that if a particular ℬ_k is in the 4th quadrant, then |p⃗-r⃗_2|<|p⃗-r⃗_1|, p_y-r_iy<0 for all p⃗∈ℬ_k, and the integral will be negative. If a ℬ_k is in the 3rd quadrant, then |p⃗-r⃗_1|<|p⃗-r⃗_2|, p_y-r_iy<0 for all p⃗∈ℬ_k, and the integral will be positive. Similarly, the integral will be negative when a ℬ_k is in the 2nd quadrant and positive in the 1st quadrant. The preceding analysis for discretized bodies is nearly identical, except for the use of summations over the discrete points in <ref>, instead of integrals. 
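For a discretized body reduced to a single point mass the sign pattern can be checked directly; the sketch below evaluates the summand for a test point placed in each open quadrant, with the axes given by the rod line and its perpendicular bisector. The dumbbell parameters and test points are arbitrary examples, and only the signs are of interest.
\begin{verbatim}
# Sign of the summand (p_y - r_1y)(|p - r_2|^-4 - |p - r_1|^-4) for a single
# point mass placed in each open quadrant.  Geometry and test points are
# arbitrary examples; only the signs matter.
import numpy as np

x1, x2, ell = 0.4, 0.6, 1.0
r1 = np.array([-x2*ell, 0.0])            # dumbbell masses on the horizontal axis,
r2 = np.array([ x1*ell, 0.0])            # center of mass at the origin
mid = 0.5*(r1 + r2)                      # the perpendicular bisector passes here

def summand_sign(p):
    term = (p[1] - r1[1])*(np.linalg.norm(p - r2)**-4 - np.linalg.norm(p - r1)**-4)
    return int(np.sign(term))

tests = [("quadrant 1 (upper, nearer r2)", mid + np.array([ 1.5,  1.0])),
         ("quadrant 2 (upper, nearer r1)", mid + np.array([-1.5,  1.0])),
         ("quadrant 3 (lower, nearer r1)", mid + np.array([-1.5, -1.0])),
         ("quadrant 4 (lower, nearer r2)", mid + np.array([ 1.5, -1.0]))]
for name, p in tests:
    print(f"{name}: sign {summand_sign(p):+d}")
\end{verbatim}
For these test points the summand is positive in the first and third quadrants and negative in the second and fourth, exactly as the quadrant-by-quadrant argument above requires.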
Therefore, if the ℬ_k are all in the open cone of the 2nd and 4th quadrants, the dumbbell will accelerate clockwise. And if they are in the 1st and 3rd, the dumbbell will accelerate counterclockwise. So, for RE, either both cones are empty or both are occupied. CHAPTER: CONCLUSION § DUMBBELL/POINT MASS PROBLEM We verified the RE (colinear and isosceles) and stability for the dumbbell/point mass problem found by Beletskii and Ponomareva. We also found RE, including some stable RE in the overlapped colinear region. We performed bifurcation analyses for all RE in order to characterize qualitatively different regions of the L^2 and x_1 M_1 parameter space where the number of RE differ. Colinear Our bifurcation analysis discovered that in the non-overlapped colinear configuration, irrespective of x_1,M_1, for small angular momenta there are no RE. For sufficiently large angular momenta there are two RE (one at a closer radius, and another farther away). For the overlap region, when x_1>x_2 we found no RE for any angular momenta. However, we discovered that when x_1≪ x_2 we have either one or three RE depending upon the angular momentum, and when x_1<x_2 with x_1 sufficiently large we have one RE for all angular momenta. Stability We verified stability found in Beletskii and Ponomareva for the non-overlap region, with sufficiently large radius. In the overlapped region we also found radial intervals of energetic stability for sufficiently small x_1. Smaller x_1 is associated with larger intervals of stability. For physically realizable angular momenta, linear stability coincided with the energetic stability. Isosceles In our bifurcation analysis, we found as angular momentum increases, that for a sufficiently massive point mass body there is either no RE, or one RE. For a less massive point mass body, as angular momentum increases we find no RE, then two RE, then one RE (see Figure <ref>).Stability We found no energetic stability for the isosceles configuration, but for sufficiently large vertical body mass, we find radial intervals of linear stability. Depending on x_1,M_1, there are either one or two such radial intervals. § TWO DUMBBELL PROBLEM We found RE and examined stability for symmetric (colinear, perpendicular, equal mass) configurations, as well as asymmetric RE curves bifurcating from them. Colinear We discovered that irrespective of the choice of parameters, with sufficiently low angular momenta L, there are no RE. However, for some L_b>0, and at some radius r_b, two RE bifurcate. And for all angular momenta greater than L_b, there are two RE (see Figure <ref>). Stability We showed that the RE of the colinear configuration are stable when the radius is sufficiently large, in particular when ∂_rL^2>0. We found linear stability coincided with the energetic stability. Perpendicular We proved that for the perpendicular configuration, there is a family of isosceles RE (where d_11=d_21 and d_12=d_22). We found RE for every radius of the isosceles family. Numerically, we found there were many qualitatively different L^2 bifurcation graphs for the family, depending upon the choice of parameters (see Figure <ref>). Stability We showed there is no energetic stability for the isosceles family. However, we did find radial intervals of linear stability for for large (resp. small) M_1 when (θ_1,θ_2)=(π/2,0) (resp. (θ_1,θ_2)=(0,π/2)) (see Figure <ref>). Equal Mass We identified a symmetric equal mass trapezoid RE ((θ_1,θ_2)=(π/2,π/2)). 
Additionally, a bifurcation analysis (this time using r as our bifurcation parameter) revealed several families of RE bifurcating from the symmetric ones and curving through rθ_1θ_2-space. While some of these RE are nonphysical due to their complex or unbounded angular momenta, some are not. In particular, for ℓ_1≠π/2 we found a curve bifurcating from the trapezoid RE and subsequently merging with the perpendicular RE (see Figure <ref>), and a curve bifurcating from the colinear RE and subsequently merging with the perpendicular RE (see Figure <ref>). For ℓ_1=1/2 we found a curve bifurcating from a collision of the two dumbbells, and subsequently merging with the perpendicular RE (see Figure <ref>). Regarding the L^2 bifurcation, we found 3 possibilities for the trapezoid configuration. When M_1<3/4 or with ℓ_1 large, we have one minimum, and as L^2 increases, we have zero, then two, then one RE. When M_1>3/4 and ℓ_1 small, we have no minimum, and as L^2 increases, we have zero, then one RE. And when ℓ_1=1/2, we have zero, then two RE as L^2 increases. For the bifurcating families of asymmetric (and physically realizable) RE, we have exactly one RE for each angular momentum within the range of angular momenta occurring in these curves. Stability The trapezoid configuration and asymmetric RE showed no energetic stability. The asymmetric RE also showed no linear stability. However, we did find linear stability for the trapezoid case for small r when ℓ_1≠1/2. Perpendicular Bisector Theorem for a Dumbbell and Planar Rigid Bodies We also proved an extension of the Conley Perpendicular Bisector Theorem. Let a dumbbell r_1r_2 and one or more planar rigid bodies ℬ_2,...,ℬ_n be in a planar RE. Then, if one of the two open cones determined by the lines through r_1r_2 and its perpendicular bisector contains one or more rigid bodies, the other open cone cannot be empty. CHAPTER: APPENDICES tocchapterAppendices § NOTATION USED IN THIS PAPER Notation Meaning O Origin, also system's center of mass ℬ_i Gravitational bodies in the system C⃗,C⃗_i Location of center of mass for bodies r⃗_p Location of point mass body r⃗_i,r⃗_ij,r⃗_kj Locations of points on dumbbell or discretized body M_i Mass of body i r⃗, r Vector and distance between bodies' centers of mass R distance between bodies' centers of mass after change of variable ℓ_i Length of massless rod connecting point masses r⃗_ij on dumbbell ϕ Acute angle between positive horizontal axis and r⃗ θ,θ_i Angle between r⃗ and dumbbell's rod v⃗_i, v⃗_ij Velocity with which mass r⃗_i,r⃗_ij is moving ℒ,T,U,V Lagrangian, kinetic, potential energy, and amended potential d_i,d_ij Distance from r⃗_p to r⃗_i, and from r⃗_1i to r⃗_2j, respectively x_i,x_ij Dumbbell mass ratios B,B_i Moment of inertia for dumbbells F⃗_c_i Centrifugal force experienced by r⃗_i L Scalar angular momentum ℛ_C,ℛ_P_1,2,ℛ_T Equal mass colinear, perpendicular, and trapezoid RE ℬ_CC^±,ℬ_TP^±, ℬ_PC^±,ℬ_CP^± Equal mass ℓ_1≠1/2 RE curves bifurcating from symmetric RE ℬ_C^±,ℬ_T^±, ℬ_LP^±,ℬ_RP^± Equal mass ℓ_1=1/2 RE curves bifurcating from symmetric RE δ(p⃗) Density function 99 tocchapterBibliography =16pt newton1687 I. Newton, Philosophiae Naturalis Principia Mathematica, Londini: Jussu Societatis Regiæ ac Typis Josephi Streater. Prostat apud plures Bibliopolas, (1687). Euler1766 L. Euler, Considerationes de motu corporum coelestium (E304), Novi Commentarii academiae scientiarum Petropolitanae 1766(10), (1766), 544-558. Euler1767 L. 
Euler, De motu rectilineo trium corporum se mutuo attrahentium (E327), Novi Commentarii academiae scientiarum Petropolitanae 1767(11), (1767), 144-151. Laplace1787 P. Laplace, Mémoire sur les inégalités séculaires des planètes et des satellites, MARS, 1–50; OC., XI, (1787), 49–92. Dirichlet1846 P.G.L. Dirichlet, Über die Stabilität des Gleich-gewichts, CRELLE, J. Reine Angew. Math. 32, (1846), 85–88. Maxwell1859 J.C. Maxwell, On the stability of the motion of Saturn’s rings, An essay, which obtained the Adams Prize for the year 1956, in the university of Cambridge. Cambridge: Macmillan and CO. 23 Henrietta street, Covent garden, London, (1859), 1-71 Noether1918 E. Noether, Invariante Variationsprobleme, Kgl. Ges. d. Wiss. Nachrichten, Math.-phys. Klasse, (1918), 235-257. smale1970 S. Smale, Topology and mechanics, Inventiones mathematicae, 10, (1970), 305-331. beletskii1990 V.V. Beletskii, O.N. Ponomareva, Parametric Analysis of Relative Equilibrium Stability in a Gravitational Field, Kosmicheskie Issledovaniia, 28(5), (1990), 664-675. Moeckel1990R. Moeckel, On Central Configurations, Math. Z. 205, (1990), 499-517. wang1992 L. Wang, J.H. Maddocks, P.S. Krishnaprasad, Steady Rigid-Body Motions in a Central Gravitational Field, Journal of the Astronautical Sciences, 40(4), (1992), 449-478. marsden1992 J.E. Marsden, Lectures on Mechanics, Cambridge Univ. Press, (1992). Moeckel1994 R. Moeckel, Linear stability of relative equilibria with a dominant mass, J Dyn Diff Eqns, Vol 6, No. 1, (1994), 37-51. maciejewski1995 A.J. Maciejewski, Reduction, Relative Equilibria and Potential in the Two Rigid Bodies Problem, Celestial Mechanics and Dynamical Astronomy, 63(1), (1995), 1-28.6 scheeres2000 D.J. Scheeres, S.J. Ostro, R.A. Werner, E.I. Asphaug, R.S. Hudson. Effects of Gravitational Interactions on Asteroid Spin States, Icarus 147, (2000), 106-118. scheeres2001 D.J. Scheeres, Changes in Rotational Angular Momentum due to Gravitational Interactions Between Two Finite Bodies, Celest. Mech. Dynam. Astron. 81, (2001), 39-44. Rahman2002 Q.I. Rahman, G. Schmeisser. Analytic Theory of Polynomials, London Mathematical Society Monographs, New Series, No. 26, (2002), 366-368. ElipeETAL2006 A. Elipe, M. Palacios, and H. Pretka-Ziomek, Equilibria of the three-body problem with rigid dumb-bell satellite, Chaos Soliton Fract., 35, (2006), 830-842. scheeres2006 D.J. Scheeres, Relative Equilibria for General Gravity Fields in the Sphere-Restricted Full 2-Body Problem, Celestial Mechanics and Dynamical Astronomy, 94(3), (2006), 317-349. scheeres2007 D.J. Scheeres, Minimum Energy Configurations of Resting Equilibria, abstract presented at the 38th American Astronomical Society Division on Dynamical Astronomy, Ann Arbor, (2007). scheeres2009 D.J. Scheeres, Stability of the Planar Full 2-Body Problem, Celestial Mechanics and Dynamical Astronomy, 104(1-2), (2009), 103-128. scheeres2012 D.J. Scheeres, Minimum Energy Configurations in the N-Body Problem and the Celestial Mechanics of Granular Systems, Celestial Mechanics and Dynamical Astronomy, 113(3), (2012), 291-320. Kinoshita2013 H. Kinoshita, Stability motions of an axisymmetric body around a spherical body and their stability, Publ. Astron. Soc. Jpn., 22, (1970), 383–403. Michaely2017 E. Michaely, H. Perets, E. Grishin, On the Existence of Regular and Irregular Outer Moons Orbiting the Pluto-Charon System, The Astrophysical Journal, Volume 836, Issue 1, article id. 27, (2017). MaciejewskiETAL2018 A. J. Maciejewski, M. Przybylska, L. Simpson, and W. 
Szumiński, Non-integrability of the dumbbell and point mass problem, Celestial Mech. Dynam. Astronom., 117, (2018), 315-330. moeckel2018 R. Moeckel, Counting Relative Equilibrium Configurations of the Full Two-Body Problem, Celestial Mechanics and Dynamical Astronomy, 130(2), (2018). Naidua2020 S.P. Naidua, L.A.M. Bennera, M. Brozovica, M.C. Nolanb, S.J. Ostroa, J.L. Margotc, J.D. Giorginia, T. Hirabayashid, D.J. Scheeres, P. Pravecf, P. Scheirichf, C. Magrig, J.S. Jaoa, Radar observations and a physical model of binary near-Earth asteroid 65803 Didymos, target of the DART mission, Icarus, Volume 348, (2020). DilaoMurteira2020 R. Dilão, M. Murteira, Principal Periodic Orbits of the Keplerian Dumbbell System, Siam J. Applied Dynamical Systems, Vol. 19, No. 1, (2020), 181-207. Bistafa2021 S.R. Bistafa, Euler's three-body problem, Euleriana: 1(2), Article 6, (2021), 181.
http://arxiv.org/abs/2307.00858v2
20230703085730
Beyond the Snapshot: Brain Tokenized Graph Transformer for Longitudinal Brain Functional Connectome Embedding
[ "Zijian Dong", "Yilei Wu", "Yu Xiao", "Joanna Su Xian Chong", "Yueming Jin", "Juan Helen Zhou" ]
q-bio.NC
[ "q-bio.NC", "cs.LG", "eess.IV" ]
Beyond the Snapshot: Brain Tokenized Graph Transformer for Longitudinal Brain Functional Connectome Embedding
Zijian Dong^1,2, Yilei Wu^1, Yu Xiao^1, Joanna Su Xian Chong^1, Yueming Jin^4,2, Juan Helen Zhou^1,2,3 (corresponding author: helen.zhou@nus.edu.sg)
^1 Centre for Sleep and Cognition & Centre for Translational Magnetic Resonance Research, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
^2 Department of Electrical and Computer Engineering, National University of Singapore, Singapore
^3 Integrative Sciences and Engineering Programme (ISEP), NUS Graduate School, National University of Singapore, Singapore
^4 Department of Biomedical Engineering, National University of Singapore, Singapore
August 1, 2023
Under the framework of network-based neurodegeneration, brain functional connectome (FC)-based Graph Neural Networks (GNN) have emerged as a valuable tool for the diagnosis and prognosis of neurodegenerative diseases such as Alzheimer's disease (AD). However, these models are tailored for brain FC at a single time point instead of characterizing the FC trajectory. Discerning how FC evolves with disease progression, particularly at the predementia stages such as cognitively normal individuals with amyloid deposition or individuals with mild cognitive impairment (MCI), is crucial for delineating disease spreading patterns and developing effective strategies to slow down or even halt disease advancement. In this work, we propose the first interpretable framework for brain FC trajectory embedding with application to neurodegenerative disease diagnosis and prognosis, namely Brain Tokenized Graph Transformer (Brain TokenGT). It consists of two modules: 1) Graph Invariant and Variant Embedding (GIVE) for the generation of node and spatio-temporal edge embeddings, which are tokenized for downstream processing; 2) Brain Informed Graph Transformer Readout (BIGTR), which augments these tokens with trainable type identifiers and non-trainable node identifiers and feeds them into a standard transformer encoder for readout. We conducted extensive experiments on two public longitudinal fMRI datasets of the AD continuum for three tasks, including differentiating MCI from controls, predicting dementia conversion in MCI, and classification of amyloid positive or negative cognitively normal individuals. Based on the brain FC trajectory, the proposed Brain TokenGT approach outperformed all the other benchmark models and at the same time provided excellent interpretability. § INTRODUCTION The brain functional connectome (FC) is a graph with brain regions of interest (ROIs) represented as nodes and pairwise correlations of fMRI time series between the ROIs represented as edges. FC has been shown to be a promising biomarker for the early diagnosis and tracking of neurodegenerative disease progression (e.g., Alzheimer's Disease (AD)) because of its ability to capture disease-related alterations in brain functional organization <cit.>. Recently, graph neural networks (GNN) have become the model of choice for processing graph-structured data, with state-of-the-art performance in different tasks <cit.>. With regard to FC, GNNs have also shown promising results in disease diagnosis <cit.>. However, such studies have only focused on FC at a single time point.
For neurodegenerative diseases like AD, it is crucial to investigate longitudinal FC changes <cit.>, including graph topology and attributes, in order to slow down or even halt disease advancement. Node features are commonly utilized in FC to extract important information. It is also essential to recognize the significance of edge features in FC, which are highly informative in characterizing the interdependencies between ROIs. Furthermore, node embeddings obtained from GNN manipulation contain essential information that should be effectively leveraged. Current GNNs feasible to graphs with multiple time points <cit.> are suboptimal to FC trajectory, as they fail to incorporate brain edge feature embeddings and/or they rely on conventional operation (e.g., global pooling for readout) which introduces inductive bias and is incapable of extracting sufficient information from the node embeddings <cit.>. Moreover, these models lack built-in interpretability, which is crucial for clinical applications. And they are unsuitable for small-scale datasets which are common in fMRI research. The longitudinal data with multiple time points of the AD continuum is even more scarce due to the difficulty in data acquisition. In this work, we proposed Brain Tokenized Graph Transformer (Brain TokenGT), the first framework to achieve FC trajectory embeddings with built-in interpretability, shown in Fig. <ref>. Our contributions are as follows: 1) Drawing on the distinctive characteristics of FC trajectories, we developed Graph Invariant and Variant Embedding (GIVE), which is capable of generating embeddings for both nodes and spatio-temporal edges; 2) Treating embeddings from GIVE as tokens, Brain Informed Graph Transformer Readout (BIGTR) augments tokens with trainable type identifiers and non-trainable node identifiers and feeds them into a standard transformer encoder to readout instead of global pooling, further extracting information from tokens and alleviating over-fitting issue by token-level task; 3) We conducted extensive experiments on two public resting state fMRI datasets (ADNI, OASIS) with three different tasks (Healthy Control (HC) vs. Mild Cognition Impairment (MCI) classification, AD conversion prediction and Amyloid positive vs. negative classification). Our model showed superior results with FC trajectory as input, accompanied by node and edge level interpretations. § METHOD §.§ Problem Definition The input of one subject to the proposed framework is a sequence of brain networks 𝒢=[G_1,G_2,...,G_t,...,G_T] with T time points. Each network is a graph G=(V,E,A), with the node set V={v_i}_i=1^M, the edge set E = V × V, and the weighted adjacency matrix A∈ℝ^M × M describing the degrees of FC between ROIs. The output of the model is an individual-level categorical diagnosis ŷ_s for each subject s. §.§ Graph Invariant and Variant Embedding (GIVE) Regarding graph topology, one of the unique characteristics of FC across a trajectory is that it has invariant number and sequence of nodes (ROIs), with variant connections between different ROIs. Here, we designed GIVE, which consists of Invariant Node Embedding (INE) and Variant Edge Embedding (VEE). §.§.§ Invariant Node Embedding (INE). To obtain node embeddings that capture the spatial and temporal information of the FC trajectory, we utilized evolving graph convolution <cit.> for the K-hop neighbourhood around each node which could be seen as a fully dynamic graph, providing a novel "zoom in" perspective to see FC. 
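To make this setup concrete, the toy sketch below builds a subject's FC trajectory from ROI time series and extracts the K-hop neighbourhood of a single ROI at every time point. It only illustrates the data structures involved: the synthetic time series, the correlation threshold, and the choice of K are our assumptions for the example, not settings used by the authors.

```python
import numpy as np
import networkx as nx

def fc_matrix(ts):
    """Weighted adjacency A_t: pairwise correlations of ROI time series (ROIs x frames)."""
    a = np.corrcoef(ts)
    np.fill_diagonal(a, 0.0)                      # no self-loops
    return a

def k_hop_neighbourhood(adj, center, k=2, thr=0.2):
    """g_it: the K-hop neighbourhood of one ROI in a thresholded FC graph."""
    g = nx.from_numpy_array((np.abs(adj) > thr).astype(float))
    reachable = nx.single_source_shortest_path_length(g, center, cutoff=k)
    return g.subgraph(reachable.keys()).copy()

# Toy trajectory: T = 3 sessions, M = 90 ROIs, 120 fMRI frames per session.
rng = np.random.default_rng(0)
M, T = 90, 3
trajectory = [fc_matrix(rng.standard_normal((M, 120))) for _ in range(T)]

# Dynamic neighbourhood graph of ROI 0, i.e. [g_01, g_02, g_03].
dyn_graph = [k_hop_neighbourhood(A, center=0) for A in trajectory]
print([g.number_of_nodes() for g in dyn_graph])
```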
As suggested in <cit.>, with informative node features, we chose to treat parameters in graph convolutional layers as hidden states of the dynamic system and used a gated recurrent unit (GRU) to update the hidden states. Formally, for each node v_i in V, we define a dynamic neighbourhood graph as 𝒢_i=[g_i1,g_i2,..,g_it,...,g_iT] (Fig. <ref>), in which g_it is the K-hop neighbourhood of node v_i at time point t, with adjacency matrix A_it. At time t, for dynamic neighbourhood graph 𝒢_i, l-th layer of evolving graph convolution first updates parameter matrix W^l_i(t-1) from the last time point to W^l_it with GRU, then the node embeddings H^l_it are updated to H^l+1_it for next layer using graph convolution network (GCN) <cit.>: W^l_it=GRU(H^l_it,W^l_i(t-1)); H^l+1_it=GCN(A_it,H^l_it,W^l_it) §.§.§ Variant Edge Embedding (VEE). For tasks such as graph classification, an appropriate representation of edges also plays a key role in the successful graph representation learning. To achieve edge embeddings, we first integrated graphs from multiple time points by defining Spatial Edge and Temporal Edge, and then obtained spatial and temporal edge embeddings by transforming an FC trajectory to the dual hypergraph. For each FC trajectory, we should not only investigate the edges between different ROIs in one static FC (i.e., spatial domain) but also capture the longitudinal change across different time points (i.e., time domain). Instead of focusing only on intrinsic connections (i.e., spatial edges (e_s)) between different ROIs in each FC, for each of the two consecutive graphs G_t and G_t+1, we added M temporal edges (e_t) to connect corresponding nodes in G_t and G_t+1, with weights initialized as 1. The attached features to spatial and temporal edges were both initialized by the concatenation of node features from both ends and their initial weights. Accordingly, one trajectory would be treated as a single graph for downstream edge embedding. We denote the giant graph with T time points contained as G^T, with weighted adjacency matrix A^T∈ℝ^TM × TM (Fig. <ref>). G^T was first transformed into the dual hypergraph G^T* by Dual Hypergraph Transformation (DHT) <cit.>, where the role of nodes and edges in G^T was exchanged while their information was preserved. DHT is accomplished by transposing the incidence matrix of the graph to the new incidence matrix of the dual graph, which is formally defined as: G^T=(X,M,E) ↦ G^T*=(E,M^T,X), where X∈ℝ^M × D is the original node features matrix with a D dimensional feature vector for each node, M∈ℝ^|E| × M is the original incidence matrix, and E∈ℝ^|E| × (2D+1) is the initialized edge features matrix. We then performed hypergraph convolution <cit.> to achieve node embeddings in G^T*, which were the corresponding edge embeddings in G^T. The hypergraph convolution at l^th layer is defined by: E^(l+1)=D^-1M^TW^*B^-1ME^(l)Θ where W^* is the diagonal hyperedge weight matrix, D and B are the degree matrices of the nodes and hyperedges respectively, and Θ is the parameters matrix. Interpretability is important in decision-critical areas (e.g., disorder analysis). Thanks to the design of spatio-temporal edges, we could achieve built-in binary level interpretability (i.e., both nodes and edges contributing most to the given task, from e_t and e_s, respectively) by leveraging HyperDrop <cit.>. 
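Before turning to the HyperDrop step, the snippet below sketches one possible reading of Eq. (1) in the INE module: the GCN weight matrix is treated as the hidden state of a GRU and rolled forward in time before each convolution. The mean-pooled summary used as the GRU input, the row-normalized adjacency, and all dimensions are our assumptions; the paper builds on EvolveGCN, whose exact variant is not reproduced here.

```python
import torch
import torch.nn as nn

class EvolvingGCNLayer(nn.Module):
    """Sketch of Eq. (1): W^l_t = GRU(H^l_t, W^l_{t-1}); H^{l+1}_t = GCN(A_t, H^l_t, W^l_t)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W0 = nn.Parameter(0.1 * torch.randn(d_in, d_out))          # weights at t = 0
        self.gru = nn.GRUCell(input_size=d_in, hidden_size=d_in)

    def forward(self, A, H, W_prev):
        # Summarize the node embeddings into one d_in vector per output channel so the
        # GRU "input" matches the hidden-state shape (our simplification; EvolveGCN-H
        # uses a top-k summary instead of a mean).
        summary = H.mean(dim=0, keepdim=True).expand(W_prev.shape[1], -1)
        W_t = self.gru(summary, W_prev.t()).t()                          # evolved weights
        A_hat = A + torch.eye(A.shape[0])                                # add self-loops
        A_norm = A_hat / A_hat.sum(dim=1, keepdim=True).clamp(min=1e-8)  # row-normalize
        return torch.relu(A_norm @ H @ W_t), W_t                         # GCN propagation

# Rolling one layer over a toy dynamic neighbourhood graph with T = 3 snapshots.
layer = EvolvingGCNLayer(d_in=8, d_out=8)
H, W = torch.randn(20, 8), layer.W0
for A in [torch.rand(20, 20) for _ in range(3)]:
    H, W = layer(A, H, W)
```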
The HyperDrop procedure is defined as follows: idx=TopE(score(E)); E^pool=E_idx; (M^pool)^T=(M^T_idx) where 'score' function is hypergraph convolution layers used to compute scores for each hypergraph node (e_s or e_t in the original graph). 'TopE' selects the nodes with the highest E scores (note: ranking was performed for nodes from e_s and e_t separately, and HyperDrop was only applied to nodes from e_s with hyperparameter E), and idx is the node-wise indexing vector. Finally, the salient nodes (from e_t) and edges (from e_s) were determined by ranking the scores averaged across the trajectory. §.§ Brain Informed Graph Transformer Readout (BIGTR) Proper readout for the embeddings from GNN manipulation is essential to produce meaningful prediction outcome for assisting diganosis and prognosis. The vanilla ways are feeding the Node Embeddings, and Spatial and Temporal Edge Embeddings generated from the GIVE module into pooling and fully connected layers. However, this would result in a substantial loss of spatial and temporal information <cit.>, especially under the complex settings of three types of spatial/temporal embeddings. Recently, it has been shown, both in theory and practice, that a standard transformer with appropriate token embeddings yields a powerful graph learner <cit.>. Here, treating embeddings output from GIVE as tokens, we leveraged graph transformer as a trainable readout function, named as Brain Informed Graph Transformer Readout (BIGTR) (Fig. <ref>). We first define the Type Identifier (TI) and Node Identifier (NI) under the setting of FC trajectory. Trainable TI encodes whether a token is a node, spatial edge or temporal edge. They are defined as a parameter matrix [P_v;P_e_s;P_e_t] ∈ℝ^3 × d_p, where P_v, P_e_s and P_e_t are node, spatial edge and temporal edge identifier respectively. Specifically, we maintained a dictionary, in which the keys are types of the tokens, the values are learnable embeddings that encodes the corresponding token types. It facilitates the model's learning of type-specific attributes in tokens, compelling attention heads to focus on disease-related token disparities, thereby alleviating overfitting caused by non-disease-related attributes. Besides, it inflates 1 G^T for an individual-level task to thousands of tokens, which could also alleviate overfitting in the perspective of small-scale datasets. Non-trainable NI are MT node-wise orthonormal vectors Q∈ℝ^MT × d_q for an FC trajectory with T time points and M nodes at each time. Then, the augmented token features become: z_v =[x_v,P_v,Q_v,Q_v] z_(u,v) =[x_(u,v),P_e_s,Q_u,Q_v] z_(v,v') =[x_(v,v'),P_e_t,Q_v,Q_v'] for v, e_s and e_t respectively, where node u is a neighbour to node v in the spatial domain and node v' is a neighbour to node v in the temporal domain, and x is the original token from GIVE. Thus, the augmented token features matrix is Z∈ℝ^(MT+|E|T+M(T-1)) × (h+d_p+2d_q), where h is the hidden dimension of embeddings from GIVE. Z would be further projected by a trainable matrix ω∈ℝ^(h+d_p+2d_q) × h'. As we targeted individual-level (i.e., G^T) diagnosis/prognosis, a graph token X_[graph]∈ℝ^h' was appended as well. Thus, the input to transformer is formally defined as : Z^in=[X_[graph];Zω] ∈ℝ^(1+MT+|E|T+M(T-1)) × h' § EXPERIMENTS §.§.§ Datasets and Experimental settings. We used brain FC metrics derived from ADNI <cit.> and OASIS-3 <cit.> resting state fMRI datasets, following preprocessing pipelines <cit.>. 
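Before describing the evaluation setup, the following sketch illustrates the token augmentation and transformer readout of BIGTR described above. It is a simplified, assumed implementation: the hidden sizes, the two-layer encoder, and the QR-based construction of the orthonormal node identifiers are our choices for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class BIGTRReadout(nn.Module):
    """Tokens = [embedding | trainable type id | two frozen node ids], projected and
    read out through a standard transformer encoder via a prepended [graph] token."""
    def __init__(self, h, d_p, d_q, n_nodes, h_prime=64, n_classes=2):
        super().__init__()
        assert d_q >= n_nodes, "orthonormal node identifiers need d_q >= #node slots"
        self.type_id = nn.Parameter(torch.randn(3, d_p))   # node / spatial edge / temporal edge
        q = torch.linalg.qr(torch.randn(d_q, d_q))[0][:n_nodes]
        self.register_buffer("node_id", q)                 # non-trainable, rows orthonormal
        self.proj = nn.Linear(h + d_p + 2 * d_q, h_prime)
        self.graph_token = nn.Parameter(torch.randn(h_prime))
        enc = nn.TransformerEncoderLayer(d_model=h_prime, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=2)
        self.head = nn.Linear(h_prime, n_classes)

    def forward(self, feats, token_type, id_u, id_v):
        # feats: (n_tokens, h); token_type in {0,1,2}; id_u, id_v: node-slot indices
        # (for node tokens id_u == id_v, mirroring z_v = [x_v, P_v, Q_v, Q_v]).
        z = torch.cat([feats, self.type_id[token_type],
                       self.node_id[id_u], self.node_id[id_v]], dim=-1)
        z = torch.cat([self.graph_token.unsqueeze(0), self.proj(z)], dim=0)
        out = self.encoder(z.unsqueeze(0)).squeeze(0)       # a single subject per batch
        return self.head(out[0])                            # prediction from the [graph] token

# Toy call: 50 tokens from GIVE with h = 16 and 30 node slots (M*T in the paper).
m = BIGTRReadout(h=16, d_p=8, d_q=32, n_nodes=30)
logits = m(torch.randn(50, 16), torch.randint(0, 3, (50,)),
           torch.randint(0, 30, (50,)), torch.randint(0, 30, (50,)))
```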
Our framework was evaluated on three classification tasks related to diagnosis or prognosis: 1) HC vs. MCI classification (ADNI: 65 HC & 60 MCI), 2) AD conversion prediction (OASIS-3: 31 MCI non-converters & 29 MCI converters), and 3) differentiating cognitively normal individuals with amyloid positivity vs. those with amyloid negativity (OASIS-3: 41 HC aβ+ve & 50 HC aβ-ve). All subjects have 2-3 time points of fMRI data and those with two time points were zero-padded to three time points. FC was built based on the AAL brain atlas with 90 ROIs <cit.>. The model was trained using Binary Cross-Entropy Loss in an end-to-end fashion. Implementation details could be found in supplementary materials. The code is available at <https://github.com/ZijianD/Brain-TokenGT.git> §.§.§ Results. AUC and accuracy are presented in Table <ref>. (Recall and Precision could be found in supplementary materials). Brain TokenGT and its ablations were compared with three types of baseline models, including 1) shallow machine learning: MK-SVM, RF and MLP; 2) one time point feasible deep learning: three representative deep graph models GCN <cit.>, GAT <cit.> and PNA <cit.>, and four state-of-the-art deep models specifically designed for FC: BrainnetCNN <cit.>, BrainGNN <cit.>, IBGNN+ <cit.> and BrainnetTF <cit.>; 3) multiple time points feasible deep learning: Onionnet <cit.>, STGCN <cit.> and EvolveGCN <cit.>. To ensure a fair comparison between models, the one-dimensional vectors flattened from FC in all time points were concatenated and used as input for the shallow learning model. For the one time point feasible deep learning models, a prediction value was generated at each time point and subsequently averaged to obtain an individual-level prediction. The experimental results (Table <ref>) demonstrate that the Brain TokenGT significantly outperformed all three types of baseline by a large margin. The ablation study further revealed that GIVE w/o e_t w/ GP outperformed EvolveGCN by adding VEE without e_t, which empirically validates the importance of edge feature embeddings in FC. The performance could be further improved by incorporating e_t, suggesting the efficiency of our GIVE design with spatio-temporal edges. Interestingly, BIGTR itself (i.e., the original features were directly input to BIGTR without GIVE) showed competitive performance with STGCN. Replacing GP with transformer (Ours w/o I) led to improved performance even without identifiers, indicating that the embeddings from GIVE may already capture some spatial and temporal information from the FC trajectory. The addition of identifiers further improved performance, possibly because the token-level self-supervised learning could alleviate the over-fitting issue and node identifiers could maintain the localized information effectively. §.§.§ Interpretation. Fig. <ref> shows the top 5 salient edges and nodes retained by HyperDrop for each of the three tasks. Consistent with previous literature on brain network breakdown in the early stage of AD <cit.>, parahippocampal, orbitofrontal and temporal regions and their connections contributed highly to all three tasks, underscoring their critical roles in AD-specific network dysfunction relevant to disease progression. On the other hand, superior frontal region additionally contributed to the amyloid positive vs. negative classification, which is in line with previous studies in amyloid deposition <cit.>. 
§ CONCLUSION This study proposes Brain Tokenized Graph Transformer (Brain TokenGT), the first interpretable framework for the embedding of FC trajectories, which can be applied to the diagnosis and prognosis of neurodegenerative diseases with small-scale datasets. Based on longitudinal brain FC, experimental results showed superior performance of our framework, with built-in interpretability consistent with AD-specific brain network neurodegeneration. A potential avenue for future research stemming from this study involves enhancing the "temporal resolution" of the model. This may entail, for example, incorporating an estimation of uncertainty in both diagnosis and prognosis, accounting for disease progression, and offering time-specific node and edge level interpretation. § ACKNOWLEDGEMENT This work was supported by the National Medical Research Council, Singapore (NMRC/OFLCG19May-0035 to J-H Zhou) and Yong Loo Lin School of Medicine Research Core Funding (to J-H Zhou), National University of Singapore, Singapore. Yueming Jin was supported by an MoE Tier 1 Start-up grant (WBS: A-8001267-00-00).
http://arxiv.org/abs/2307.02250v1
20230705124507
Stress-testing Road Networks and Access to Medical Care
[ "Hannah Schuster", "Axel Polleres", "Johannes Wachs" ]
cs.SI
[ "cs.SI", "physics.soc-ph" ]
Stress-testing Road Networks and Access to Medical Care
Hannah Schuster^1,2 (schuster@csh.ac.at), Axel Polleres^1,2, Johannes Wachs^3,4,1 (johannes.wachs@uni-corvinus.hu)
^1 Complexity Science Hub Vienna, Josefstädterstraße, AT-1080 Vienna, Austria
^2 Vienna University of Economics and Business, Welthandelsplatz, AT-1020 Vienna, Austria
^3 Corvinus University of Budapest, Fővám Tér 8, 1093 Budapest, Hungary
^4 Centre for Economic and Regional Studies, Eötvös Loránd Research Network, Tóth Kálmán u. 4., 1097 Budapest, Hungary
August 1, 2023
The authors acknowledge support from an Austrian Applied Research Agency (FFG) grant: CRISP (FFG No 887554).
This research studies how populations depend on road networks for access to health care during crises or natural disasters. So far, most researchers have studied the accessibility of the whole network or the cost of network disruptions in general, rather than the accessibility of specific priority destinations like hospitals. Even short delays in accessing healthcare can have significant adverse consequences. We carry out a comprehensive stress test of the entire Austrian road network from this perspective. We simplify the whole network into one consisting of what we call accessibility corridors, deleting single corridors to evaluate the change in accessibility of populations to healthcare. The data created by our stress test was used to generate an importance ranking of the corridors. The findings suggest that certain road segments and corridors are orders of magnitude more important in terms of access to hospitals than the typical one. Our method also highlights vulnerable municipalities and hospitals that may experience demand surges as populations are cut off from their usual nearest hospitals. Even though the skewed importance of some corridors highlights vulnerabilities, it also provides policymakers with a clear agenda. Keywords: Road networks, Health Care, Stress-test, Simulation § INTRODUCTION Our dependence on road networks to access emergency medical care increases in two important ways during crises. Crisis events, such as natural disasters, can create increased demand for access to medical services by causing injuries directly. Additionally, such events can also disrupt the functionality of these networks themselves, increasing the time it takes to get to a hospital. In acute cases, we know that delays can cause markedly worse medical outcomes for patients <cit.>. As climate change increases the frequency of severe weather events <cit.>, which can extensively disrupt road networks, we need to better understand not only the abstract resilience of infrastructure such as road networks <cit.> but also the specific weaknesses of these systems in terms of access to medical care. Indeed, we can expect such weaknesses in road networks: especially in geographically rugged terrain, transportation infrastructure is expensive and highly constrained by physical barriers <cit.>. Road networks are rightfully built with cost efficiency as a priority alongside robustness to periodic maintenance and disturbances.
At the same time, these growing networks, like other complex systems, are known to be highly vulnerable to unanticipated shocks <cit.>. Much like the banking sector, in which an unexpected financial insolvency can cause cascades of bankruptcies <cit.>, local problems in road networks impact transportation through the whole system <cit.>. Hospitals also face critical “tipping-points” - above a certain capacity they deliver significantly worse care <cit.>. Likewise, macro-scale medical care systems can also break down in the face of unexpected shocks <cit.>. Policymakers in all three domains: finance, transport infrastructure, and medical care are increasingly turning to stress tests to analyze their systems and pinpoint weaknesses. Yet to date, little work has been done on how stresses and problems in one system can impact provision of services in another. Whatever the cause of a disruption, the complexity of these networks makes it difficult to predict the effect of one disruption on the functioning of the whole system. Given the significant potential coupling of risks in road transportation networks and access to and provision of medical care, we propose to develop a suitable stress test to examine how road network disruptions impact access to medical care. The aim of such a stress test is to highlight critical road segments or corridors that provide access to medical care, population centers at risk of being cut off from care, and hospitals that may see sudden surges in demand during crises. We implement this stress test by applying simulation analysis to data on road and healthcare infrastructure in Austria. Simulation analysis has proven an effective tool in modeling relative risks and the importance of components of complex systems  <cit.>. Elements of these systems highlighted by stress tests are natural candidates for resources and attention from planners. Our stress test of road networks and their provision of access to medical care presents three novel aspects. First, we develop a measure to quantify access from population centers to medical care. Most quantitative work on the resilience of transportation systems to date focuses on the impact of disruptions by determining the cost of disruption or by measuring the change of accessibility of the whole system during specific scenarios. However, during a disaster, changes in global accessibility or costs may be of minor concern compared to the specifics of which roads are used in the provision of essential services like healthcare or fire protection. We know that small differences in travel time to emergency care can have a significant impact on mortality and other patient outcomes <cit.>. To this end, we modify an existing measure of accessibility in road networks <cit.> in order to classify the importance of links in a network relating to the accessibility of municipalities to the closest hospitals. A second challenge in stress testing the resilience of road networks at the scale of a whole country is their size, which can make an exhaustive calculation and comparison of outages and their consequences intractable. We, therefore, introduce and stress test a coarse-grained simplification of the road system: we merge road segments connecting municipalities to create a backbone representation of the Austrian road network. We can stress test this network more extensively and show that derived insights can be transferred to the more realistic fine-grained system. 
A third contribution of our approach is that we quantify the impact of our stress tests along three orthogonal dimensions. We measure how road network disruptions limit people's access to hospitals, suggesting vulnerability of population centers. We quantify road importance by observing the effects of their disruption. Finally, we measure the vulnerability of hospitals to sudden surges in the population they are the first point of care for. Thus our framework provides multi-level insight. We note that our approach can be applied and generalised to both other countries (depending on data availability) or to the provision of other services in crisis situations (for instance, firefighting facilities). In the remainder of this paper, we first review the related literature on road network resilience and access to emergency medical care (<Ref>). We then introduce the case of the Austrian road network and relevant datasets, and describe the methods and measures used to study (<Ref>). We present and interpret the results of our stress tests in <Ref>. Finally, we conclude by discussing our method, including its limitations and avenues for future work in <Ref>. § LITERATURE REVIEW During crisis events impacting entire regions, the accessibility of medical care is crucial, given its potential to influence patient outcomes. Indeed, there is ample evidence of a direct effect of the travel distance to a hospital and the mortality of patients. A study using a national database in Japan concluded that the ambulance distance to hospitals significantly correlates with macro-regional mortality risks for particula acute diseases such as acute myocardial infarction and brain infarction <cit.>. Consequently, also Planned road closures and infrastructure disruptions result in worse mortality outcomes: for instance, previous work found a sharp increase in acute myocardial infarction or cardiac arrest hospitalizations among Medicare beneficiaries in 11 U.S. cities during major marathons <cit.>. In summary, the accessibility of emergency medical care depends crucially on road networks, which are also especially vulnerable to environmental perturbations such as extreme weather events <cit.>. Therefore, as numerous studies have shown that events like heatwaves, heavy rainfall, droughts, and tropical weather cyclones have become more frequent and intense globally since the 1950s  <cit.>, the vulnerability of road networks is likely to increase. In addition, the problem of transportation networks and accessibility is especially salient in geographic areas with rugged terrain <cit.> (as we face it for instance in alpine regions in Austria), since such conditions limit possible cost-efficient redundancies that would make such networks more robust. More generally, growing systems like road networks are known to be vulnerable to unanticipated shocks <cit.>. Indeed, there is a whole literature analyzing the resilience of networks that tend to function well in “normal” times but can fail catastrophically during unexpected disruptions. Researchers have begun to stress test these systems to analyze their weak points, where simulation analysis has demonstrated its efficacy in modeling relative risks and the importance of components of complex systems <cit.>. Particular applications of these methods include financial markets <cit.>, food suppliers <cit.>, regional economies <cit.>, ride-sharing systems <cit.>, and software systems <cit.>. 
Here, the resilience of a network is generally determined by monitoring the response of systems to the cumulative elimination of sections according to random order, deterministic order of criticality, and deterministic order in areas at high risk <cit.>. Insights gained through stress tests can be used to help guide planning resources and areas of attention, in order to improve the robustness of diverse networks while their functionality remains unchanged <cit.>. Methodological approaches to measuring resilience of road networks, in particular, vary from quantifying the travel cost of a disruption <cit.> to quantifying the risk to the overall network <cit.>. The healthcare system has specifically been studied from this perspective, especially since the Covid-19 Pandemic: the additional stress of the pandemic shed light on the various problems of healthcare systems. Stress tests of hospital networks and networks of doctors have demonstrated that macro-scale medical care systems can also breakdown in the face of unexpected shocks <cit.>, manifesting, for instance, in a sudden surge in patients which may overwhelm individual hospitals <cit.>. Despite the apparent interest, little work has been done to understand how the infrastructure that provides access to care is vulnerable to shocks. Therefore in the present paper, we concentrate on how road closures change patient flows and volumes to hospitals. While previous work has studied how transport infrastructure ensures the provision of essential goods to communities <cit.>, to the best of our knowledge, access to healthcare has not been considered from this perspective thus far. § DATA AND METHODS We now outline the data and methods we will use to stress test the Austrian road system. Our aim is to evaluate the importance of specific parts of the network in terms of the population's access to healthcare at hospitals. We first describe how we create an abstracted network representation of the road system. We call the edges or links in this resulting network corridors and define measures of corridor importance. We then outline the methodology of two kinds of stress tests we will apply to this network. Finally, we describe how to measure the impact of these tests on hospitals. §.§ Constructing a network of corridors There are many possible ways to represent a nationwide transport network. Our goal is to create a representation of the Austrian road network that is simplified enough so that extensive stress tests are computationally feasible and still fine-grained enough to capture important details. We begin with data from GIP (Graphenintegrations-Plattform) [https://www.gip.gv.at/] - an extensive open data source of Austrian transportation infrastructure segments, from hiking trails to highways and railroads. As we are interested in emergency response, we focus on roads that can be accessed via automobile. We first create a network of all roads, in which nodes are intersections of road segments and edges are roads. This is a very fine-grained representation of the system: with close to 1.5 million links and about 1.3 million nodes. Obviously, such a fine-grained representation presents a computational problem for network analysis and simulation: as we aim to simulate the removal of road segments and measure the impact on shortest paths to hospitals many times over, we therefore derive a coarse-grained abstraction of this network to keep shortest path calculations tractable while maintaining its core features. 
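A minimal sketch of this coarse-graining step is shown below; the derived corridor network itself is described next. We assume each road segment has already been mapped to the municipalities of its two endpoints; the input format, the attribute names, and the choice to carry the shortest parallel road as the corridor length are illustrative assumptions rather than details of the actual pipeline.

```python
import networkx as nx

def build_corridor_network(road_segments, hospital_municipalities):
    """Collapse individual road segments into municipality-level accessibility corridors."""
    corridors = nx.Graph()
    for a, b, length_km in road_segments:
        if a == b:
            continue                                    # skip intra-municipality roads
        if corridors.has_edge(a, b):
            corridors[a][b]["n_roads"] += 1             # count bundled real-world roads
            corridors[a][b]["length"] = min(corridors[a][b]["length"], length_km)
        else:
            corridors.add_edge(a, b, n_roads=1, length=length_km)
    for m in corridors.nodes:
        corridors.nodes[m]["hospital"] = m in hospital_municipalities
    return corridors

# Toy example with four municipalities and a hospital in "C".
segments = [("A", "B", 4.0), ("A", "B", 6.5), ("B", "C", 3.0),
            ("C", "D", 8.0), ("A", "C", 12.0)]
G = build_corridor_network(segments, hospital_municipalities={"C"})
print(G["A"]["B"])    # {'n_roads': 2, 'length': 4.0}
```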
In the derived network nodes are municipalities connected by an edge if there is a road segment ending in both municipalities. In other words: two municipalities are connected in the network if there is a direct path between them. We call these edges accessibility corridors or corridors for short, as they represent an abstraction of road connections between municipalities. We also record how many real-world roads between the municipalities are combined in a single corridor. This information is later used in the analysis section to emphasize the importance of the corridor in a real-world context. The resulting network representation of Austria is visualized in Figure <ref>, with municipalities hosting a hospital highlighted. §.§ Measures of corridor importance Given our network representation of the Austrian road transportation network, we would like to quantify the importance of specific access corridors. The literature presents several ways to measure the importance of corridors and the impact of their closure on the movement of people, in general  <cit.>. Our research introduces a new method by taking the accessibility of critical infrastructure into account when measuring importance. Specifically, we observe the impact of corridor closures on the accessibility of a municipality to its closest hospital. Whether a corridor's closure causes a population to take a longer, indirect path to a hospital or forces them to go to a different hospital, we infer corridor importance from increases in travel times upon their removal weighted by the impacted population numbers. The changes in the accessibility measurement are used to implement a ranking of corridors, henceforth referred to as the acis. The acis can be used to assess the impact of the initial stress test on the accessibility corridor network by estimating how a deletion impacts the distance to the closest hospital weighted by the impacted people. To determine the acis, we start by calculating the integral of the cumulative population with respect to the distance for the baseline case and the stress-tested situation, using the trapezoid rule. Subsequently, the acis can be determined by computing the difference between the baseline integral and the stressed integral, which can be expressed by the following formula: ACIS = ∫_0^dist_max P_base(x) dx - ∫_0^dist_max P_stress(x) dx, where dist_max stands for the maximum distance between a hospital and a municipality in the original situation and the population functions P_base(x) and P_stress(x) characterize how many people have a hospital reachable within x km. Alternative Measures As an alternative to the acis we also calculated a measure based on <cit.>, where the authors introduce a measurement of a municipality's access to a full network of destinations based on the population distribution and the minimal distance to each node in the network. In our case, we modify this measure by switching the target variable to its closest hospital instead of all the other municipalities in the network given our focus on access to healthcare. The following equation represents our version of the accessibility measure, which we call the Hospital Accessibility of a municipality (HA_m): HA_m = max_h ∈ H(P_m/d(m,h)), where we measure the accessibility HA_m of a municipality m to the closest hospital by finding the maximum of the ratio of the population of the municipality P_m divided by its distance d to hospitals h in Austria. 
The municipality's population is included to give greater weight to those municipalities with more people because the probability of someone needing a hospital increases with increasing population. To calculate an overall, aggregated accessibility measure for the entire country, which we call its Hospital Accessibility HA, we use the following formula: HA = ∑_m ∈ M\ H(HA_m * P_m)/P_M\ H, where we calculate the sum of HA_m over all municipalities m in Austria, except for municipalities with a hospital, and then normalize each summand by a population factor P_m/P_M\ H, which takes the population P_M\ H of all Austrian municipalities without a hospital into account. This measure captures the overall efficiency of the corridor network in terms of how well it gets people from population centers to hospitals. To rank the importance of the different corridors, we calculated the difference between the baseline accessibility score and the overall accessibility after stress testing the network. Specifically, if we remove corridor k, we define its impact HA(k) as follows: HA(k) = HA_baseline - HA_\ k/HA_baseline *100, where HA_baseline is the accessibility in the original situation and HA_\ k is the overall hospital accessibility after accessibility corridor k was deleted from the network. As a third alternative besides the acis and HA measures of corridor importance, we also considered a popular way of ranking edges in networks called edge betweenness centrality. In our context, this defines the importance of a corridor as follows: c_B (corridor e)= ∑_s ∈ M,t ∈ H; d(s,t)≤ 100kmσ(s,t|e)/σ(s,t), where the betweenness centrality c_B of a corridor e is the sum of fractions of all shortest paths between a municipality s ∈ M and a hospital t ∈ H that use the corridor e, divided by the number of all shortest paths between them (denoted by σ(s,t)). In plain words, this measures calculates how often corridors appear on the shortest paths between all pairs of municipalities and hospitals in the country at most 100km apart from one another. §.§ Stress testing corridor networks In an effort to establish a ranking of the accessibility corridors based on their importance to hospital accessibility, we conducted two distinct stress tests of the Austrian accessibility corridor network. The first kind of test tracks the reaction of the system to the deletion of a single accessibility corridor. Specifically, we remove one link from the network and calculate the accessibility of each municipality to its closest hospital post-deletion. The ranking of the corridors was based on the resulting acis for each deletion: the higher the acis score a corridor receives, the higher its ranking. We show a concrete example of such a corridor deletion in Figure <ref>. On the left, we color municipalities by how long they must travel to reach a hospital when the network is functioning undisturbed. Following the removal of the focal corridor, visualized on the right, people in several municipalities must travel significantly farther to reach healthcare. This would be reflected in a large acis score for this corridor. While the results of the first stress test serve as a fine approximation for the topological importance of corridors, real world events often impact roads across wider geographic areas. For instance a weather event like a snow storm could impact roads across entire regions. Even when a single corridor or road may be closed, resulting congestion may cause significant delays for travelers on nearby alternatives. 
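To make the first stress test concrete, the sketch below removes one corridor at a time and scores it by the difference between the baseline and stressed cumulative-population curves, integrated with the trapezoid rule as in the acis defined above. The toy graph, the population figures, and the evaluation grid up to 100 km are illustrative assumptions, not the study's data.

```python
import numpy as np
import networkx as nx

def dist_to_nearest_hospital(G, weight="length"):
    """Shortest-path distance from every municipality to its closest hospital."""
    hospitals = [n for n, d in G.nodes(data=True) if d.get("hospital")]
    return nx.multi_source_dijkstra_path_length(G, hospitals, weight=weight)

def cumulative_population(dist, population, grid):
    """P(x): how many people have a hospital reachable within x km."""
    return np.array([sum(population[m] for m, d in dist.items() if d <= x)
                     for x in grid])

def acis(G, corridor, population, grid=np.linspace(0.0, 100.0, 201)):
    """Single-corridor stress test: ACIS = integral(P_base) - integral(P_stress)."""
    base = cumulative_population(dist_to_nearest_hospital(G), population, grid)
    H = G.copy()
    H.remove_edge(*corridor)                # municipalities cut off simply drop out
    stressed = cumulative_population(dist_to_nearest_hospital(H), population, grid)
    return np.trapz(base, grid) - np.trapz(stressed, grid)

# Toy corridor network (in practice, the graph built from the GIP segments above).
G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 4.0), ("B", "C", 3.0),
                           ("C", "D", 8.0), ("A", "C", 12.0)], weight="length")
nx.set_node_attributes(G, {m: (m == "C") for m in G}, "hospital")
population = {"A": 1200, "B": 400, "C": 900, "D": 250}
ranking = sorted(G.edges, key=lambda e: acis(G, e, population), reverse=True)
print(ranking)                              # most critical corridors first
```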
The second stress test thus introduces neighborhood outages of roads. It measures the system's functionality after the deletion of a corridor and its neighboring corridors. This idea was initially sparked by the observation that severe weather conditions often have a widespread impact across geographic space rather than being confined to a single location. To increase the potential volatility of the stress test, we first delete the focal corridor, then with a fixed probability p remove each of its immediate neighbors. This fixed probability p was chosen to simulate the decreasing severity of a weather event with increasing distance from its core, which is assumed to be at the focal corridor. In particular, we ran 100 simulations for each corridor and its neighborhood with p ∈ [0.1, 0.25, 0.5, 0.75]. In each simulation, the hospital accessibility of the network and the distance to the closest hospital for each municipality were calculated, and the same impact measures were calculated as for the single corridor removal stress test. §.§ Measuring hospital vulnerability While assessing the resilience of infrastructure networks and the accessibility changes caused by disturbances has been studied in previous research  <cit.>, less attention has been paid to how transportation infrastructure disturbances impact potential flows to healthcare centers. For instance, a key road closure could greatly increase the number of people going to a specific hospital as closest point of care. Therefore we also explored an alternative approach to measuring the impact of corridor deletions in terms of their impact on hospital catchment areas. In particular, we look for which hospitals become responsible for a significantly larger population as their closest point of care when specific corridors are closed. This allows us to measure the strain on hospitals resulting from corridor closures. To quantify this impact on hospitals, we assess how many people have to move from one hospital to another for each stress test, which can be mathematically written as: P_affected = ∑_M ∈ Change P_M, where we calculate the total affected Population P_affected by summing over all M ∈ Change, which is the portion of municipalities that have a new closest hospital after the simulated deletion of a corridor, and P_M stands for the population of municipality M. From this calculation, we are also able to calculate the new number of patients per hospital. Besides this, we use the different stress test results to calculate how often a hospital experiences a changing inflow due to a corridor deletion. Through the application of these measurements, we can better understand how hospital catchment areas change due to the alteration of the accessibility corridor network. For instance, a corridor closure may change the closest hospital for a significant number of people. The changing size of the hospital catchment area thus either causes a growth or reduction in the patient flow to specific hospitals, straining or relaxing those hospitals' capacity, respectively. By examining the effect of corridor deletions on hospital catchment areas, we can derive a map of redundancy relationships between hospitals. This map allows us to report hospitals that could be more prone to sudden patient influx during crisis events which lead to corridor closures. § RESULTS In this section, we present the results of our analyses. We first focus on the single corridor deletion stress test. 
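The hospital-level impacts reported in this section rely on the catchment bookkeeping just described; a brief, assumed sketch of that computation is given below (attribute and variable names follow the toy corridor graph used earlier, and the bed counts are hypothetical).

```python
import networkx as nx

def nearest_hospital(G, weight="length"):
    """Assign each municipality to its closest hospital (ties broken arbitrarily)."""
    best = {}
    for h in (n for n, d in G.nodes(data=True) if d.get("hospital")):
        for m, d in nx.single_source_dijkstra_path_length(G, h, weight=weight).items():
            if m not in best or d < best[m][1]:
                best[m] = (h, d)
    return {m: h for m, (h, _) in best.items()}

def hospital_impact(G, corridor, population, beds):
    """People whose closest hospital changes when `corridor` closes, plus the
    resulting population-per-bed load of every hospital (a catchment-shift view)."""
    before = nearest_hospital(G)
    H = G.copy()
    H.remove_edge(*corridor)
    after = nearest_hospital(H)                     # cut-off municipalities drop out
    moved = [m for m in after if after[m] != before.get(m)]
    p_affected = sum(population[m] for m in moved)
    load = {h: sum(population[m] for m in after if after[m] == h) / beds[h]
            for h in beds}
    return p_affected, load

# Usage: p_aff, load = hospital_impact(G, corridor, population, beds) for any corridor
# graph G with "hospital" node attributes; beds is a hypothetical dict of bed counts
# per hospital municipality, e.g. {"C": 50}.
```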
We find that according to the ACIS measure, the impact of such deletions is highly heterogeneous: some corridors are significantly more important than the average one. We show a significant correlation between the ACIS measure and the HA measure of corridor importance in this scenario. We investigate the relationship between corridor ACIS score and the number of roads in a corridor, finding highly important corridors containing very few roads. We also analyze changes in travel times. We then analyze the results of the corridor neighborhood stress test. Finally, we present two case studies in which hospitals are often or significantly impacted by corridor closures. §.§ Single Corridor stress test First, we found that most accessibility corridors closures have a low impact on the population, see Figure <ref>. In this figure we plot the complementary cumulative density function (CCDF) of the acis score of each corridor. In general, the closure of any given corridor is a minor nuisance in terms of getting to a hospital. However, there are a few accessibility corridors which have a tremendous impact on the system if closed, observed in the right tail of this figure. Furthermore, the results of the neighborhood deletion stress test, see Figure <ref>, suggest that locally correlated corridor closures can have an even greater impact. This is our first important result: as a few corridors are much more important than the typical one, policymakers can focus their attention on just a few parts of the (abstracted) road network. Improvements to the resilience at these key points can make a significant difference in its resilience. A comprehensive representation of the simulation results using the acis of the single corridor stress test can be found in Figure <ref>. To provide context for these findings, we now interpret which corridors play a crucial role according to this first stress test. In the map, we see that the highlighted corridors seem to function as connectors to otherwise isolated dead-ends to the network or as critical connectors reducing travel time between different regions. Another category of highlighted corridors are short-cuts directly connected to a hospital. As corridors are abstractions that bundle together any number of roads between two neighboring municipalities, we look more closely at the relationship between the acis ranking and the number of roads within a corridor in the inset of Figure <ref>. We find that there are many examples of corridors containing just a few roads and having a high acis. These corridors are perhaps the most important ones to focus on: they are both systemically important and contain few local redundancies. This is especially relevant when the topography of Austria is considered as many valleys in the Alps are only connected to the rest of Austria by a single corridor. If a road like that is blocked, the access to a hospital of the municipalities in the valley is cut off. What about the other measures of corridor importance? Under the single deletion stress test scenario, acis and HA(k) are highly correlated (Spearman's ρ = 0.83). Edge betweenness centrality on the other hand is not significantly correlated (Spearman's ρ = 0.09) with the acis measure, nor with the HA(k) measure (Spearman's ρ = 0.098). As edge betweenness centrality evaluates corridor importance in terms of access to multiple hospitals, we focus on the other measures as they capture access to the closest point of care. 
Upon closer examination, we found that acis and HA(k) do deviate significantly from each other in the most important cases. If we consider the top 100 corridors according to either ranking, this correlation turns negative (Spearman's ρ = -0.26). This means that the two rankings significantly diverge in terms of which corridors they consider most important. In Figure <ref>, we plot the map of Austria with important corridors according to the HA(k) measure highlighted. We observe that the HA(k) measure tends to rank the last corridors leading directly to hospitals as the most important, while the acis measure tends to emphasize corridors that appear to bridge regions. Among the top 100 HA(k) ranked corridors, the average corridor contains 14 roads, while the top 100 acis ranked corridors contain on average only 11 roads. This suggests that the acis is ranking highly those corridors that bridge regions and are highly vulnerable due to their dependence on fewer road segments. In the rest of the paper, we therefore focus on the acis measure. To make the analysis more concrete, we report changes in driving times experienced by people following the single deletion stress test in Figure <ref>. Specifically, we examine the number of additional individuals who would need to drive at least 15, 30, or 60 minutes, assuming a tempo limit of 50 km/h, following a corridor deletion. We observe that a significant number of people would have to drive more than 15 minutes following such a deletion. As before, it is worth focusing on the extreme cases: some corridors cause thousands of people to have to drive over 60 minutes to get to a hospital. We report specific examples in Table <ref>: the deletion of the corridor reported in the first row, which contains a single road, causes a nearly 20-minute increase in driving time for over 10,000 people. Such delays can make a significant difference in critical care outcomes. To that end we report those corridors whose deletion increases average travel time by at least five minutes in <Ref>. Such a difference has been shown to cause statistically observable higher 30-day mortality rate in critical cases, cf. <cit.>. For example, the deletion of the corridor in the first row of this <Ref> leads to a mean increase of approximately 7 minutes for more than 40,000 people. §.§ Corridor Neighborhood stress test We now discuss the results of our second stress test, where we simulated the deletion of corridor neighborhoods to see the network's reaction to more significant alterations. To recapitulate, the main idea of the second stress test extend the first stress by introducing local geographic correlations in road closures, reflecting for example the broader impacts of extreme weather. Besides ranking the corridor neighborhoods, we will also compare the rankings to the initial stress test to see if the same areas are impacted. Each instance of the second stress test also focuses on a single focal corridor. It additionally considers all neighboring corridors, removing them from the system with a probability p. For each focal corridor we ran 100 simulations for each p ∈ [0.1, 0.25, 0.5, 0.75]. This approach yields a distribution of impact scores for each corridor. For each focal corridor and p we considered the mean and the 90th percentile of the acis of this distribution of results. For low values of p, i.e. 
0.1, the Spearman correlation between acis of the single corridor deletion stress test and the neighborhood deletion stress test are high: 0.57 for the mean and 0.41 for the 90th percentile result. However, this correlation quickly drops as we consider higher likelihoods of correlated corridor failures. At p=0.75, the correlations drop to 0.08 for the mean and 0.03 for the 90th percentile. We may conclude this implies that when corridors are failing in a larger geographic area, as is often the case, the impact on hospital access is very different from the situation in which a single corridor is removed. Indeed, in <Ref>, we observe that the top 100 most important corridors according to the neighborhood deletion stress test are quite different from those under the single corridor deletion stress test when the deletion probability is increased. Calculating the overlap of the top 100 most impactful corridors of the single acis with the acis ranking considering a probability of 25% for neighborhood deletions shows that only 15% of the corridors are ranked in the top 100. This overlap is even lower for higher probabilities. Furthermore, it is apparent from this visualization that in many cases, important corridors are directly connected to a hospital forming small clusters, indirectly highlighting vulnerable neighborhoods within the corridor network as a whole. §.§ Hospital vulnerability In this section, we will study the vulnerability of hospitals based on the single corridor removal stress test. By closely examining the relations between accessibility corridor deletions and hospital catchment area shifts, we are able to derive a map of redundancies between hospitals, and to quantify which hospitals are at risk of suddenly becoming responsible for a significantly greater number of patients due to road closures. The resulting map provides detailed insight into the dynamics of patient flow between the hospitals caused by the alteration of the corridor network. The map and scatter plot inset in Figure <ref> offers a compelling portrayal of the frequency and magnitude of impacts that hospitals experience during the different stress tests. To delve deeper into the analysis, we have selected two illuminating cases that exemplify two kinds of vulnerable hospitals. Our first example, located in Kalwang, is a hospital that is only impacted by a few specific corridor closures. However, when those corridors close, the impact is extreme: with a 250% increase in the number of people in its catchment area per bed, see Table <ref>. These corridors would otherwise provide access to hospitals in Rottenmann (162 beds) or Leoben (408 beds). In other words, closures of nearby corridors can lead to dramatic surges at this hospital, with a capacity of only 72 beds. The second example, located in Ried im Innkreis, is rather potentially impacted by many corridor closures, but to a smaller degree. Over 20 different corridors can impact its catchment area, but they increase the population to bed ratio by less than 10%. In other words, this hospital will likely often see small increases in the population for which it is the first point of service. Such small increases can nevertheless be the source of significant volatility over time in hospital admittances. Even though the latter is an extreme case where the closure of accessibility corridors blocks the way to big hospitals and puts a lot of strain on a small hospital, it shows how small changes can have big effects. 
Both cases show how our developed method can be used to refine the analysis and offer another perspective on the stress test. § CONCLUSION AND FUTURE RESEARCH In this paper, we show that the resilience of a population's access to healthcare can be meaningfully analyzed and quantified based on stress tests of road-based transportation networks. These stress tests can provide meaningful insights into the dependencies between different systems, in this case the transportation system and hospitals. By ranking corridors based on their importance in terms of hospital access, we can identify corridors of interest for policymakers seeking to allocate limited resources. In particular, we show that there are high-impact corridors containing few roads that provide access to emergency care for many people. We also show that some hospitals are vulnerable to sudden surges in the number of people they are responsible for. Based on these results, further investigations can be conducted, for instance by including the area around the corridors of interest. This would lead to guidelines to improve the underlying network's resilience and general access to health care. Analyzing the corridors' neighborhoods is another step from the abstract model toward real-world scenarios in which natural disasters impact whole regions. Our results show that certain road segments and corridors play a pivotal role in access to hospitals in Austria. The disruption of these roads during crisis scenarios can have a significant impact on travel time to hospitals for large numbers of people. Specific municipalities are especially vulnerable to the closure of specific road corridors. Hospitals can also be affected: a road closure can change which hospital is closest for large numbers of people. The skewed importance of some corridors highlights vulnerabilities but also gives policymakers something to focus on. Compared to previous work, we focus on the specific problem of hospital accessibility in our stress test, using a measurement generated for this purpose as well as adapting an appropriate accessibility measure <cit.>. As we are interested in hospital accessibility, which is a local network problem, this derived measure of road importance is more appropriate than a global measure such as edge betweenness centrality. Further comparison shows that the measurement generated in this paper is a better fit for our problem, since the accessibility measure from <cit.> over-emphasizes city size for our application. In the scenario described in this paper, the size of a hospital does not add to its attractiveness, and the focus is solely on the accessibility of medical care. If the size of the target is important to the simulation, for example the size of a city or the number of hospital beds, an adaptation of <cit.> is more practical. However, in this simulation, we assume that fast access to health care is crucial and not influenced by the size of the hospital. Our work also has policy implications for the healthcare sector. Past research has demonstrated that hospital capacity has a tipping point in terms of care quality: when occupancy exceeds a critical level of capacity, mortality outcomes worsen significantly <cit.>. Our work shows that such surges can occur due to outcomes in another complex system. Flexible staffing and pooled capacity across hospitals, effective policies recommended in this previous work, should take into account how exogenous shocks influencing transport networks can create surges and limit the effectiveness of these interventions.
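As an illustration of how such ranking comparisons can be carried out in practice, the sketch below computes the Spearman correlation between two corridor importance scores (for example, acis against an alternative measure such as summed edge betweenness per corridor) together with the overlap of their top-100 corridors. The data structures and names are assumptions, not the code used for the numbers reported above.

from scipy.stats import spearmanr

def compare_rankings(score_a, score_b, k=100):
    """score_a, score_b: dicts mapping corridor id -> importance score."""
    corridors = sorted(set(score_a) & set(score_b))
    rho, _ = spearmanr([score_a[c] for c in corridors],
                       [score_b[c] for c in corridors])
    top_a = set(sorted(corridors, key=score_a.get, reverse=True)[:k])
    top_b = set(sorted(corridors, key=score_b.get, reverse=True)[:k])
    return rho, len(top_a & top_b) / k     # rank correlation and top-k overlap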
Rather than vulnerability to specific events (e.g., floods <cit.>), we consider abstract road closures. By coarse-graining the Austrian road network, we can run a more thorough stress test on a country-wide network. Both of these simplifications enable us to easily identify network sections of interest. More fine-grained versions of these sections can be further investigated in future work using more realistic stress tests. This would be especially tractable if policymakers wished to zoom in on a specific region or part of the country. Our study has several limitations. Roads in different parts of the country may be more or less vulnerable to closure, given factors like local weather and altitude. Future work should consider historical weather patterns and their correlation with road closures. More realistic stress tests can be developed in this way. Some but not all of the road corridors we highlight pass through extremely rugged terrain (e.g., the Alps). Creating redundancies in this context may be very expensive. In such areas it may be more useful to create redundancies at highly impacted hospitals. Furthermore, our results show that overlaying the acis measurement with the number of real-world roads in a corridor can help to better understand the implications of a corridor closure. Therefore, we propose to update the introduced acis method by including a factor that takes the number of real-world roads into account. More generally, our approach to stress-testing road networks can be applied in a variety of contexts. For example, rather than measuring the accessibility of hospitals from population centers, we may measure the accessibility of population centers from firefighting stations. Indeed, many critical services rely on the functionality of transportation systems like the road network. Social, demographic, and environmental factors suggest that these systems will only experience greater strain in the coming decades. Whether due to climate change, large-scale migration, or the aging of the population, the resilience and robustness of these services as a function of the systems they depend on will merit increasing scrutiny. Integrating various forms of data, for example in a knowledge graph designed for crisis management <cit.>, can greatly expand the potential scope of our simulations. The knowledge graph can be used in future work to determine additional risk factors for roads due to overlapping networks, e.g., river networks, which increase the risk for roads located close to or crossing a river. By adding these risk factors to the simplified network, the simulation can be adapted by updating the probabilities, and consequently the relevance of the findings to real-world situations can be improved.
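One possible form of the road-count adjustment to acis proposed above is sketched below. Dividing the score by the number of real-world roads, so that corridors relying on very few road segments are emphasized, is purely our assumption for illustration and not a definition given here.

def road_adjusted_acis(acis_scores, road_counts):
    """acis_scores, road_counts: dicts keyed by corridor id.
    Corridors with fewer parallel real-world roads receive a larger adjusted score."""
    return {c: acis_scores[c] / max(road_counts.get(c, 1), 1) for c in acis_scores}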
http://arxiv.org/abs/2307.01024v1
20230703135544
SAM-DA: UAV Tracks Anything at Night with SAM-Powered Domain Adaptation
[ "Liangliang Yao", "Haobo Zuo", "Guangze Zheng", "Changhong Fu", "Jia Pan" ]
cs.CV
[ "cs.CV" ]
Journal of Class Files, 12 April 2023 Shell et al.: Bare Advanced Demo of IEEEtran.cls for IEEE Computer Society Journals Domain adaptation (DA) has demonstrated significant promise for real-time nighttime unmanned aerial vehicle (UAV) tracking. However, the state-of-the-art (SOTA) DA still lacks the potential object with accurate pixel-level location and boundary to generate the high-quality target domain training sample. This key issue constrains the transfer learning of the real-time daytime SOTA trackers for challenging nighttime UAV tracking. Recently, the notable Segment Anything Model (SAM) has achieved remarkable zero-shot generalization ability to discover abundant potential objects due to its huge data-driven training approach. To solve the aforementioned issue, this work proposes a novel SAM-powered DA framework for real-time nighttime UAV tracking, i.e., SAM-DA. Specifically, an innovative SAM-powered target domain training sample swelling is designed to determine enormous high-quality target domain training samples from every single raw nighttime image. This novel one-to-many method significantly expands the high-quality target domain training sample for DA. Comprehensive experiments on extensive nighttime UAV videos prove the robustness and domain adaptability of SAM-DA for nighttime UAV tracking. Especially, compared to the SOTA DA, SAM-DA can achieve better performance with fewer raw nighttime images, i.e., the fewer-better training. This economized training approach facilitates the quick validation and deployment of algorithms for UAVs. The code is available at https://github.com/vision4robotics/SAM-DA. Segment anything model (SAM), domain adaptation, nighttime UAV tracking, high-quality training sample swelling, one-to-many generation, fewer-better training. SAM-DA: UAV Tracks Anything at Night with SAM-Powered Domain Adaptation Liangliang Yao^†, Haobo Zuo^†, Guangze Zheng^†, Changhong Fu*, Jia Pan ^† Equal contribution. * Corresponding author. Liangliang Yao, Haobo Zuo, and Changhong Fu are with the School of Mechanical Engineering, Tongji University, Shanghai 201804, China E-mail: changhongfu@tongji.edu.cn Guangze Zheng and Jia Pan are with the Department of Computer Science, University of Hong Kong, Hong Kong 999077, China. Received July 3, 2023, Accepted xx ================================================================================================================================================================================================================================================================================================================================================================================================================================ § INTRODUCTION § INTRODUCTION Object tracking has been applied for wide unmanned aerial vehicle (UAV) applications, e.g., geographical research <cit.>, dynamic object investigation <cit.>, search and rescue mission <cit.>, and visual location <cit.>. Using abundant daytime superior-quality tracking datasets <cit.>, state-of-the-art (SOTA) trackers <cit.> have attained remarkable performance. Nevertheless, the performance of these SOTA trackers is unsatisfactory in darkness due to the limited illumination, low contrast, and much noise of nighttime images in comparison to daytime ones <cit.>. The aforementioned distinctions bring the discrepancy in feature distribution between day and night images. 
A potential solution is capturing and annotating sufficient nighttime data for directly training effective nighttime trackers. However, it is expensive and time-consuming to label a large amount of high-quality tracking data under unfavorable lighting conditions. Considering the labeling cost of the nighttime image and the domain gap of day-night, domain adaptation <cit.> is introduced to solve the problem of nighttime UAV tracking. This method aims to transfer SOTA trackers developed for daytime situations to nighttime UAV tracking. In the source domain, the training data has well-annotated bounding boxes with high expenses by hand, whereas, in the target domain, the training samples are obtained by the automatic method <cit.> instead of manual annotation. However, the insufficient quality of target domain training samples limits the improvement of domain adaptation. The generation approach of training samples has difficulty extracting the potential object with precise pixel-level location and boundary from challenging nighttime images of UAV perspectives <cit.>. Furthermore, this method solely concentrates on one target domain training sample within a single nighttime image, disregarding abundant other valuable potential objects, i.e., one-to-one generation. Therefore, how to generate enormous and high-quality target domain training samples from every single raw nighttime image for robust day-night domain adaptation is an urgent problem. Recently, the Segment Anything Model (SAM) <cit.> has demonstrated an impressive zero-shot generalization capacity, allowing it to uncover numerous potential objects. This achievement can be attributed to its huge data-driven training approach with over one billion masks. Such kind of generalization ability enables SAM to be directly applied for various vision-based tasks without task-oriented training, including camouflaged object detection <cit.>, image inpainting <cit.>, medical image segmentation <cit.>, etc. Moreover, with the enormous parameters, the image encoder of SAM is capable of extracting the robust image features in various environments. Despite the above advantages, SAM is hard to be directly applied for nighttime UAV tracking due to the limited load source and computation power of the UAV. Thereby, how to effectively utilize the considerable zero-shot generalization ability of SAM for real-time nighttime UAV tracking is worth being explored carefully. This work introduces the superb SAM into the training phase of tracking-oriented day-night domain adaptation for the first time, proposing a novel SAM-powered domain adaptation framework, i.e., SAM-DA. Specifically, the inventive SAM-powered target domain training sample swelling is presented to determine enormous high-quality target domain training samples from every single challenging nighttime image, dubbed the one-to-many generation. Thereby, the dependence on the number of raw images required for adaptation training can be reduced to enhance generalization and prevent overfitting. With the improvement and increasement of target domain training samples, the adaptation effect of SOTA trackers for nighttime UAV tracking can be further boosted. Figure <ref> shows the tracking performance comparison of SAM-DA-Track and other SOTA tracking methods on a comprehensive long-term nighttime UAV tracking benchmark, i.e., NUT-L, which is a combination of long-term sequences from NAT2021-test <cit.> and UAVDark135 <cit.>. 
SAM-DA-Track symbolizes the version of the base tracker, i.e., SiamBAN <cit.>, using SAM-DA for adaptation training. N, T, S, B represent that the target domain training images of SAM-DA-Track are about 10.0%, 33.2%, 50.1%, and all of the entire NAT2021-train, respectively. The baseline is UDAT <cit.>. Compared to this method, SAM-DA-Track adopting the training framework SAM-DA can achieve superior tracking performance with less raw nighttime images, i.e., the few-better training. The main contributions of this work are as follows: * A novel SAM-powered domain adaptation framework, namely SAM-DA, is proposed for real-time nighttime UAV tracking. According to our knowledge, SAM-DA is the first work to combine SAM with domain adaptation for UAV tracking at night. * An innovative SAM-powered target domain training sample swelling is designed to determine enormous high-quality target domain training samples from every single raw nighttime image. * Comprehensive experiments on extensive nighttime videos verify the effectiveness and domain adaptability of SAM-DA for nighttime UAV tracking. Especially, SAM-DA realizes better performance with fewer raw images compared to the SOTA method. The above training approach promotes the quick validation and deployment of algorithms for UAVs. § RELATED WORK §.§ Nighttime UAV tracking Recently, nighttime UAV tracking has been utilized in a variety of practical applications, attracting widespread interest <cit.>. Initially, the tracking-oriented low-light enhancers <cit.> are designed to improve the nighttime tracking performance of the cutting-edge Siamese trackers <cit.>. Specifically, J. Ye et al. <cit.> develop an enhancer to iteratively mitigates the effects of inadequate illumination and noise. Afterwards, they present a spatial-channel Transformer-based low-light enhancer to achieve robust nighttime UAV tracking <cit.>. Besides, HighlightNet <cit.> is designed to illuminate potential objects for human operators and UAV trackers, improving the human-machine interaction. Nevertheless, this kind of plug-and-play method has a restricted relationship with tracking tasks, and the way to directly insert tracking models is unable to minimize the image feature distribution gap. In addition, the model parameters of the low-light enhancers will seriously increase the burden of the limited UAV computation and resource. §.§ Day-night domain adaptation Day-night domain adaptation has been used for a wide range of visual tasks <cit.> because it can decrease the domain gap and transfer information from the source domain (daytime) to the target domain (nighttime). X. Wu et al. <cit.> train a domain adaptation model for semantic segmentation at night using adversarial learning. Y. Sasagawa et al. <cit.> use domain adaptation to combine deep learning models from various disciplines in order to detect nighttime objects. Despite the rapid development in other vision tasks, day-night domain adaptation still lacks research for object tracking. Thereby, UDAT <cit.> introduces unsupervised domain adaptation into nighttime UAV tracking, thus increasing the tracking performance at night. However, the insufficient quality of target domain training samples constrains the advancement of the domain adaptation performance for nighttime UAV tracking. Due to the challenges of nighttime images from UAV perspectives, the existing generation approach <cit.> of training samples struggles to extract the potential object with exact pixel-level location and boundary. 
In addition, this kind of approach only considers one target domain training sample inside a single nighttime image, ignoring abundant additional worthwhile potential objects. §.§ Segment anything model SAM <cit.> has found extensive use in many kinds of computer vision tasks as a huge data-driven method. Trained with over a billion masks, the renowned SAM has extraordinary zero-shot generalization ability. The above capability allows SAM to be directly applied for different vision-based tasks. Specifically, L. Tang et al. <cit.> provide an initial assessment for the efficacy of SAM on the camouflaged object detection assignment. S. Roy et al. <cit.> seek to undertake an early assessment of the out-of-the-box zero-shot capabilities of SAM for medical image segmentation. Besides, T. Yu et al. <cit.> make the initial attempt at mask-free image inpainting and present a new paradigm of "clicking and filling" called Inpaint Anything, which is based on SAM. Despite its wide applications in other vision tasks, SAM has not been used for UAV tracking, especially in nighttime scenarios. § PROPOSED METHOD SAM-DA is introduced in this section, as depicted in Fig. <ref>. Given a nighttime raw image, the SAM-powered target domain training sample swelling is employed to determine enormous potential objects and provide their accurate pixel-level locations and boundaries. Then, the number of training samples swells from one to many within every single nighttime image based on the location and boundary. During the training pipeline, both the manually annotated source domain (daytime) training sample and automatically generated target domain (nighttime) training sample are leveraged to drive the following tracking-oriented day-night domain adaptation. This data-driven framework introduces SAM into the training phase of domain adaptation, improving the performance of the tracker in the nighttime scene. §.§ SAM-powered target domain training sample swelling Effective day-night domain adaptation requires enormous high-quality target domain training samples. Different from the existing one-to-one approach <cit.>, the proposed SAM-powered target domain training sample swelling utilizes the powerful zero-shot generalization ability of SAM to swell the number of high-quality training samples. The detail is introduced as follows. SAM-powered model. As shown in Fig. <ref>, the SAM-powered model is illustrated. Specifically, following the original SAM, a nighttime raw image 𝐈∈ℛ^H× W × 3 is first patchified to patch embedding 𝐈'. Subsequently, an encoder is utilized to extract feature embeddings denoted by 𝐅 as follows: 𝐈' = Patchify(𝐈) , 𝐅 = Encoder(𝐈') , where Patchify means that the image is projected linearly and added with position embeddings. Additionally, Encoder represents an MAE <cit.> pre-trained Vision Transformer (ViT) <cit.> . Afterward, a decoder predicts the mask embeddings 𝐌'. Then, a mask decider generates the image with enormous determined masks 𝐌. The process is defined as: 𝐌' = Decoder(𝐅) , 𝐌 = Maskdecider(𝐌') , where Decoder is a modification of a Transformer decoder block <cit.> and Maskdecider is a dynamic mask prediction head. These masks contain information about the potential object with accurate location and boundary. Hence, the boxes 𝐁_n around the masks are utilized to generate target domain training samples. n is the number of potential objects for a single nighttime image. 
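A minimal sketch of this mask-to-box step, assuming the publicly released segment-anything package and its automatic mask generator, is given below; the exact model variant, checkpoint, and post-processing used in SAM-DA may differ.

from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load a pre-trained SAM backbone (ViT-H shown here as an example choice).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

def potential_object_boxes(image_rgb):
    """image_rgb: HxWx3 uint8 array of a raw nighttime frame.
    Returns one [x, y, w, h] box per automatically segmented potential object."""
    masks = mask_generator.generate(image_rgb)   # list of dicts, one per determined mask
    return [m["bbox"] for m in masks]            # 'bbox' is given in XYWH format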
It is noted that the original SAM is difficult to be utilized straightforwardly for real-time nighttime UAV tracking due to the restricted load source and processing capabilities of the UAV. Therefore, this work adopts SAM to generate high-quality target domain training samples for day-night domain adaptation training. Target domain training sample swelling. The target domain training sample swelling generates many high-quality training samples from one original nighttime image. Specifically, this work follows the data processing of COCO <cit.>. An image 𝐈 exhibits the dual functionality of serving as both a template frame and a search frame. As a template frame, the image is cropped into numerous target-centered image patches, denoted as template patches [𝐙_1,𝐙_2,...,𝐙_n], with the accurate locations and boundaries, which are subsequently resized to a fixed size (e.g., 127×127). Simultaneously, as a search frame, the image is cropped into an equal number of larger image patches, denoted as search patches [𝐗_1,𝐗_2,...,𝐗_n], also based on the accurate locations and boundaries, and resized to another size (e.g., 255×255). Patches containing the same target are paired between the template patches and search regions patches, forming the target domain training samples 𝐓_n. The whole target domain training sample swelling is defined as: [𝐙_1, 𝐙_2,...,𝐙_n] = Crop(𝐈;𝐁_i,s1) , [𝐗_1, 𝐗_2,...,𝐗_n] = Crop(𝐈;𝐁_i,s2) , 𝐓_n = {{𝐙_1,𝐗_1},{𝐙_2,𝐗_2},...,{𝐙_n,𝐗_n}} , where Crop is the crop operation to generate patches and 𝐁_i is the box of the i-th potential object. Besides, s1,s2 represent template size and search size, respectively. As shown in Fig. <ref>, the visualization represents the swelling method from one original nighttime image to many target domain training samples. The one-to-many generation revolutionizes the existing target domain training sample acquisition approach in the day-night domain adaptation from both the aspects of quantity and quality. §.§ Tracking-oriented day-night domain adaptation With a substantial amount of high-quality target domain training samples, a tracking-oriented day-night domain adaption is utilized to enhance the trackers’ nighttime performance by aligning the features from both domains. Following the baseline, the whole domain adaptation framework is divided into three parts: backbone, bridge layer, and discriminator. Backbone. In a general Siamese network-based tracker, feature extraction involves two branches, the template branch and search branch. These branches utilize an identical backbone network ℱ to generate feature maps from the template patch 𝐙 and search patch 𝐗, namely ℱ(𝐙) and ℱ(𝐗). Bridging layer. In consideration of the strong modeling capability of the vision Transformer for long-range inter-independencies, a bridging layer is designed as a Transformer structure to bridge the gap between the feature distributions <cit.>. Taking the search branch as an instance, positional encodings 𝐏𝐨𝐬 are added to the input feature ℱ(𝐗) ∈ℛ^N × H × W. The subsequent operation is the multi-head self-attention (MSA), which can be expressed as: ℱ(𝐗)'=MSA( 𝐏𝐨𝐬+ℱ(𝐗))+ 𝐏𝐨𝐬+ℱ(𝐗) , ℱ(𝐗)=LN(FFN(Mod(LN( ℱ(𝐗)')))+ ℱ(𝐗)') , where ℱ(𝐗)' refers to an intermediate variable and LN represents layer normalization. Additionally, FFN denotes the fully connected feed-forward network. Mod is a modulation layer in <cit.> to fully explore internal spatial information. Discriminator. 
A discriminator is employed to ascertain the origin of the modulated feature map, discerning whether they belong to the source domain or the target domain. The feature discriminator consists of a gradient reverse layer <cit.> and two Transformer layers. Given the modulated feature map ℱ(𝐗), the softmax function is performed and followed by a gradient reverse layer. Then the intermediate feature 𝐑 is passed through a convolution layer and concatenated with a classification token 𝐜𝐥𝐬 as: 𝐑 = GRL(Softmax(ℱ(𝐗))) , 𝐑' = Concat(𝐜𝐥𝐬, Conv(𝐑)) , where GRL represents the gradient reverse layer and Concat denotes channel-wise concatenation. Afterward, 𝐑' is input to two Transformer layers. Finally, the classification token 𝐜𝐥𝐬 is regarded as the final predicted result. Tracker head. After the bridging layer, a cross-correlation operation is calculated on the modulated features ℱ(𝐗) and ℱ(𝐙) to generate a similarity map. Finally, the tracker head performs the classification and regression process to predict the object position. As shown in Fig. <ref>, visual comparison of confidence maps generated by the baseline, the SAM-DA-Track-N, the SAM-DA-Track-T, the SAM-DA-Track-S, and the SAM-DA-Track-B. With a substantial amount of high-quality target domain training samples, the tracking-oriented day-night domain adaptation greatly improve the performance of the tracker at night. §.§ Loss functions Following the baseline, the loss of the domain adaptation framework consists of tracking loss and domain adaptation loss. Tracking loss. In the source domain, the tracking loss including classification and regression loss ℒ_tr between the manually-annotated ground truth and the predicted results are used. Domain adaptation loss. In adversarial learning, the least-square loss function <cit.> is employed to train the generator 𝒢, aiming at generating source domain-like features from target domain images to fool the discriminator 𝒟 while frozen. Here the feature extractor along with the bridging layer is regarded as the generator 𝒢. Considering both the template and search features, the adversarial loss ℒ_da is described as follows: ℒ_da = (𝒟(ℱ(𝐗_ t))-l_ s)^2+(𝒟(ℱ(𝐙_ t))-l_ s)^2 , where s and t refer to the source and the target domains, respectively. Besides, l_ s represents the label for the source domain. In summary, the total training loss for the domain adaptation framework is defined as: ℒ_ total = ℒ_ tr+λℒ_ da , where λ is a weight to balance the loss terms. λ is set as 0.01 in implementation. During the training process, the tracking network and discriminator 𝒟 are optimized alternatively. The loss function of the discriminator 𝒟 is defined as: L_𝒟 = ∑_d= s,t(𝒟(ℱ(𝐗_d))-l_d)^2+(𝒟(ℱ(𝐙_d))-l_d)^2 . § EXPERIMENTS §.§ Implementation details Data. SAM-powered target domain training sample swelling is mainly implemented on NAT2021-train to validate the influence of target domain training samples' quantity and quality on nighttime tracking performance. To validate the effectiveness of the proposed method, four versions of the target domain training set are obtained according to the number of used raw images, i.e., base (B), small (S), tiny (T), and nano (N). SAM-NAT-B swells all original images in the entire NAT2021-train, while SAM-NAT-N, SAM-NAT-T, and SAM-NAT-S only randomly sample parts of data with the ratio of about 10.0%, 33.2%, and 50.1%, respectively. Specifically, SAM is adopted for fully automatic training sample swelling, where the post-process of target domain tracking data follows COCO <cit.>. 
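To make the adversarial objective of the preceding loss-function subsection concrete, the following PyTorch sketch shows a gradient reversal layer together with the least-squares losses L_da and L_D. Module names, the source-domain label value, and the mean reduction are our assumptions; the actual discriminator additionally contains the softmax, convolution, and Transformer layers described above.

import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam=1.0):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None      # reverse (and scale) the gradient

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

def domain_adaptation_loss(discriminator, feat_x_t, feat_z_t, source_label=1.0):
    """L_da: push target-domain search/template features to be classified as source."""
    d_x, d_z = discriminator(feat_x_t), discriminator(feat_z_t)
    l_s = torch.full_like(d_x, source_label)
    return ((d_x - l_s) ** 2).mean() + ((d_z - l_s) ** 2).mean()

def discriminator_loss(discriminator, feats, labels):
    """L_D: least-squares classification of features from both domains; feats/labels are lists."""
    loss = 0.0
    for f, l in zip(feats, labels):
        d = discriminator(f)
        loss = loss + ((d - torch.full_like(d, l)) ** 2).mean()
    return loss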
The qualitative visualization of training samples is shown in Fig. <ref>. Besides, the quantitative comparison of the training sample numbers between SAM-NAT and NAT2021-train is shown in Tab. <ref> and discussed in Sec. <ref> to demonstrate the training sample diversity of the proposed SAM-powered swelling. All the training is complemented with PyTorch on a single NVIDIA A100 GPU and follows <cit.> with the base tracker <cit.>. No additional data is introduced. Evaluation. To validate the tracking robustness in practical UAV applications against complicated challenges <cit.>, the long-term nighttime tracking performance is comprehensively evaluated. From the SOTA nighttime tracking benchmarks, NAT2021-test <cit.> and UAVDark135 <cit.>, the long-term tracking videos are combined following the rules in the baseline and form NUT-L, a comprehensive Long-term Nighttime UAV Tracking benchmark with 43 sequences and 95,274 images. One-pass evaluation is adopted, performances are ranked by success rate, precision, and normalized precision. All the evaluations are complemented with the same platform with training. §.§ Overall evaluation This section provides a comprehensive analysis of trackers in nighttime UAV tracking with practical UAV application scenarios. The proposed tracker is based on the SAM-DA framework and is named SAM-DA-Track. According to the version of training data, four trackers are acquired, namely SAM-DA-Track-B, SAM-DA-Track-S, SAM-DA-Track-T, and SAM-DA-Track-N. For fair comparison, SAM-DA-Track-B and other 13 SOTA trackers <cit.> are overall evaluated on NUT-L. Results in Fig. <ref> show the proposed SAM-DA-Track rank first with a large margin compared to other trackers. Specifically, SAM-DA-Track promotes baseline <cit.> by 14.1%, 13.4%, and 13.3% on success rate, normalized precision, and precision, respectively. Compared with base tracker <cit.>, SAM-DA-Track raises <cit.> by 22.9%, 23.6%, and 21.3% on three metrics. SAM-DA-Track presents better adaptability and practicality in variant nighttime conditions. The improvement attributes to the enormous high-quality training samples powered by the excellent zero-shot generalization ability and robustness of SAM. Figure <ref> exhibits some visual comparisons of trackers adopting SAM-DA, the baseline, and the base tracker. SAM-DA raises the nighttime aerial tracking performance of baseline substantially. §.§ Attribute-based performance Unlike daytime, objects in actual nighttime UAV tracking scenarios usually experience complex and varied lighting issues. This mainly includes two aspects, the target object undergoes drastic changes in illumination (illumination variation, IV) and the object is under extremely low light conditions (low ambient intensity, LAI). Figure <ref> presents the performance of SAM-DA-Track and other SOTA trackers against two challenges. Compared with the baseline, SAM-DA-Track achieved a 15.3%, 15.2%, and 15.1% improvement in the three metrics on IV, respectively. While on LAI, SAM-DA-Track promotes the baseline by 12.6%, 10.8%, and 11.1%. The evaluation of the lighting challenges has verified the robustness of SAM-DA-Track against the severe issues for practical nighttime UAV tracking. As a foundation model for segmentation, SAM presents its superior zero-shot performance even in extremely dark nighttime images. Furthermore, this encouraging phenomenon enables more powerful domain adaptation in other downstream tasks against specific challenges of nighttime domain data. 
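For reference, the one-pass evaluation metrics used above can be computed as in the sketch below, assuming axis-aligned boxes in [x, y, w, h] format. The 20-pixel precision threshold follows the common convention for tracking benchmarks and is an assumption rather than a detail stated here; normalized precision additionally scales the center error by the ground-truth box size.

import numpy as np

def iou(b1, b2):
    """Intersection over union for two (N, 4) arrays of [x, y, w, h] boxes."""
    x1, y1 = np.maximum(b1[:, :2], b2[:, :2]).T
    x2 = np.minimum(b1[:, 0] + b1[:, 2], b2[:, 0] + b2[:, 2])
    y2 = np.minimum(b1[:, 1] + b1[:, 3], b2[:, 1] + b2[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    union = b1[:, 2] * b1[:, 3] + b2[:, 2] * b2[:, 3] - inter
    return inter / np.maximum(union, 1e-12)

def success_rate(pred, gt):
    """AUC of the overlap-threshold success curve, thresholds from 0 to 1."""
    overlaps = iou(pred, gt)
    thresholds = np.linspace(0, 1, 21)
    return np.mean([(overlaps > t).mean() for t in thresholds])

def precision(pred, gt, thresh=20.0):
    """Fraction of frames whose predicted center lies within `thresh` pixels of the ground truth."""
    c_pred = pred[:, :2] + pred[:, 2:] / 2
    c_gt = gt[:, :2] + gt[:, 2:] / 2
    err = np.linalg.norm(c_pred - c_gt, axis=1)
    return (err <= thresh).mean()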
§.§ Analysis on SAM-powered target domain training sample swelling Considering the scarcity of nighttime tracking data and the high cost of annotating data under unfavorable lighting conditions, the efficiency of utilizing existing nighttime data has been crucial. The core idea of SAM-DA is to utilize the strong zero-shot generalization ability and robustness of SAM to produce enormous high-quality training samples in the nighttime tracking data, thus improving day-night domain adaptation. The high quality of swelled training samples is shown in Fig. <ref>, where SAM is able to automatically produce pixel-level determined masks with clear boundaries even in the low-light environment. Notably, no low-light image enhancement is required, which is different from the baseline. On the other hand, the number of training samples is greatly enriched, which represents the diversity of swelled training samples. Compared with <cit.> and <cit.> used in baseline, SAM is able to discover 'anything' potential for tracking. In addition to common objects used in nighttime UAV tracking like cars and people, SAM also includes other valuable tracking candidates, e.g., monitors, and traffic signs in Fig. <ref>. Therefore, nighttime tracking benefits from the generalization ability and robustness of SAM against complicated scenes. Enlarged training samples. As shown in Tab. <ref>, the NAT2021-train includes 276,081 training samples in 276,081 training images, with only a single object in each image. A lot of rich and beneficial potential objects remain undiscovered. By contrast, SAM-NAT-N contains 1,608,843 training samples with only 10.0% of training images in NAT2021-train. The number of training samples of SAM-NAT-N is already 5.8 times of NAT2021-train, as shown in Fig. <ref>. Besides, SAM-NAT-T uses 33.2% of images and reaches 5,314,760 training samples, while SAM-NAT-S includes 50.1% of images and contains 8,042,926 training samples. SAM-NAT-B uses equal amounts of images with NAT2021-train and contains 16,073,740 training samples, which astonishingly reaches 58.2 times compared to NAT2021-train. Quantitative data comparison between SAM-DA and baseline is shown in Fig. <ref>. Enriched lighting conditions. Figure <ref> demonstrates the ambient intensity (AI) comparison between SAM-NAT and NAT-2021-train. The AI value is calculated based on the average lighting conditions of the image patches, where the lower the AI value, the darker the ambient environment in which the target object is located. The patches with AI value of less than 20 are regarded with the attribute of low ambient intensity <cit.>. Diverse lighting conditions in NAT2021-train are all enriched in SAM-NAT, especially for the training samples with AI value less than 20, i.e., low ambient intensity. The comparison validates that the proposed SAM-powered target domain training sample swelling is able to enrich the distinguished characteristics of the target domain (low light conditions in this case), thus providing a guarantee for improving the knowledge transfer ability of domain adaptation. §.§ Fewer-better training Since SAM-DA provides better training samples with fewer data, an intriguing topic is to discuss can tracker achieves better performance with less training. This is highly relevant to practical nighttime UAV applications, where the amount of training data is usually limited and quick training is required for timely implementation. The results in Fig. 
<ref> validate that even with very constrained training image proportion (10.0% on SAM-NAT-N) and training time (about 2.4 hours), SAM-DA-Track can achieve better performance (0.411) than baseline (0.378). It proves the practicality of higher training efficiency with less data. Compared with baseline, SAM-DA-Track achieves a promotion of 9.0% on success rate. With more training data (33.2%, on SAM-NAT-T) and longer training time (about 4 hours), SAM-DA-Track achieves further improvement (0.414). Moreover, on SAM-NAT-S with 50.1% of images and about 6 hours' training, SAM-DA-Track reaches 0.419 AUC score. With SAM-powered target domain training sample swelling on whole data (SAM-NAT-B) and the same training time as baseline, SAM-DA obtains further improvement (0.430). Compared with baseline, SAM-DA-Track achieves the promotion of 14.1%. The fewer-better training evaluation further validates the effectiveness of SAM-powered training sample swelling. It also demonstrates the proposed method is not data-hungry with enormous high-quality training samples, thus more suitable for practical UAV applications. §.§ Day-night feature distribution In order to further validate the effectiveness and the domain adaptability of the proposed method, this section includes visualizations of the day-night image features obtained from the base tracker, the baseline, and SAM-DA-Track. Figure <ref> shows the visualization results using t-SNE <cit.>. Comparing the proposed SAM-DA with the base tracker and the baseline, it can be observed that SAM-DA has further reduced the domain discrepancy significantly by leveraging enormous high-quality target domain training samples. As a data-driven method with remarkable zero-shot generalization ability, SAM has been proven that it can be applied for enlarging the target domain training samples for tracking-oriented day-night domain adaptation. § CONCLUSION This work is the first study to introduce the superior SAM into the training phase of day-night domain adaptation for nighttime UAV tracking, proposing a novel SAM-powered domain adaptation framework, i.e., SAM-DA. Specifically, the SAM-powered target domain training sample swelling is designed to determine enormous high-quality target domain training samples from every single challenging nighttime image. The above one-to-many generation significantly increases the high-quality target domain training samples for day-night domain adaptation. Consequently, the reliance on the number of raw images required for adaptation training can be decreased, enhancing generalization and preventing overfitting. Extensive evaluation on enormous nighttime videos shows the robustness and domain adaptability of SAM-DA for nighttime UAV tracking. Especially, SAM-DA has achieved better tracking performance with fewer raw training images. To summarize, this work can contribute to the advancement of domain adaptation for object tracking and other vision tasks in various unmanned systems. § FUTURE WORK According to the superior results mentioned above, this work has the potential to be further applied for abundant other applications and extensions as follows. More nighttime scenarios. This work has proved its adaptation effectiveness with the training and test data in the city scenarios. By virtue of the generalization and robustness of the proposed SAM-DA, this work can be considered for applications in more nighttime and low illumination scenarios, such as villages, wilderness, and even space. 
This is desirable for promoting the further development of transfer learning in multiple scenes. More nighttime missions. This work has greatly advanced day-night domain adaptation with a novel pixel-level one-to-many generation method. SAM-powered target domain training sample swelling provides a new way to address the expensive and time-consuming annotation problem for other nighttime missions, including nighttime recognition, nighttime detection, nighttime human pose estimation, and nighttime gesture estimation. This is beneficial for promoting the improvement of other nighttime missions. More sensor applications. This work has enhanced the UAV's ability to process low-light images at night. With knowledge transfer and model redesign, it can be implemented for more sensor applications, such as infrared sensors, depth sensors, and thermal sensors. This is favorable for encouraging further implementation in other sensor applications. More application platforms. This work is deployed and implemented for UAVs at night with comprehensive experiments. Hence, it can be extended to more unmanned systems with low computational power, including unmanned surface vehicles, unmanned underwater vehicles, unmanned ground vehicles, and other intelligent robots. This is advantageous for encouraging the advancement of diverse unmanned systems.
http://arxiv.org/abs/2307.01712v1
20230704133120
Normal forms of ordinary linear differential equations in arbitrary characteristic
[ "Florian Fürnsinn", "Herwig Hauser" ]
math.CA
[ "math.CA", "cs.SC", "math.AC", "math.NT", "12H20 (Primary), 14G17, 34A05, 34M03, 47E05 (Secondary)" ]
Normal forms of ordinary linear differential equations in arbitrary characteristic Florian Fürnsinn, Herwig Hauser August 1, 2023 =================================================================================== For every linear differential operator L in one variable with convergent or formal power series coefficients we construct a function space on which L acts such that the equation Ly=0 admits a basis of solutions in . In characteristic zero, this is done by adding to the ring Ø of holomorphic functions at 0 an abstract primitive z of 1/x, say, the logarithm log(x). In positive characteristic p, the ring of constants consists of p-th powers and more primitives are required: namely, aside from z_1=z for 1/x, also a primitive z_2 of z_1^p-1, and then, iteratively, a primitive z_k+1 of z_k^p-1, is needed. The space consists, in the case of positive characteristic, of formal power series in x whose coefficients are polynomials in countably many variables z_i. It is then shown in all characteristics that the action of L on possesses a normal form. It is given by the initial operator L_0 of L: The action of L on is reduced to the action of L_0 by a linear automorphism u of , say, such that L∘ u^-1=L_0. As L_0 is an Euler operator, the equation L_0y=0 has the obvious solutions. From these, one obtains a full basis of solutions of Ly=0 in by pull-back with u. This gives, in the holomorphic characteristic zero case, a concise formulation and proof of the theorems of Fuchs and Frobenius. As to positive characteristic, results of Dwork are extended, implications to Grothendieck's p-curvature conjecture are discussed, and the construction of the characteristic p exponential function is described. [MSC2020: 12H20, 14G17, 34A05, 34M03, 47E05. Supported by the Austrian Science Fund FWF, projects P-31338 and P-34765. We are indebted to A. Bostan, M. Singer, F. Beukers, G. Teschl, F.J. Castro-Jiménez, L. Narváez, M. Wibmer, C. Chiu, C. Schindler, N. Merkl, M. Reibnegger, F. Lang, S. Schneider and S. Yurkevich for very valuable input. The lively feedback of the participants of an online lecture series given by the second author in spring 2021 helped very much to shape the contents and the exposition of the present note.] footnote-1 § INTRODUCTION Let L=p_n^n+p_n-1^n-1+… + p_1+p_0∈Ø[] be a linear univariate differential operator with holomorphic or formal power series coefficients p_i in Ø={x}, respectively, Ø= x, an arbitrary field. Write L=∑_j=0^n∑_i=0^∞ c_ijx^i^j for its expansion at 0, and denote by L_0 the initial form of L at 0, i.e., the Euler operator L_0=∑_i-j=τ^∞ c_ijx^i^j, where τ is the minimal shift i-j occurring in the expansion. The indicial polynomial χ=χ_L of L at 0 is defined as the polynomial χ(s)=∑_i-j=τ c_ijs^ j with s^ j=s(s-1)⋯ (s-j+1), and its roots in , respectively in an algebraic closure of , are the local exponents of L at 0. Clearly, L_0(x^k)=χ(k) x^k+τ. The objective of the present paper is to show that the operator L can be brought, by an automorphism u of a suitable function space on which L acts, into the normal form L_0, when considered as a linear map on , L∘ u = L_0:. In particular, the solutions y(x) of the associated differential equation L_0y=0 in give rise to solutions of Ly=0 via u(y(x)). If L_0 has the same order n as L - which corresponds to L having a regular singularity at 0 -, one thus recovers a basis of solutions of Ly=0. 
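As a small computational illustration (not part of the formal development), the indicial polynomial and the local exponents at 0 can be read off mechanically from the coefficients c_ij. The SymPy sketch below does this for the operator x^2∂^2+3x∂+1, whose indicial polynomial (s+1)^2 with double root -1 also appears in the examples later in the text; function and variable names are ours.

import sympy as sp

x, s = sp.symbols("x s")

def indicial_polynomial(coeffs):
    """coeffs = [p_0, ..., p_n]: polynomial coefficients of L = sum_j p_j(x) d^j.
    Returns the shift tau = min(i - j) and chi(s) = sum_{i-j=tau} c_ij * s^(falling j)."""
    terms = []
    for j, p in enumerate(coeffs):
        poly = sp.Poly(sp.expand(p), x)
        for (i,), c in poly.terms():          # c_ij = coefficient of x^i in p_j
            terms.append((i - j, j, c))
    tau = min(t[0] for t in terms)
    chi = sum(c * sp.ff(s, j) for d, j, c in terms if d == tau)
    return tau, sp.expand(chi)

tau, chi = indicial_polynomial([sp.Integer(1), 3*x, x**2])
print(tau, sp.factor(chi), sp.roots(chi, s))   # 0  (s + 1)**2  {-1: 2}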
To formulate the respective normal form theorem with more detail, one has to distinguish the case of characteristic 0 from the case of positive characteristic p>0. Characteristic 0: We will equally consider holomorphic or formal power series coefficients, and write Ø for the -algebra of these, with = or a field of zero characteristic. Denote by the quotient field of Ø, consisting of meromorphic functions, respectively formal Laurent series. It is well known that solutions of Ly=0 may and most often will involve logarithms. We therefore extend Ø with the usual differentiation =d/dx to the differential ring [z] with derivation defined by x=1 and z =1/x. Here, the variable z plays the role of log(x) and is an abstract primitive of 1/x. Accordingly, L∈Ø[] induces a linear map on [z], the extension of L, and again denoted by :[z][z]. If all shifts i-j of L are ≥ 0, as we may and will assume upon multiplying L with a suitable monomial x^r, this map sends Ø[z] to Ø[z]. This convention simplifies the notation. Denote by Ω⊆ a (maximal) set of local exponents of L with integer differences. This makes sense since has characteristic 0, and thus ⊆⊆. We list the elements of Ω increasingly, ρ_1<ρ_2<⋯<ρ_r, where ρ_k < ρ_k+1 stands for ρ_k+1-ρ_k∈_>0, and denote by m_k≥ 1 the respective multiplicity of ρ_k as a root of χ. Set n_k=m_1+⋯+m_k and n_0=0. Then define the free 𝒪-module ^Ω=∑_k=1^r Ø x^ρ_k[z]_<n_k=⊕_k=1^r ⊕_i=n_k-1^n_k-1Ø x^ρ_kz^i, where x^ρ equals exp(ρlog(x)) if =, while, for arbitrary , it is just a symbol with derivation rule x^ρ=ρ x^ρ-1. Any L∈Ø[] with non-negative shifts acts naturally on ^Ω; we denote again by :^Ω^Ω the induced linear map. Let Ø be the -algebra of holomorphic functions at 0, or, respectively, of formal power series. Let L∈Ø[] be a linear differential operator with coefficients in Ø and shifts ≥ 0, and let _0 denote its initial form. In the convergent setting, assume that 0 is a regular singularity of L, say L_0= L. There exists a linear automorphism u of ^Ω transforming the linear map on ^Ω into its initial form _0, i.e., the following diagram commutes: ℱ^Ω[rd, "L"] d[origin=c]270≅[swap]u ℱ^Ω[r, "L_0"] ℱ^Ω A suitable automorphism u can be explicitly constructed from L. Varying the sets Ω of local exponents with integer differences one obtains: Let Ø, L, Ω and ^Ω be as above. Assume that L_0= L. Let x^ρ, x^ρ z,…,x^ρz^m_ρ-1, for ρ of multiplicity m_ρ varying over the local exponents of L, be the canonical basis of solutions of _0y=0 in =⊕_Ω^Ω. Then u(x^ρ), u(x^ρz), …, u(x^ρz^m_ρ-1) form a basis of solutions of y=0 in . Replacing z by log(x) one obtains in case Ø={x} the classical theorem of Fuchs and Frobenius. Note that log(x) may appear in the solutions with powers up to the sum n_k=m_1+⋯+m_k and not just up to the multiplicity m_k of ρ_k. Positive Characteristic: Let now be a field of characteristic p>0, and let Ø= x denote the ring of formal power series with quotient field = x. In this case, several complications arise: The derivation on has := x^p as field of constants, hence the linear independence of solutions of a differential equation Ly=0 has to be taken over this field. As in characteristic zero we will need an abstract primitive of x; it will be again denoted by z, taken as a variable, and satisfying z=1/x. Note then that z^p will again be a constant, z^p=0. This implies that also z^p-1 has no primitive in Ø[z]. 
Now, when solving differential equations in characteristic p, one realizes that such a primitive is eventually needed: so one writes z_1 for z and introduces an extra variable z_2 with z_2=1 x·1 z_1. Continuing in this way one is led to introduce a countable set of variables z_1,z_2,…, abreviated by z, and related by the formal differentiation z_i=1 x·1 z_1⋯ z_i-1. This formula mimics the differentiation rule for the i-fold composition log(log(…(log(x))…) of the complex logarithm. All this suggests to work over the field x(z_1,z_2,…)=(z) of rational functions in z_i with formal Laurent series as coefficients. As it turns out, this field is still too small to solve differential equations in positive characteristic. One has to take instead the larger field (z_1,z_2,…) x=(z) x. Here, the coefficients of a monomial x^k may be rational functions whose numerators and denominators have arbitrarily large degree (this is not the case for ()). Finally, one has to take care of monomials x^ρ where ρ∈ is a local exponent of the operator. As ρ lies in an algebraic closure of , and thus p·ρ=0, the prospective module to be considered, namely, ^ρ:=x^ρ(z) x, would not be well defined: for k∈, the product x^ρ· x^k could be equally read as x^ρ+p· x^k-p=x^ρ· x^k-p, with ambiguity in the second factor. To avoid this nuisance we introduce a further variable t “playing the role of a new x” and define ^ρ:=t^ρ(z) x together with the derivation t = 1 x t as the relevant module. We will later define an even smaller subspace, restricting the powers of the variables z_i as coefficients of powers of x to obtain a more precise statement. Let be a field of positive characteristic p, and set Ø= x. Let L∈Ø[] be a linear differential operator with coefficients in Ø and let _0 denote its initial form. For every local exponent ρ∈ of L, let ^ρ= t^ρ(z_1,z_2,…) x. There exists a linear automorphism u of ^ρ transforming the linear map on ^ρ induced by L into its initial form _0, i.e., such that ∘ u = _0:^ρ^ρ. We now pass on to the solutions of the associated differential equation Ly=0. If L∈Ø[] has initial form L_0, even constructing a basis of solutions of the “Euler equation” L_0y=0 is not obvious in positive characteristic. To do so, one has to specify first the field of constants in =⊕_ρ∈^ρ=⊕_ρ∈ t^ρ(z) x . The field of constants of is 𝒞:=⊕_ ρ∈_pt^ρ x^p-ρ(^p) x^p, where _p denotes the prime field of . We then have Let L_0=∑_i-j=τ c_ijx^i^j be an Euler operator. For any root ρ of χ_L_0 in , denote by m_ρ its multiplicity. A basis of solutions of L_0y=0 over the ring of constants 𝒞 in is given by y_ρ,i= t^ρ^i^* =t^ρ z_1^i z_2^⌊i/p^2⌋ z_3^⌊i/p^3⌋⋯, where i<m_ρ and i^*=(i,⌊i/p⌋, ⌊i/p^2⌋,⌊i/p^3⌋, …)∈^() is a string of integers with finitely many non-zero entries. In particular, the dimension of the solution space of L_0y=0 in over the constants is n= L_0. With this result in mind, the solutions of the general equation Ly=0 go along the same line as in Theorem <ref>, using now Theorem <ref> and Proposition <ref>. Let Ø, L, ρ and ^ρ be as in Theorem <ref>. Assume that L_0= L. Let t^ρ, t^ρ^1^*,…, t^ρ^(m_ρ-1)^*, for ρ of multiplicity m_ρ varying over the local exponents of L, be the canonical basis of solutions of _0y=0. Then u(t^ρ), u(t^ρ^1^*), …, u(t^ρ^(m_ρ-1)^*) form a basis of solutions over 𝒞 of y=0 in =⊕_ρ t^ρ(z) x. 
The solution of the exponential differential equation y'=y in characteristic 3, is given by exp_3 =1+x+2x^2+2x^3z_1+x^4(1+2z_1)+x^5z_1+2x^6z_1^2+x^7(1+2z_1+2z_1^2)+x^8(2+z_1^2)+ + x^9(2z_1+z_1^3z_2)+x^10(2+z_1+2z_1^2+z_1^3z_2)+…, see Example <ref> for other characteristics. Structure of the paper. Section <ref> starts with a review of univariate differential operators, the definition of their initial form, the indicial polynomial and the local exponents. We describe the solutions of Euler equations and introduce the differentiation of a differential operator with respect to the exponents (in the sense of Frobenius). Then, in Section <ref>, we construct the function space for equations in zero characteristic and then prove the respective normal form theorem, Theorem <ref>. This provides in Section <ref> the description of a full basis of solutions in case the origin is a regular singular point, see Theorem <ref>. For irregular singularities, we sketch in Section <ref> Merkl's algorithm of how to use the normal form theorem also in this case to obtain all solutions. The section also includes a brief discussion of the occurrence of apparent singularities and of Gevrey series in this context. Chapters <ref> and <ref> are devoted to positive characteristic. We start with the construction of primitives, the enhanced enlargement of function spaces, and the respective ring of constants (Sections <ref> and <ref>). These techniques are applied in section <ref> for solving Euler equations in characteristic p. Section <ref> contains the normal form theorem in positive characteristic, Theorem <ref>, together with its proof. This is then applied in section <ref> to construct the associated solutions of differential equations with regular singularity (now defined through the order condition on the coefficients as given by Fuchs' criterion in characteristic 0), in Theorem <ref>. With section <ref> we begin to look at concrete examples as are the exponential function and the logarithm in characteristic p. In section <ref> we study the case when only finitely many variables z_i are needed to solve the equations, and relate this to the nilpotence of the p-curvature as described by Dwork. Also we ask and answer the question when the differential equation has even polynomial solutions, thus generalizing a result of Honda. Section <ref> compares the two normal form theorems with Grothendieck's p-curvature conjecture as well as with Bézivin's conjecture. The delicacy lies in the fact that the algorithm provided by the normal form theorem in positive characteristic is not the reduction modulo p of the characteristic 0 algorithm. The difference is subtle, and we aim at highlighting the involved phenomena (some of them being of purely number theoretic flavor). The article concludes in section <ref> with the discussion of the integrality of the solutions, i.e., the question when the solutions of differential equations defined over have integer coefficients. § DIFFERENTIAL EQUATIONS IN CHARACTERISTIC ZERO §.§ Constructions with differential operators Singular differential equations. Let be given a linear ordinary differential equation Ly=p_n(x) y^(n)+p_n-1(x)y^(n-1)+… + p_1(x)y'+p_0(x)y=0, where L=p_n^n+p_n-1^n-1+… + p_1+p_0∈Ø[] is a differential operator. Here 𝒪 denotes denotes the ring of germs of holomorphic functions in one variable x at a given chosen singular point of L, say, the origin 0, or the ring of polynomials [x] or formal power series x over an arbitrary field of any characteristic. 
Moreover, =d/dx denotes the usual derivative with respect to x. Writing L=∑_j=0^n ∑_i=0^∞ c_ijx^i^j, the operator decomposes into a sum L=L_0+L_1+… +L_m+ … of homogeneous or Euler operators L_k =∑_i-j=τ_kc_ijx^i^j, where the shifts τ_0<τ_1<… of the operators L_k are ordered increasingly and all L_k are assumed to be non-zero. The term L_0 of smallest shift constitutes the initial form of L at 0, and τ:=τ_0 is called the shift of L at 0. Up to multiplying L with the monomial x^-τ we may assume (as we will do throughout) that L has shift τ=0; thus L_0=∑_i=0^n c_iix^i^i. The point x=0 is singular for L if at least one quotient p_i/p_n has a pole at 0 (otherwise, 0 is called non-singular or ordinary). It is a regular singularity (in the sense of Fuchs) if L_0 has again order n, i.e., if c_nn≠ 0. The indicial polynomial of L at 0 is defined as χ_L()=∑_i=0^n c_ii^ i =∑_i=0^n c_ii(-1)⋯ (-i+1). Here, ^ i denotes the falling factorial or Pochhammer symbol. Clearly, χ_L=χ_L_0, which we simply denote by χ_0. Its roots ρ in the algebraic closure of are the local exponents of L at 0, and m_ρ∈ will denote their multiplicity. (i) We can rewrite any differential operator in terms of δ:=x∂, the Euler derivative. The base change between x^n∂^n and δ is given by the Stirling numbers of the second kind S_n,k. This is readily verified using the recursion relation S_n+1,k=kS_n,k+S_n, k-1. This allows one to read off the indicial polynomial of an operator: If the initial form of an operator L is given by L_0=ϕ(δ) for some polynomial ϕ, then the indicial polynomial of the operator is χ_L=ϕ. (ii) The classical characteristic zero definition of a regular singular point of a differential equation using the growth of the local solutions cannot be translated to characteristic p. However, the equivalent characterization by Fuchs using the order of vanishing of the coefficients of the equation applies. We recall some basic facts from differential algebra. If (R,∂) is a differential ring (or field), a constant is an element r∈ R, such that ∂ r=0. The set of constants of R forms a subring (or subfield). A linear differential equation of order n has at most n linearly independent solutions in any differential field R over its field of constants. This is a simple corollary of Wronski's lemma, see <cit.>, p. 9, or <cit.>. A set of n linearly independent solutions is called a full basis of solutions of the equation in R. In particular, if L∈𝒪[∂] is a differential operator with holomorphic coefficients, then Ly=0 can only have n -linearly independent solutions in 𝒪(log(x)). From now on we stick to characteristic 0 and let 𝒪 be the ring of germs of holomorphic functions at 0. Euler equations. The solutions of Euler equations L_0y=0 are easy to find. They are of the form y_ρ,i=x^ρlog(x)^i, where ρ∈ is a local exponent and i varies between 0 and m_ρ-1. Here, x^ρ=exp(ρlog(x)) and log(x) may be considered either as a symbol subject to the differentiation rule x^ = x^-1 and log(x)= 1/x, or as a holomorphic function on _slit=_≥ 0 or on arbitrary simply connected open subsets of ^*={0}. Extensions of differential operators. The consideration of logarithms is best formalized by introducing a new variable z for log(x) <cit.>, <cit.>. To this end, equip the polynomial ring [] over the field =(Ø) of meromorphic functions at 0 with the -derivation : [] [], x = x=1, .6cm =x, (x^i^k) =(i+k)x^i-1^k-1. This turns [z] into a differential ring. It carries in addition the usual derivative _z with respect to z. 
The same definition applies to Ø x^ρ[z] for any ρ∈, taking x^ρ=ρ x^ρ-1. In 𝒦[z] every element has a primitive; it is the smallest extension of 𝒦 for which this holds true. Indeed, x^-1 has no primitive in 𝒦. The primitive of x^-1z^ℓ in 𝒦[z] is given by 1/ℓ+1z^ℓ+1, while the primitive of x^kz^ℓ for k≠ -1 is given by x^k+1p(z), where p is a polynomial of degree ℓ. Thus we may call 𝒦[z] the primitive closure of 𝒦. The j-fold composition ∘⋯∘ will be denoted by ^j. For a differential operator L=p_n^n+p_n-1^n-1+… + p_1+p_0∈Ø[] define its extension as the induced action on 𝒦[z], denoted by the same letter, :𝒦[z]→𝒦[z] If ρ∈ is a local exponent of L, we will likewise associate to the -linear map : x^ρ[z] x^ρ[z], x^ρ h(x) z^i↦(x^ρ h(x) z^i), called again the extension of L to x^ρ[z]. Whenever L has shift τ≥ 0 – as we will assume in the sequel – its extension sends Ø x^ρ[z] to Ø x^ρ[z] and thus defines a -linear map :Ø x^ρ[z]Ø x^ρ[z], x^ρ h(x) z^i↦(x^ρ h(x) z^i). The Leibniz rule gives Let L be an operator. Then, for ρ∈, h∈Ø, and i≥ 0, (x^ h(x)^i)_|=log(x)=L(x^ h(x)log(x)^i), where on the right hand side L acts via d/dx. In particular, the map Ø x^ρ []Ø x^ρ [log(x)] given by the evaluation z↦log(x) sends solutions of y=0 to solutions of Ly=0. The equation x^2y”+3xy'+1=0 with Euler operator L_0=x^2^2+3x+1 has indicial polynomial χ_0=ρ^ 2+3ρ^ 1+1=(+1)^2 with double root =-1. The solutions of L_0y=0 are y_1=x and y_2=xlog(x). The operator _0=x^2^2+3x+1 on Ø x[z] therefore has, as it should be, solutions x and x z. Indeed, _0(x)=L_0(x)=0, whereas (x z)=x^-2(-z+1) and ^2(x z)=(x^-2(-z+1))= -2 x^-3(-z+1) - x^-3=x^-3(2z-3) give _0(x z) = x (2z-3) +3 x (-z+1) +x z= 0. Function spaces. If L_0 is an Euler operator with exponents set Ω⊆ and if m_ρ denotes the multiplicity of ρ∈Ω, the -vector space _0=∑_ρ∈ΩØ x^ρ[z]_<m_ρ of polynomials in z of degree <m_ρ and with coefficients in Ø x^ρ is the correct space to look at for finding the solutions of the extended Euler equation _0y=0, since these are of the form x^ρ z^i, for ρ∈Ω and 0≤ i<m_ρ. The space _0 is, however, in general too small to contain the solutions of the extension y=0 if Ly=0 is a general equation with regular singularity and initial form L_0. A suitable enlargement of _0 is necessary. The method how to do this goes back to Fuchs, Frobenius, and Thomé; it requires some preparation. Differentiating differential operators. This technique first appears in the works of Frobenius. If s is another variable, write the j-th derivative of x^s=exp(slog(x)) as ^jx^s=s^ jx^s-j. Define then, for ℓ≥ 1, the ℓ-th derivative (^j)^(ℓ) of ^j as (^j)^(ℓ) x^s=(s^ j)^(ℓ)x^s-j, where (s^ j)^(ℓ) denotes the ℓ-th derivative of s^ j with respect to s. Clearly, (^j)^(ℓ)=0 for ℓ>j. Then, for a differential operator L=p_n^n+p_n-1^n-1+… + p_1+p_0 of order n, we get its ℓ-th derivative L^(ℓ) for ℓ≥ 1 as L^(ℓ)=p_n· (^n)^(ℓ)+p_n-1· (^n-1)^(ℓ)+… + p_1·()^(ℓ). This is no longer a differential operator; it is just a -linear map Ø t^ρØ x^ρ+τ, where τ is the shift of L. Remark. If we wish to work in arbitrary characteristic, it might be appropriate to define the ℓ-th derivative (^j)^[ℓ] of ^j differently as (^j)^[ℓ] x^t=1ℓ! (t^ j)^(ℓ)x^t-j=Δ^ℓ(t^ j)x^t-j, where Δ^ℓ(t^k) is defined as the divided or Hasse derivative of t^k, Δ^ℓ(t^k)=kℓ t^k-ℓ. The following facts are readily verified, cf. Lemmata <ref> and <ref> for similar results in positive characteristic. Let L always be a differential operator of order n and shift τ≥ 0. Let ρ∈ be arbitrary. 
The extension of L to Ø x^ρ[z] has expansion =L_x+L_x'_z +1 2! L”_x^2_z+… + 1 n!L^(n)_x_z^n, where the -linear maps L_x^(ℓ) act on Ø x^ρ while leaving all z^i invariant, and _z is the usual differentiation with respect to z. If L_0 is an Euler operator of order n with shift 0, indicial polynomial χ_0(s), and extension _0 to Ø x^ρ[z], then _0(x^^i)= x^·[χ_0()^i + χ_0'() i^i-1+1 2!χ_0”() i^ 2^i-2+… + 1 n!χ_0^(n)()i^ n^i-n]. The kernel of the extension _0 to _0=∑_ρ∈ΩØ x^ρ[z]_<m_ρ of an Euler operator L_0 with exponents ρ∈Ω⊆ of multiplicity m_ρ equals (_0)= ⊕_ρ∈Ω⊕_i=0^m_ρ-1 x^ρ z^i. A -basis of solutions of an Euler equation L_0y=0 is given by x^ρlog(x)^i, where ρ ranges over all local exponents of L_0 at 0 and 0≤ i <m_ρ, with m_ρ the multiplicity of ρ. (a) For the Euler operator L_0=x^2^2-3x+3 from before, with indicial polynomial χ_0(t)=(t+1)^2 and exponent ρ=-1 of multiplicity m_ρ=2, the extension _0=x^2^2+3x+1 to Ø x [z] has expansion _0(x^ρ z^i)=x^ρ[(ρ+1)^2z^i + 2(ρ+1)iz^i-1+2i(i-1)z^i-2] and kernel (_0)= x⊕ x z. (b) For the Euler operator L_0=x^3^3 -4x^2^2+9x-9 with indicial polynomial χ_0(t)=(t-1)(t-3)^2 and exponents 1 and 3 of multiplicity one and two, respectively, the extension _0=x^3^3 -4x^2^2+9x-9 to Ø x[z]⊕Ø x^3[z] has expansion _0(x^ρ z^i)= x^ρ[(ρ-1)(ρ-3)^2z^i + (3ρ-5)(ρ-3) iz^i-1+(6ρ -14)i^ 2z^i-2+ 6i^ 3z^i-3] and kernel (_0)= x⊕ x^3⊕ x^3 z. In order to apply the perturbation lemma <ref> below to an operator L acting on the space _0=∑_ρ∈ΩØ x^ρ[z]_<m_ρ one has to determine the image of the initial form _0 of . Write L=L_0-T. Assuming that L_0 has shift 0, it follows that T is an operator with shift >0, that is, it increases the order in x of elements of _0. Therefore, it sends _0 to _0 x=∑_ρ∈ΩØ x^ρ+1[z]_<m_ρ. One has no control about the precise image of : it can be equal to whole _0 x but it can also be much smaller. The perturbation lemma requires in any case the inclusion ()⊆(_0) of images. This would trivially hold if _0 were surjective onto _0 x. But this is not the case in general: it suffices to take L_0=x^2^2-x with local exponents σ =0 and ρ=2, both of multiplicity one. Then _0=Ø +Ø x^2=Ø and _0=L_0. The image of _0 under L_0 is L_0(_0)= x+Ø x^3⊊Ø x=_0 x, with a gap at x^2. However, if L=x^2^2-x-x=L_0-T, the operator T=x sends x∈_0 to x^2∉L_0(_0). So the perturbation lemma does not apply to this situation. The way out of this dilemma is a further enlargement of _0 to a carefully chosen function space containing _0. This enlargement will be explained in the next section. §.§ The normal form of differential operators When trying to lift, for an arbitrary operator L, the solutions x^ρlog(x)^k of L_0y=0 to solutions of Ly=0, two obstructions occur. First, ρ might be a multiple root of the indicial polynomial and logarithms already appear in the solutions of Ł_0y=0. Second, if ρ is not a maximal exponent of L modulo , that is, if ρ+k is again an exponent of L for some k>0, the lifting poses additional problems since higher powers of logarithms will occur among the solutions. We will approach and solve both problems simultaneously by using the extensions of operators L as defined above to appropriately chosen spaces for which the image of the action of _0 on equals x. In this situation, the perturbation lemma <ref> will apply to reduce : via a linear automorphism of to _0. Enlargement of function spaces. As was done already classically <cit.> p. 136 and 157, <cit.>, p. 362 and 364, <cit.>, p. 193, <cit.>, p. 
221, it is appropriate to partition the set of exponents of a linear differential operator L into sets Ω⊆ of exponents whose differences are integers and such that no exponent outside Ω has integer difference with an element of Ω. We list the elements of each Ω increasingly, ρ_1<ρ_2<⋯<ρ_r, where ρ_k < ρ_k+1 stands for ρ_k+1-ρ_k∈_>0; denote by m_k≥ 1 the respective multiplicity of ρ_k as a root of the indicial polynomial χ_0 of L at 0. Set n_k=m_1+⋯+m_k and n_0=0. To easen the notation, we omit in each ρ_k the reference to the respective set Ω={ρ_1,…,ρ_r}. Instead of _0^Ω=∑_k=1^r Ø x^ρ_k[]_<m_k we will now allow polynomials in z of larger degree < n_k and take the module ^Ω=∑_k=1^r Ø x^ρ_k[]_<n_k=⊕_k=1^r ⊕_i=n_k-1^n_k-1Ø x^ρ_kz^i =⊕_k=1^r-1⊕_i=0^n_k-1⊕ _σ=ρ_k^ρ_k+1-1 x^σz^i⊕⊕_i=0^n_r-1Ø x^ρ_rz^i, equipped with the derivation from before (see Figure <ref>). The two different direct sum decompositions of will become relevant in a moment. Then set =⊕_Ω^Ω, the sum varying over all sets Ω of exponents with integer difference. As each summand ⊕_i=n_k-1^n_k-1Ø x^ρ_kz^i of ^Ω has rank m_k, it follows that is free of rank n over Ø. We illustrate the construction of the space ℱ^Ω with an example. Let L=x^5∂^5-2x^4∂^4-2x^3∂^3+16x^2∂^2-16x∂-x. It has indicial polynomial χ(s)=s^2(s-2)(s-5)^2. Therefore the local exponents are given by ρ_1=0, ρ_2=2 and ρ_3=5 with multiplicities m_1=2, m_2=1 and m_3=2. All local exponents differ by integers and we set Ω={0,2,5} as well as n_1=2, n_2=3 and n_5=5. Then the space ℱ^Ω is given by ℱ^Ω=𝒪⊕𝒪z ⊕𝒪x^2z^2 ⊕𝒪x^5z^3 ⊕𝒪x^5z^4. The exponents (k,i) of monomials x^k z^i in ^Ω are depicted in Figure <ref>. This example will illustrate why local exponents with integer difference have to be treated in a separate and quite peculiar way. Assume that the Euler operator L_0 has just two local exponents σ and ρ of multiplicities m_σ and m_ρ, respectively, say Ω ={σ,ρ}. If ρ-σ∉, then = Ø x^σ[z]_<m_σ⊕Ø x^ρ[z]_<m_ρ; if ρ-σ∈, then = Ø x^σ[z]_<m_σ+ Ø x^ρ[z]_<m_σ+m_ρ= Ø x^σ[z]_<m_σ⊕Ø x^ρ z^m_σ[z]_<m_ρ. The extension _0 of L_0 to has kernel x^σ[z]_<m_σ⊕ x^ρ[z]_<m_ρ in the first case, and x^σ[z]_<m_σ⊕ x^ρ z^m_σ[z]_<m_ρ in the second case. The respective images of _0 are Ø x^σ+1[z]_<m_σ⊕Ø x^ρ+1[z]_<m_ρ and Ø x^σ+1[z]_<m_σ⊕Ø x^ρ+1z^m_σ[z]_<m_ρ, so they equal x in both cases. If we would have taken in the second case where ρ - σ∈ is integral the space =Ø x^σ[z]_<m_σ + Ø x^ρ[z]_<m_ρ, the image of _0 would have been ⊕_k=1^ρ-σ-1 x^σ+k[z]_<m_σ⊕ x^ρ [z]_<m_σ-m_ρ⊕Ø x^ρ+1[z]_<max(m_ρ, m_σ)⊊ x, which is strictly included in x. Here x^ρ [z]_<m_σ-m_ρ is to be read as 0 for m_σ≤ m_ρ. Indeed, x^ρ z^m_σ-1∈ x is not in the image of _0. This would cause serious obstructions when trying to see on as a (negligible) perturbation of _0, since the higher order terms of may produce images in whole x. So the Perturbation Lemma <ref> below would not apply. The example suggests to admit in powers of the logarithm, say, of z, which exceed the respective multiplicity of the local exponent ρ appearing in the factor x^ρ. The following lemma gives a precise answer of how to proceed; see Lemma <ref> for the corresponding result in characteristic p. Let L∈Ø[] be an Euler operator with shift τ=0. Denote by Ω={ρ_1,…,ρ_r} a set of increasingly ordered local exponents ρ_k of L with integer differences and multiplicities m_k. Set n_k=m_1+… +m_k and =^Ω =∑_k=1^r Ø x^ρ_k[]_<n_k. (a) The induced map :ℱ→ℱ has image ()= x=∑_k=1^r Ø x^ρ_k+1[]_<n_k. (b) Its kernel ()=⊕_k=1^r x^ρ_k[]_<m_k (cf. 
Lemma <ref>) has direct complement = ⊕_k=2^r ⊕_i=m_k^n_k-1 x^ρ_k z^i ⊕⊕_k=1^r-1⊕ _e=1^ρ_k+1-ρ_k-1⊕_i=0^n_k-1 x^ρ_k+ez^i⊕⊕_i=0^n_r-1Ø x^ρ_r+1z^i, in . Thus the restriction _| defines a linear isomorphism between and x. (a) We show first that sends into x. Recall from Lemma <ref> that (x^^i)= x^·[χ()^i + χ'() i^i-1+1 2!χ”() i^ 2^i-2+… + 1 n!χ^(n)()i^ n^i-n]. Therefore, as χ^(ℓ)(ρ_k)=0 for 0≤ℓ<m_k, and using that n_k-m_k=n_k-1 for k≥ 2, it follows that sends into ∑_k=1^r Ø x^ρ_k[]_<n_k-m_k= ∑_k=2^r Ø x^ρ_k[]_<n_k-1⊆∑_k=2^r Ø x^ρ_k-1+1[]_<n_k-1⊆ x. Here, we use that ρ_k-ρ_k-1≥ 1. This proves ()⊆ x. For the inverse inclusion ()⊇ x it suffices to check that all monomials x^σ z^i∈ x lie in the image, where σ=ρ_k+e for some k=1,…,r and e≥ 1, and where i<n_k. We distinguish two cases. (i) If σ∉Ω, proceed by induction on i. Let i=0. By Lemma <ref>, (x^σ) = L_x(x^σ)+∑_j=1^n 1 j!L_x^(j)_z^j(x^σ) = L_x(x^σ) = χ(σ) x^σ≠ 0, since σ is not a root of χ. So x^σ∈(). Let now i>0. Lemmata <ref> and <ref> yield (x^σ z^i) = L_x(x^σ z^i)+∑_j=1^n 1 j!L_x^(j)_z^j(x^σ z^i) = χ(σ) x^σ z^i+χ^(j)(σ)x^σ∑_j=1^n i^ j j! z^i-j. By the inductive hypothesis and using again that χ(σ)≠ 0 we end up with x^σ z^i∈(). (ii) If σ∈Ω, write σ =ρ_k for some 1≤ k≤ r. As x^σ z^i=x^ρ_kz^i∈ x and ρ_1<ρ_2<⋯ <ρ_r, we know that k≥ 2 and x^ρ_k z^i∉x·∑_ℓ=k^r Ø x^ρ_ℓ[]_<n_ℓ. Hence x^ρ_k z^i∈ x·∑_ℓ=1^k-1Ø x^ρ_ℓ[]_<n_ℓ. This implies in particular that 0≤ i<n_k-1, which will be used later on. We proceed by induction on i. Let i=0. By Lemma <ref>, (x^ρ_kz^m_k) =∑_j=0^m_k-11 j!L_x^(j)_z^j(x^ρ_kz^m_k) + 1 m_k!L_x^(m_k)_z^m_k(x^ρ_kz^m_k) + ∑_j=m_k+1^n 1 j!L_x^(j)_z^j(x^ρ_kz^m_k) =∑_j=0^m_k-1(m_k)^ j j!χ^(j)(ρ_k)x^ρ_kz^m_k-j + χ^(m_k)(ρ_k)x^ρ_k =χ^(m_k)(ρ_k)x^ρ_k. Here, the sum in the first summand in the last but one line is 0 since ρ_k is a root of χ of multiplicity m_k, and for the same reason, the second summand χ^(m_k)(ρ_k)x^ρ_k is non-zero. So x^σ = x^ρ_k∈(). Let now i>0 and consider x^σ z^i=x^ρ_kz^i∈ x. We will use that i < n_k-1 as observed above. Namely, this implies that m_k+i< m_k+n_k-1=n_k, so that x^ρ_kz^m_k+i is an element of . Let us apply to it. Similarly as in the case i=0 we get (x^ρ_kz^m_k+i) = ∑_j=0^m_k-11 j!L_x^(j)_z^j(x^ρ_kz^m_k+i) +1 m_k!L_x^(m_k)_z^m_k(x^ρ_kz^m_k+i) + + ∑_j=m_k+1^n 1 j!L_x^(j)_z^j(x^ρ_kz^m_k+i) =(m_k+i)^ m_k m_k!χ^(m_k)(ρ_k)x^ρ_kz^i +∑_j=m_k+1^n (m_k+i)^ j j!χ^(j)(ρ_k)x^ρ_kz^m_k+i-j. The sum appearing in the second summand of the last line belongs to () by the induction hypothesis since m_k+i-j<i. As χ^(m_k)(ρ_k)≠ 0, we end up with x^σ z^i=x^ρ_kz^i∈(). This proves the inverse inclusion ()⊇ x and hence assertion (a). (b) From the shape of and () as depicted in Figure <ref> one sees that is a direct complement of () in . Hence _| is automatically injective and ()=()= x. Here is the result from functional analysis required for the proof of the normal form theorems in characteristic 0 and p. If ℓ:F G is a continuous linear map between complete metric vector spaces which decomposes into ℓ=ℓ_0-t with (t)⊆(ℓ_0) and satisfies s(t(f))≤ C· f, 0<C<1, for a right inverse s:(ℓ_0) F of ℓ_0:F(ℓ_0) and all f∈ F, then u=_F-st is a continuous linear automorphism of F which transforms ℓ into ℓ_0 via ℓ u =ℓ_0. The prospective inverse of u is v=∑_k=0^∞ (st)^k. It is well defined and continuous because of the estimate for st(f) and the completeness of F. Hence u is an automorphism of F. From ℓ_0s=_(ℓ_0) it follows that ℓ_0 sℓ_0=ℓ_0. From (t)⊆(ℓ_0) one gets that the compositions st and sℓ are well defined and that ℓ_0sℓ=ℓ holds. 
Then ℓ_0 u = ℓ_0(_F-s t) = ℓ_0(_F-s (ℓ_0-ℓ)) =ℓ_0(_F-s ℓ_0+s ℓ) = ℓ_0-ℓ_0 s ℓ_0+ℓ_0 s ℓ = ℓ_0 sℓ =ℓ, as required. This proves the result. Let L∈Ø[] be a linear differential operator with holomorphic coefficients at 0, initial form L_0 and shift τ=0. Denote by Ω={ρ_1,…,ρ_r} a set of increasingly ordered local exponents ρ_k of L with integer differences and multiplicities m_k. Set n_k=m_1+… +m_k and =^Ω =∑_k=1^r Ø x^ρ_k[]_<n_k. Let ,_0 act on via x=1 and z=x as above. Assume that L has a regular singularity at 0. (a) The composition of the inverse (_0_|): x of _0_| with the inclusion ⊆ defines a right inverse _0: x of _0, again denoted by (_0_|). Let : x be the extension of T=L_0-L to . The map u=_ - _0∘: is a linear automorphism of , with inverse v=u=∑_k=0^∞ (∘)^k:. (b) The automorphism v of transforms into _0, ∘ v=_0. (c) If 0 is an arbitrary (i.e., regular or irregular) singularity of L, statements (a) and (b) hold true with Ø replaced by the ring Ø of formal power series over or over an arbitrary algebraically closed field K of characteristic 0. (b) [...adjust, including complements of kernels] A right inverse of _0 can be given by an operator _0: x of the form _0(x^σ z^i)= 1χ_0(σ)[x^ρ z^i - χ_0'(σ)iχ_0(σ)x^ρ z^i-1 - [1 2χ_0”(σ)i(i-1)χ_0(σ)-χ_0'(σ)^2iχ_0(σ)^2]x^ρ z^i-2- …]. Proof. As =⊕^Ω and , _0 and are direct sums of the respective restrictions ^Ω^Ω, it suffices to show the assertion for each single summand ^Ω. So fix one set Ω of exponents with integer differences, and write instead of ^Ω for convenience. In particular, our slight abuse of notation for the sets Ω in the statement of the theorem is thus avoided. Note first that u is well defined since the map T increases the order of series and thus sends into x. Once we show that _0((f))≤ C f holds for some 0<C<1 and all f∈, the perturbation lemma <ref> implies that u=_-_0∘ is a linear automorphism of with ∘ u = _0, proving assertions (a) to (c) of the theorem. The proof of the estimate is split into two parts, first for formal power series and then for convergent ones, and uses a different metric in each case. (i) Formal case: Denote by Ø =K x the formal power series ring over an arbitrary field K of characteristic 0, equipped with the metric d(f,g)=2^-_0(f-g), where _0 denotes the order of vanishing at 0. Let denote the induced Ø-modules =⊗_K Ø and write again for the extension to . As increases the order of series in Ø, while _0 and _0 do not decrease the order, it follows that also _0∘ increases the order. It thus satisfies the inequality _0((f))≤ C· f from the beginning, for some 0<C<1, having set f= d(f,0)=2^- f. Therefore the von Neumann series v=∑_j=0^∞ (_0∘)^j defines a -linear map v:. Then it is clear that v=u=(_-∘). So u and v are automorphisms, and ∘ v =_0 by the perturbation lemma. This proves assertion (c) as well as the formal version of (a). (ii) Convergent case: To prove the same thing inside Ø, denote by Ø_s the subring of germs of holomorphic functions h such that h_s <∞. Here, s>0 and ∑_k=0^∞ a_kx^k_s:=∑_k=0^∞a_ks^k. It is well known that the rings Ø_s are Banach spaces, and that Ø=⋃_s>0Ø_s <cit.>. For s>0 sufficiently small, u restricts to a linear map u_s on the induced Banach space _s=(∑_k=1^r Ø x^ρ_k[]_<n_k)_s. For the convergence of v_s it therefore suffices to prove that _0∘_s<1, where -_s denotes the operator norm of bounded linear maps _s_s. Once this is proven, v_s=u_s holds as before and shows that u_s and hence also u are linear isomorphisms. 
This argument provides a compact reformulation of Frobenius' proof for the convergence of solutions <cit.>, p. 218. The inequality _0∘_s<1 is equivalent to the existence of a constant 0<C<1 such that _0((x^ h(x)z^i))_s ≤ C·x^ h(x)z^i_s for all x^ h(x)z^i∈_s. This will imply in particular that (_0∘)(_s)⊆_s. We will treat the case where ρ is a maximal local exponent of L modulo and a simple root of χ_0. In this case, no extensions of operators are required, and we can work directly with L, S and T and =Ø x^ρ. For non-maximal exponents there occur notational complications which present, however, no substantially new difficulty. So we shall omit the general case. For h=∑_k=0^∞ a_kx^k∈Ø and writing L=∑_j=0^n p_j(x)^j with p_j=∑_i=0^∞ c_ijx^i we have T(x^ h) = -∑_i-j>0∑_k=0^∞ (+k)^ j c_ija_kx^+k+i-j, and, recalling that L_0 is assumed to have shift 0, S(T(x^ h))= -∑_i-j>0∑_k=0^∞(+k)^ jχ_L(+k+i-j) c_ija_kx^+k+i-j. As i-j>0, k≥ 0, and is maximal, no +k+i-j appearing in the denominator is a root of χ_L. Hence the ratio (+k)^ jχ_L(+k+i-j)= (+k)^ j∑_ℓ=0^n c_ℓ,ℓ(+k+i-j)^ℓ is well defined. But c_n,n≠ 0 since 0 is a regular singularity of L, and hence (+k+i-j)^ n appears in the denominator with non-zero coefficient. As j≤ n this ensures that the ratio remains bounded in absolute value, say ≤ c, as k tends to ∞. Taking norms on both sides of the above expression for S(T(x^ h)) yields, for s≤ 1 and h∈Ø_s, the estimate S(T(x^ h))_s≤ c∑_i-j>0∑_k=0^∞c_ija_ks^+k+i-j= c∑_i-j>0c_ijs^i-j∑_k=0^∞a_ks^+k But by assumption, p_j=∑_i=0^∞ c_ijx^i∈Ø_s for all 0<s≤ s_0 and all j=0,…,n. This implies in particular ∑_i>j^∞ c_ijx^i∈Ø_s and then, after division by x^j+1 and since i> j, that ∑_i>j^∞ c_ijx^i-(j+1)∈Ø_s. We get for all s≤ r that ∑_i-j>0c_ijs^i-j=s·∑_i-j>0c_ijs^i-(j+1)≤ c' s for some c'>0 independent of s. This inequality allows us to bound S(T(x^ h))_s from above by S(T(x^ h))_s≤ cc's∑_k=0^∞a_ks^+k=cc'sx^ h_s. Take now s_0>0 sufficiently small, say s_0≤min(1,r) and s_0< 1 cc', and get a constant 0<C<1 such that for 0<s≤ s_0 one has S(T(x^ h))_s≤ C·x^ h_s. This establishes S∘ T_s<1 on _s for 0<s≤ s_0. By the Perturbation Lemma <ref>, u_s=__s-S∘ T is an automorphism of _s with inverse v_s=∑_k(S∘ T)^k. This completes the proof of the theorem. §.§ Solutions of regular singular equations As a first consequence of the normal form theorem <ref> we recover the classical theorem of Fuchs from 1866 and 1868 about the local solutions of differential equations at regular singular points <cit.>, <cit.>. The statement was reorganized and further detailed by Thomé and Frobenius in a series of papers between 1872 and 1875 <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. See also <cit.> formula (9), p. 19. Let L∈Ø[] be a linear differential operator with holomorphic coefficients and regular singularity at 0. For each set Ω of local exponents of L with integer differences, let u_Ω:^Ω^Ω be the automorphism of assertion (a) of the normal form theorem. (a) Varying Ω, a -basis of local solutions of L y=0 at 0 is given by y_ρ,i(x)=u_Ω(x^ρ z^i)_| z=log(x), for ρ∈Ω a local exponent of L of multiplicity m_ρ, and 0≤ i< m_ρ. (b) Order the exponents in a chosen set Ω as ρ_1<… <ρ_r and write m_k for m_ρ_k. Set n_k=m_1+… + m_k. Each solution related to Ω is of the form, for 1≤ k≤ r and 0≤ i<m_k, y_ρ_k,i(x)=x^ρ_k[f_k,i+… +f_k,0log(x)^i]+∑_ℓ=k+1^r x^ρ_ℓ∑_j=n_ℓ-1^n_ℓ-1 h_k,i,j(x) log(x)^j, with holomorphic f_k,i and h_k,i,j in Ø, where f_k,0 has non-zero constant term. 
Let Ω be a set of local exponents of L at 0 with integer differences and consider the space ^Ω =∑_k=1^r Ø x^ρ_k[]_<n_k as in the statement of the normal form theorem. Extend L and L_0 to =⊕_Ω^Ω. By Lemma <ref>, a -basis of solutions of _0 is given by the monomials x^ρ z^i, 0≤ i ≤ m_ρ-1, where ρ is a local exponent of multiplicity m_ρ. By assertion (d) of the normal form theorem and since L and L_0 have the same order n, the pull-backs u(x^ρ z^i) form a -basis of solutions of y=0. Now Lemma <ref> gives the result. The coefficient functions f_k,i and h_ρ,i,j∈Ø of the solutions in assertion (b) of the theorem are related to each other. For instance, if ρ is a maximal exponent in Ω of multiplicity m_ρ, then y_ρ,0 =x^· g_0 y_ρ,1 =x^· [g_1+g_0 log(x)] … y_ρ,m_ρ-1 =x^· [g_m_ρ-1+ g_m_ρ-2log(x)+… + g_1 log(x)^m_ρ-2 + g_0 log(x)^m_ρ-1], with holomorphic g_0,…,g_m_ρ-1, where g_0 has non-zero constant term. (i) If L has exactly two exponents σ and ρ, with ρ-σ∈_>0 and of multiplicities m_σ and m_ρ, respectively, we get accordingly = x^σ[Ø⊕⋯⊕Ø z^m_σ -1] + x^ρ [Ø⊕⋯⊕Ø z^m_σ+m_ρ-1]. which we rewrite as = x^σ[Ø⊕⋯⊕Ø z^m_σ -1] ⊕ x^ρ [Ø z^m_σ⊕⋯⊕Ø z^m_σ+m_ρ-1]. A basis of solutions of Ly=0 are Ø-linear combinations y_σ,0 =x^σ· h_0+x^ρ g_0log(x)^m_σ y_σ,1 =x^σ· [h_1+h_0 log(x)]+x^ρlog(x)^m_σ[g_1+g_0log(x)] … y_σ,m_σ -1 =x^σ· [h_m_σ-1+ h_m_σ-2log(x)+⋯ + h_1 log(x)^m_σ-2 + h_0 log(x)^m_σ-1]+ +x^ρlog(x)^m_σ·[g_m_ρ-1+⋯+ g_0 log(x)^m_ρ -1] y_ρ,0 =x^· f_0 y_ρ,1 =x^· [f_1+f_0 log(x)] … y_ρ,m_ρ-1 =x^· [f_m_ρ-1+ f_m_ρ-2log(x)+… + f_1 log(x)^m_ρ-2 + f_0 log(x)^m_ρ-1] with holomorphic f_0,…,f_m_ρ-1, g_0,…,g_m_ρ-1, h_0,…,h_m_σ-1. (ii) The function e^xlog(x) satisfies the differential equation Ly=0 for L=x^2∂^2+(1-2x)x∂ + x(x-1). A basis of solutions is completed by e^x. The initial form of L is L_0= x^2∂^2+x∂. Consequently, the only local exponent of L is 0 with multiplicity 2. The basis of solution is, as expected, contained in 𝒪⊕𝒪z. §.§ Applications in characteristic zero Irregular singularities. Whenever the point 0 is an irregular singularity of a differential operator L∈Ø[] with holomorphic coefficients, i.e., when n_0= L_0< L=n, Theorem <ref> does not provide a basis of solutions of Ly=0, but only n_0 linearly independent solutions thereof. It is well known that the solutions which are missing for a full basis are more complicated and may have essential singularities <cit.>. More specifically, they are of the form y(x)=exp(q(x)) · x^ρ·[h_0(x)+h_1(x)log(x)+…+h_k(x)log^k(x)], where q∈(x) is a rational function, ρ∈ a local exponent of L, and h_i holomorphic <cit.>, Thm. 3, <cit.>, Chap. XVII, p. 417. Actually, one can even take for q a Laurent polynomial q(x) =∑_r∈_>0 c_r x^-r, with c_r∈, almost all c_r=0. It suffices to take here r>0 since summands with non-negative exponents produce holomorphic factors in y(x). In <cit.>, Nicholas Merkl describes an algorithm how to construct these solutions by reducing the differential equation Ly=0 to various differential equations Ly=0, all with regular singularity at 0, to apply then to these new equations the normal form theorem in characteristic 0, Theorem <ref>, to obtain their respective solutions as in <ref>. It then suffices to pull back these solutions to the original equation via the inverse conjugations. Doing this for all induced equations Ly=0, one eventually obtains a basis of solutions of Ly=0. This shows that the normal form theorem <ref> is applicable to all linear differential equations with holomorphic coefficients to construct their solutions. 
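In the regular singular case the construction is completely explicit: at a local exponent ρ which is maximal modulo ℤ and a simple root of the indicial polynomial, the series part of the solution is obtained by a coefficient recursion which is essentially the iteration v=∑_k (S∘ T)^k written out coefficientwise, as in the norm estimate of part (ii) of the convergence proof. The following minimal sketch is our own illustration of this; the encoding of L=∑ c_ij x^i∂^j as a dictionary {(i,j): c_ij} and the function names are our conventions, not notation of this text, and the routine assumes that L has shift 0 and that χ_L(ρ+m)≠ 0 for all m≥ 1.

# Sketch (ours): the Frobenius-type recursion behind v = sum_k (S T)^k at a maximal,
# simple local exponent rho, with exact rational arithmetic.
from fractions import Fraction

def falling(s, j):
    # falling factorial s (s-1) ... (s-j+1)
    out = Fraction(1)
    for t in range(j):
        out *= (s - t)
    return out

def chi(coeffs, s):
    # indicial polynomial: only the diagonal terms c_ii contribute
    return sum(Fraction(c) * falling(s, j) for (i, j), c in coeffs.items() if i == j)

def frobenius_series(coeffs, rho, terms):
    # y = x^rho * sum_m a_m x^m with a_0 = 1; each a_m is obtained from the earlier
    # coefficients by dividing by chi(rho + m), as in the convergence proof
    a = [Fraction(1)]
    for m in range(1, terms):
        rhs = -sum(Fraction(c) * falling(rho + k, j) * a[k]
                   for (i, j), c in coeffs.items() if i > j
                   for k in [m - (i - j)] if 0 <= k < m)
        a.append(rhs / chi(coeffs, rho + m))
    return a

# L = x^2 d^2 - x^2 d - x^3 d^2 (it annihilates -log(1-x)); local exponents 0 and 1
L = {(2, 2): 1, (2, 1): -1, (3, 2): -1}
print([str(c) for c in frobenius_series(L, 1, 6)])   # ['1', '1/2', '1/3', '1/4', '1/5', '1/6']

For this hypothetical test operator the exponent ρ=1 is maximal and simple, and the recursion reproduces the coefficients of the solution x+x^2/2+x^3/3+… .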
In the irregular case, there is also a method to find the solutions using the Newton polygon of L: it is similar in substance, though more computational, see <cit.>, section 3.3, p. 90. We briefly sketch Merkl's algorithm: Let be given an operator L∈Ø[] of order n. Denote by δ=x the basic Euler operator, and define, for r∈_≥ 0 a positive rational number, the weighted operator δ_r= x^rδ. Here, x^r is considered either as a symbol or as a Puiseux monomial with (x^r)'=rx^r-1. Writing r=e/d with e,d∈, we may then expand formally L as a linear combination L=∑_j=0^n ∑_i=0^∞ c_ij x^i/dδ_r^j, with coefficients c_ijx^i/d. For each j, let i=i_j∈ be minimal with c_ij≠ 0 (we suppress here the reference to r). Then define the weighted initial form L_0,r and the weighted indicial polynomial χ_r of L with respect to r as L_0,r =∑_j=0^n c_i_jjδ_r^j∈[δ_r], χ_r =∑_j=0^n c_i_jj s^j∈[s]. For r=0 we just get the classical initial form L_0=L_0,0 and its indicial polynomial χ=χ_0 defined earlier. Note that for generic r, the polynomial χ_r will be a monomial and hence have the unique root 0 in . The interesting values of r occur when χ_r is at least a binomial and thus also has roots ≠ 0 in . These values of r correspond to the slopes of the Newton polygon of L and are also known as dicritical values or weights <cit.> section 3.3, p. 90. The dicritical weighted local exponents of L with respect to r are defined as the non-zero roots of χ_r in . We set Ω_r = {ρ∈, χ_r(ρ)=0}, Ω^*_r =Ω_r{0}= {ρ∈^*, χ_r(ρ)=0}. Here, Ω^*_r is non-empty if and only if r is dicritical for L. Merkl then proves The number of classical local exponents of L plus the number of dicritical weighted local exponents of L with respect to rational weights r>0, both counted with their multiplicities, equals n, the order of L. In the case of a regular singularity, all local exponents are classical and no weighted local exponents appear. So we will assume henceforth that there is at least one weighted local exponent ρ, for some dicritical r∈^*. Choose and then fix such a pair. After these preparations, the first step in the algorithm is to replace in the differential equation Ly=0 the variable y by exp(-ρ r x^-r)y=e^-ρ r x^-ry. This substitution corresponds to a conjugation of L with the multiplication operator given by the indicated exponential function. If we write L=∑_j=0^n a_j(x)δ_r^j the conjugated operator is, see <cit.> p. 13., given as L=∑_j=0^n(∑_k=j^n k jρ^k-ja_j(x))δ_r^j. It is then shown that the conjugation associated to a weighted local exponent ρ of weight r>0 translates the weighted local exponents of L by ρ, i.e., L has weighted local exponents σ-ρ with respect to r <cit.>, Prop. 3.10, p. 29. In particular, the original ρ becomes 0 and is thus no longer dicritical for L. Iterating this process of conjugation one arrives at a differential equation which has no dicritical weights at all. This is equivalent to saying that the final differential operator L^* has a regular singularity at 0. Thus the normal form theorem <ref> applies to L^* and produces as many linearly independent solutions of L^*y=0 as its order indicates, using Theorem <ref>. Tracing back the conjugations of L and varying the algorithm over all dicritical weights r and their weighted local exponents ρ, one ends up with a full basis of solutions of the original differential equation. This reproves in a constructive and systematic way Fabry's theorem about the existence and description of the solutions of irregular singular differential equations. 
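The decisive step in this reduction is the conjugation y=exp(q(x))·u with a Laurent polynomial q. The following sympy sketch is our own illustration, not part of the cited algorithm; it performs one such conjugation, with q=-1/x, for the operator L=x^3∂^2+(x^2-x)∂+1 of the example that follows. After the substitution, the exponential solution exp(-1/x) of L corresponds to the constant solution u=1 of the transformed equation.

# Sketch (ours, using sympy): one conjugation step y = exp(-1/x) * u.
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')

def L(y):
    # the operator of the example below: L = x^3 d^2 + (x^2 - x) d + 1
    return x**3*sp.diff(y, x, 2) + (x**2 - x)*sp.diff(y, x) + y

print(sp.simplify(L(sp.exp(-1/x))))   # 0, so exp(-1/x) solves Ly = 0

# substitute y = exp(-1/x)*u and strip the common exponential factor;
# mathematically the result is x**3*u'' + (x**2 + x)*u', whose constant
# solution u = 1 corresponds to the exponential solution of L
conjugated = sp.simplify(sp.expand(L(sp.exp(-1/x)*u(x)) * sp.exp(1/x)))
print(conjugated)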
The divergent series y(x) =∑_k=0^∞ k! x^{k+1} satisfies the second order equation Ly=x^3y''+(x^2-x)y'+y=0. The initial form of L at 0 is given by the first order operator L_0=-x∂+1. Hence 0 is an irregular singularity of L. The function z(x)= exp(-1/x) is a second solution of Ly=0; it is no longer a formal power series.

Apparent singularities. The formulas for the solutions of Ly=0 are somewhat complicated whenever the sets Ω of local exponents are not single valued. But if Ω={ρ} has just one element ρ, i.e., no other local exponent of L is congruent to ρ modulo ℤ, and if ρ has multiplicity m_ρ, the respective solutions are simpler, of the form, for 0≤ i<m_ρ, y_ρ,i(x)=x^ρ[f_i+… +f_0log(x)^i]. If some local exponents have multiplicity ≥ 2, logarithms are forced to appear. If all local exponents are simple roots of the indicial polynomial, it may happen that no logarithms appear in the solutions. This situation is known as the presence of apparent singularities. Let L∈Ø[] be a differential operator with holomorphic coefficients and regular singularity at 0. Assume that all local exponents are integers and simple roots of the indicial polynomial of L at 0, and write L=L_0-T with initial form L_0 of L. If (T)⊆(L_0) in Ø, the local solutions of Ly=0 at 0 are holomorphic functions. This is an immediate consequence of the proof of the normal form theorem, since in case (T)⊆(L_0) no extensions of the differential operators to larger function spaces involving logarithms are needed. As the local exponents are integral, the assertion follows from the description of the solutions.

Gevrey series. By a theorem of Maillet, every power series solution y(x) of an equation Ly=0 with holomorphic coefficients is a Gevrey series, i.e., there exists an m∈ℕ such that the m-th Borel transform y(x)=∑_k=0^∞ a_kx^k↦∑_k=0^∞ a_k/(k!)^m x^k of y(x) converges <cit.>. This result can also be seen as a consequence of the normal form theorem: It suffices to apply the norm estimates in part (ii) of the convergence proof to the series h(x) = ∑_k=0^∞ a_k/(k!)^m x^k with m=n-n', where n' denotes again the order of the initial form L_0 of L at 0. Exploiting this one proves that the composition of the automorphism v=u^{-1} of Ø with the m-th Borel transform sends the solutions x^ρ of L_0y=0, for ρ∈ℤ a local integer exponent of L, to a convergent power series x^ρ h(x). The key step is to see that the ratio (+k)^ jχ_L(+k+i-j)= (+k)^ j∑_ℓ=0^n c_ℓ,ℓ(+k+i-j)^ℓ will be replaced by (+k)^ jχ_L(+k+i-j)= (+k)^ j∑_ℓ=0^n' c_ℓ,ℓ(+k+i-j)^ℓ(k!)^m to obtain the required convergence. We omit the details.
In some sense this boils down to the fact that a primitive of z^p-1 cannot be expressed in terms of z^p, in fact, z^p-1 does not have a primitive in x[z] at all. (ii) Consider the Euler operator L=x^2∂^2+x∂+2. Solving Ly=0 in characteristic 0 we notice that the local exponents are given by √(2) and -√(2) and a basis of solutions is given by the “functions” x^-√(2) and x^√(2), which are defined in sectors without a branch cut of the logarithm around 0. In _7 x the monomials x^3 and x^4 are solutions of the equation. However, in _5 no square root of 2 exists and thus it is impossible to solve the Euler equation Ly=0 in _5 x[z]. In order to resolve this issue in positive characteristic, we will construct in the next section a differential extension of x(z) which will contain a full basis of solutions for any linear differential operator with regular singularity at 0. Regularity is again needed to ensure the existence of as many local exponents as the order of the differential equation indicates. The extension will overcome the two aforementioned difficulties: the lack of primitives of certain elements and the lack of solutions to Euler equations. §.§ The Euler-primitive closure ℛ of x For each ρ∈ let t^ρ be a symbol. It will play the role of the monomial x^ρ from before; if ρ lies in the prime field of we may substitute x for t to recover the classical setting. We will call ρ the exponent of t in t^ρ. Further, let ℛ= ⊕_ρ∈t^ρ(z_1, z_2, … ) x be the direct sum of Laurent series in x with coefficients in the field of rational functions over in countably many variables z_i, multiplied with the monomials t^ρ. We will simply write (z) and (z^p) instead of (z_1,z_2, …) and (z_1^p,z_2^p, …). We consider ℛ as a ring with respect to the obvious addition and the multiplication given by (t^ρ f)· (t^σg)=t^ρ+σ(f· g) for ρ, σ∈, f,g∈[] x. In other words, we form the group algebra of the additive group of over (z)((x)). We will write t^0=1 and accordingly have (t^ρ)^p =t^ρ p=1 and ℛ^p=(^p) x^p. Equip ℛ with the derivation =_R satisfying: x = 1, t = t 1/x, t^ρ=ρ t^ρ1/x, z_1=1/x, z_2=1/x1/z_1, z_k=1/x1/z_1⋯ z_k-1= z_k-1/z_k-1, k≥ 1. This turns ℛ into a differential ring. The action of ∂ on z_i is chosen to mimic the usual derivation of the i-fold composition log(… (log(x))…) of the complex logarithm with itself. Indeed we have, writing log^[i] for the i-fold repetition of the logarithm (log^[i](x))^'=1/x·log(x) ·log(log(x))⋯log^[i-1](x). Similar constructions with iterated logarithms in positive characteristic were already considered by Dwork <cit.>, p. 752. (i) The ring ℛ is not an integral domain. Indeed, (1+t+… +t^p-1)(1-t)=1-t^p=0. Thus we are not able to form its quotient field and use the machinery of differential fields, as e.g. Wronski's Lemma and the concept of a basis of solutions. Still, in the course of the next sections, we will be able to provide a precise description of the solutions of a differential equation Ly=0 in ℛ. (ii) The derivation leaves the summands of the direct sum ℛ invariant, i.e., one has (t^ρ() x)⊆ t^ρ() x. This is the reason for not simply defining ∂ t^ρ = ρ t^ρ-1 but rather t^ρ=ρ t^ρ1/x. (iii) Note that the elements of ℛ may have unbounded degree in each of the variables z_i, only the coefficient of a given power of x has finite degree. This differs from the situation in characteristic 0 where the exponent of the logarithm in a solution of the equation Ly=0 is bounded for each differential operator. 
(iv) The doubly iterated logarithm log(log(x)) of characteristic 0 does not satisfy any homogeneous linear differential equation with holomorphic coefficients, but only the non-linear equation xy”+y'+x(y')^2=0. Alternatively, it satisfies the inhomogeneous equation xlog(x)y'=1 in which the logarithm appears as a coefficient. In characteristic p this reads as x z_1z_2'=1. For elements of ℛ the exponents of x are integers, while the exponents of t are elements of the field (formally, t^ρ for ρ∈ is just a symbol). However, we will see that the exponents of x and t interact in a certain way. We will use the following convention: In case that ρ is in the prime field _p of , we may write x^ρ_ for x^ρ where ρ_∈{0,1,…, p-1} is a representative of ρ. Conversely we may write t^k_ for t^k for some k∈, where k_∈_p⊆ is the reduction of k modulo p. Before we proceed, we will determine the constants of ℛ. Denote by (^p) the subfield (z_1^p, z_2^p,…) of (). A simple computation shows that monomials of the form t^ρ z_i^p x^mp-ρ, for any ρ in the prime field _p of and any m∈, are annihilated by . And, actually, these monomials already yield the entire field of constants, namely, The ring of constants of (ℛ,) is 𝒞:=⊕_ ρ∈_pt^ρ x^p-ρ(^p) x^p, where _p denotes the prime field of . Moreover, 𝒞 is a field. Let f ∈⊕_ρ∈ t^ρ() x and assume that f=0. Taking derivatives in ℛ preserves the summands of the direct sum, so it suffices to find constants of the form t^ρ h for some ρ∈ and h∈() x. Fix some ρ∈. As for all k∈ the derivation maps t^ρ()x^k into t^ρ()x^k-1 by definition, it further suffices to find constants of the form t^ρ h x^k, where h∈(). Therefore we are reduced to search for elements t^ρ h x^k of ℛ with (t^ρ h x^k)=0. Write h=g_1/g_2 for g_1, g_2∈[]. Then (t^ρ h x^k)=0 is equivalent to (t^ρ g_1g_2^p-1 x^k)=0, as g_2^p is a constant. So without loss of generality we may assume that h∈[] is a polynomial. We expand: 0=(t^ρ h x^k)=t^ρ(( h) x + (k+ρ) h)x^k-1. Let l be minimal such that h∈[z_1,…, z_l]. Consider one monomial ^α=z_1^α_1⋯ z_l^α_l of h, whose exponent α is maximal among the monomials of h with respect to the component-wise ordering of ^l. Taking the derivative ∂ of a monomial in decreases the exponents of at least one of the z_i and does not increase the other. It therefore yields a sum of smaller monomials with respect to the chosen ordering. Thus, in xh the coefficient of z^α vanishes by the maximality of the exponent. If we compare coefficients of t^ρ x^k-1z^α in Equation (<ref>) we get k+ρ=0. So it follows that ρ∈_p and that k≡ρ p. Moreover, we see from Equation (<ref>) that h=0. This is clearly equivalent to h∈[ ^p]. Together with the reductions from above this proves that the ring of constants of ℛ is indeed ⊕_ ρ∈_pt^ρ x^p-ρ(^p) x^p. Finally, we show that 𝒞 is a field. Let f=∑_ρ∈_pt^ρ x^p-ρ f_ρ∈𝒞, where f_ρ∈( ^p) x^p. Then we have f^p=∑_ρ∈_pt^pρ x^p^2-pρ f_ρ^p=∑_ρ∈_p x^p^2-pρ f_ρ^p∈(^p) x^p, where f_ρ^p∈( ^p^2) x^p^2. The element f^p vanishes precisely if f_ρ vanishes for all ρ∈_p, as the exponents of x in each of the summands are from a different residue class modulo p^2. Thus, f^p is a unit for all f≠ 0 and we see that (f^p-1)(f^p)^-1 is an inverse to f. Example <ref> (cont.) Let us come back to Example <ref> (i) with k=p+1 and the operator L=(x∂)^p+1∈[x][∂]. In ℛ we have (x)^p+1(z_1^pz_2)=0. So we have found another solution to the equation Ly=0. 
This completes a basis of a p+1- dimensional vector space of solutions over the constants of ℛ, namely {1,z_1^1, z_1^2,… ,z_1^p-1, z_1^pz_2}, as those elements are 𝒞-linearly independent. For the Euler operator L=x^2∂^2+x∂+2∈_5[x][∂] from Example <ref> (ii) we can also find a basis of solutions in ℛ over 𝒞. It is given by the monomials t^ω and t^-ω, where ω∈_25 is a square root of 2. From what we have seen it is reasonable to call ℛ the Euler-primitive closure of x. §.§ Extensions of Euler operators to the ring ℛ Our goal now is to prove that Euler operators admit “enough” solutions in ℛ= ⊕_ρ∈t^ρ() x and then to compute these solutions. For this we first investigate how Euler operators act on monomials t^ρ z^α x^k, see Lemma <ref>. For a multi-index α∈^()={(α_i)_i∈|α_i=0 for almost all i} we write ^α for z_1^α_1⋯ z_n^α_n, if α_j=0 for j>n. We define a partial ordering on ^() by β≺_e α if e(β):=β_1+pβ_2+p^2β_3+…<α_1+pα_2+p^2α_3+…=:e(α), where β_i, α_i∈{0,1,…, p-1} are chosen such that β_i≡β_i p respectively α_i≡α_i p. In other words ≺_e is induced by the inverse lexicographic ordering on _p^() via the element-wise reduction modulo p of elements of ^(). We also write ^β≺_e ^α if β≺_e α. Let α∈^(). Then (x∂) z^α is a sum of monomials that are smaller than z^α with respect to ≺_e and there is exactly one summand z^γ with e(γ)=e(α)-1. In particular, e(α) is the minimal number j such that (x∂)^j(z^α)=0. Let α=(α_1,α_2,…). We compute: ^α=1/x∑_i=1^tα_iz_1^α_1-1z_2^α_2-1⋯ z_i^α_i-1z_i+1^α_i+1⋯ z_t^α_t_=: ^γ(i). If α_i≢0 p, then clearly γ(i)≺_e α, otherwise its coefficient in (x∂) ^α vanishes. A fast computation shows that if j is the least index, such that α_j≠ 0, then e(γ(j))=e(α)-1. Moreover, e(γ(j))<e(α)-1 for all other j. This proves in particular that e(α) is the minimal number j such that (x)^jz^α=0. Let s be a variable and k∈. We define the j-th Hasse derivative or divided derivative of s^k by (s^k)^[j]=kjs^k-j; extend it linearly to [s] <cit.>. We will apply it below to the indicial polynomial χ_L of an operator L, viewed as a polynomial in the variable s. The next three lemmata are, as in the case of characteristic zero, inspired by Frobenius' “differentiation with respect to local exponents” <cit.>. See Lemmata <ref> and <ref> for the corresponding results in characteristic zero. Let k,l∈. Then we have (s^ k)^[l]+(s^ k)^[l+1](s-k)=(s^ k+1)^[l+1]. Let j∈, k∈, α∈^(). Then we have ^j(t^s x^k ^α)=t^s x^k-j((s+k)^ j^α + ((s+k)^ j)^[1]x^α +… + ((s+k)^ j)^[j](x)^j ^α). The proof uses induction on j. For j=0 the claim is obvious. Assume now the formula holds for some j≥ 0. Applying yields ^j+1(t^s x^k ^α)=(t^s x^k-j((s+k)^ j^α + ((s+k)^ j)^[1]x^1^α +… + ((s+i)^ j)^[j](x)^j ^α)) =t^s x^k-j-1(s+k-j) ((s+k)^ j^α + ((s+k)^ j)^[1]x^α +… + ((s+k)^ j)^[j](x)^j ^α)+ + t^s x^k-j((s+k)^ j^α + ((s+k)^ j)^[1](x)^α+… + ((s+k)^ j)^[j](x)^j ^α) =t^s x^k-j-1((s+k-j)(s+k)^ j+ ((s+k-j)((s+k)^ j)^[1]+(s+k)^ j)x^α+…) =t^s x^k-j-1((s+k)^j+1^α + ((s+k)^j+1)^[1]x^α +… + ((s+k)^j+1)^[j+1](x)^j+1^α), where we have used the previous lemma in the last step. From this we get: Let L be an Euler operator of order n with indicial polynomial χ_L. Then for any α∈^(), k∈ and ρ∈ we have L(t^ρ x^k^α)=t^ρx^k(χ_L(ρ+k) ^α+χ_L'(ρ+k)x(^α)+… + χ_L^[n](ρ+k)(x)^n(^α)). For a field K of characteristic 0 a polynomial q∈ K[s] has a j-fold root at β∈K if and only if the first j-1 derivatives of q vanish in β, but the j-th derivative does not. 
This very statement is false in characteristic p, but if one replaces derivatives with Hasse derivatives it holds true. Let q∈[s] be a polynomial. Then a is an j-fold root of q if and only if q^[i](a)=0 for i<j, but q^[j](a)≠ 0. With these results we can finally solve Euler equations in the ring ℛ. We prove that, similar to the complex case in Lemma <ref>, the solutions form a vector space of dimension n over the constants 𝒞⊆ℛ. Let L be an Euler operator of order n and let Ω:={ρ_1,…, ρ_k} be the set of local exponents of L at 0 with multiplicities m_ρ_1,… ,m_ρ_k. The solutions of Ly=0 in form a 𝒞-subspace of dimension n. A basis is given by y_ρ, i:=t^ρ^i^*, ρ∈Ω, i<m_ρ, where i^*=(i, ⌊ i/p ⌋, ⌊ i/p^2 ⌋, ⌊ i/p^3 ⌋,…)∈^(). Before we prove the proposition let us consider an example. Consider the differential operator L=x^6∂^6 +x^4∂^4+x^3∂^3+x^2∂^2∈_2[x][∂] with indicial polynomial χ_L(s)=(s-1)^5s. As the operator has order 6 one expects 6 solutions of Ly=0, independent over 𝒞. The proposition asserts that a basis is given by 1, x, x z_1, x z_1^2z_2, x z_1^3z_2, x z_1^4z_2^2z_3. Indeed, one easily verifies that all these monomials are solutions and are 𝒞-linearly independent. The operator L is 𝒞-linear and maps t^ρ x^k( ) into itself. Therefore it suffices to find solutions of Ly=0 of the form t^ρ f() x^k, where f∈( ). Further we can argue similar as in Proposition <ref>: we write f=g_1/g_2 for g_1, g_2∈[]. If t^ρ f() x^k is a solution, then so is g_2()^p(t^ρ f() x^k)=t^ρ g_1()g_2()^p-1 x^k, as g_2^p∈[^p]⊆𝒞. So we may assume without loss of generality that 0≠ f∈[]. Let z^α be the largest monomial of f() with respect to the ordering ≺_e. By Lemma <ref> and the linearity of L we obtain L(t^ρ f() x^k)=t^ρ(χ_L(ρ+k)f()+χ_L^[1](ρ+k)(x)f()+… +χ_L^[n](ρ+k)(x)^nf()). Hence L(t^ρ f() x^k) vanishes if and only if χ_L(ρ+k)f()+χ_L^[1](ρ+k)(x)f()+… +χ_L^[n](ρ+k)(x)^nf() vanishes. We compare the coefficients of monomials in starting with the largest. All appearing monomials are smaller than or equal to z^α by Lemma <ref> and for all monomials ^γ in the summand χ_L(ρ+k)(x∂)^j we have e(γ)≤ e(α)-j. So in order for the sum to vanish, χ_L(ρ+k) has to vanish by comparing coefficients of ^α. Further, by comparing coefficients of the next smaller monomials, we obtain χ_L^[1](ρ+k)=0 or (x∂)z^α=0, i.e. e(α)=1. Inductively we obtain that the sum vanishes, if and only if χ_L^[ℓ](ρ+k)≠ 0 implies that e(α)<ℓ. Put differently, by Lemma <ref>, if ρ+k is a local exponent of L of multiplicity m_ρ+k, then e(α)<m_ρ+k. Thus we can give a complete description of the elements in the kernel of L. They are of the form t^ρ x^k z^α, where e(α)<m_ρ+k. A quick calculation using Lemma <ref> shows that the last condition is fulfilled for multi-indices, whose entries differ by multiples of p from i^* for i=0,…, m_ρ+k-1. This shows on the one hand that the elements y_ρ, i are indeed solutions of Ly=0. On the other hand, the elements y_ρ, i are chosen such that ρ ranges over all local exponent of L exactly once. For ρ+k to be a local exponent, i.e., a zero of χ_L, we may add multiples of p to k, or subtract an element of the prime field from ρ and add it to k. Those transformations can be realized by multiplying a solution t^ρ x^k f() by an element from 𝒞. So indeed, all solutions of Ly=0 are linear combinations of the elements y_ρ, i; that is they generate the 𝒞-vector space of solutions. Assume now that a 𝒞-linear relation between the solutions y_ρ, i exists. Let 𝒟:=⊕_ ρ∈_pt^ρ x^p-ρ[^p] x^p. 
As 𝒞= Quot𝒟, it suffices to consider a relation with coefficients in 𝒟. Let Ω=_j Ω_j be the set of all local exponents, where two local exponents ρ, σ are in the same subset Ω_j if and only if their difference is in the prime field. Assume that ∑_j ∑_ρ_∈Ω_j i < m_ρ y_ρ, i· d_ρ, i=0 for some d_ρ, i∈𝒟. As the exponents of t of elements of 𝒟 are in the prime field of , it follows that for each j the sum ∑_ρ_∈Ω_j i< m_ρ y_ρ, i· d_ρ, i vanishes. So it suffices to focus on relations between solutions corresponding to local exponents in the same set set Ω_j. Without loss of generality Ω_j=_p, the prime field of . We consider now a relation of the form ∑_ρ∈_p i<m_ρ y_ρ, i· d_ρ, i=0. Without loss of generality we may assume that at least one of the constants d_ρ, i has order 0 in x and let f_ρ, i∈[^p] be its constant term. Taking the coefficient of the monomial with smallest degree with respect to x in the sum above, we obtain a relation of the form ∑_ρ∈_p i<m_ρ t^ρ z^α_i· f_ρ, i=0. This sum vanishes if and only if the summand for each ρ∈_p vanishes. Furthermore the multi-exponents i^*=(i, ⌊ i/p⌋, ⌊ i/p^2⌋,…) are defined such that no two of them differ by multiples of p in every component. Thus f_ρ, i=0 for all ρ and i, as required. Finally, note that ∑_ρ∈Ωm_ρ=n, as χ_L is a polynomial of degree n. So the dimension of the space of solutions is indeed n. §.§ The normal form theorem in positive characteristic We have seen that a basis of solutions of Euler equations is of a very special form. It is not to be expected that solutions of general differential equations with regular singularities are similarly simple. In the following let ρ be a fixed local exponent of L at 0 of multiplicity m_ρ. We define a function ξ:→ ξ(0)=m_ρ, ξ(k+1)=ξ(k)+m_ρ + k +1, where m_ρ + j=0 if ρ + j is not a root of the indicial polynomial. In other words ξ(k)=m_ρ+m_ρ+1+… + m_ρ+k. Note here that if k>p the summand m_ρ appears at least twice in the sum. Moreover we define the ρ-function space ℱ=ℱ^ρ_L associated to L as ℱ^ρ_L:=t^ρ∑_k=0^∞⊕_α∈𝒜_k^α x^k, where 𝒜_k:={α∈^() | α_1<ξ(k), α_j+1≤α_j/p for all j∈} is a finite subset of ^(). Note that ℱ only depends on the initial form of the differential operator L, more precisely, only on the multiplicity of all local exponents of L that differ from ρ by an element of the prime field of . Consider the differential operator L=x^3∂ ^3+2x^2∂^2+L∈_3 x [∂], where L∈_3 x[∂] has positive shift. The local exponents of L are 0 with multiplicity 2 and 1 with multiplicity 1. The monomials in ℱ^0_L are depicted below in Figure <ref>. Let L∈ x [∂] be a linear differential operator and let ρ be one of its local exponents. The space ℱ^ρ_L=ℱ is invariant under all differential operators with non-negative shift. In particular we have Lℱ⊆ℱ. We can rewrite any differential operator with non-negative shift in terms of the operator δ=x∂ instead of ∂, where the base change between x^n∂^n and δ^n is given by the Stirling numbers, see Remark <ref>. So we investigate the action of δ on a monomial t^ρ x^i z^α∈ℱ, where α=(α_1,…, α_n)∈^(). We compute as in Lemma <ref> δ (t^ρ x^k z^α) = x(t^ρ x^k z^α)=t^ρ x^k( (k+ρ)z^α + ∑_j=1^nα_j z_1^α_1-1⋯ z_j^α_j-1z_j+1^α_j+1⋯ z_n^α_n). We want to show that all exponents of monomials with non-zero coefficient in the sum above are in 𝒜_k. It is clear that α∈𝒜_k by assumption, so it remains to prove that if α_j≢0 p then (α_1-1,…, α_j-1, α_j+1, …, α_n)∈𝒜_k for j=1,…, n. If α_l+1≤α_l/p then also α_l+1-1≤(α_l-1)/p for l<j. It remains to show that α_j+1> (α_j-1)/p implies α_j≡ 0 p. 
For this we see that from α_j-1< pα_j+1≤α_j it follows indeed that p divides α_j = pα_j+1. Let L_0∈ x[∂] be an Euler operator with local exponent ρ and associated ρ-function space ℱ. Then L_0(ℱ)=ℱ x. First we show that any monomial in ℱ gets mapped to ℱ x under L_0. Let t^ρ x^k z^α∈ℱ. By Lemma <ref> we have L_0(t^ρ x^k^α)=t^ρx^k(χ_L(ρ+k) ^α+χ_L'(ρ+k)x(^α)+… + χ_L^[n](ρ+k)(x)^n(^α)). By Lemma <ref> this expression is contained in ℱ. The first m_ρ+k summands of the sum vanish due to Lemma <ref>. In the remaining summands x is applied at least m_ρ+k times to ^α, decreasing the exponent of z_1 by at least m_ρ+k. Thus for each monomial with non-zero coefficient in χ_L^[m_ρ+k](ρ+k)(x)^m_ρ+k(^α)+… + χ_L^[n](ρ+k)(x)^n(^α) the exponents of are in 𝒜_k-1 and thus L_0(t^ρ x^k^α)∈ℱ x. Now we show that every monomial of ℱ x is in the image of ℱ under L_0. We proceed by induction on e(α)=α_1+pα_2+p^2α_3+… Let t^ρ x^k+1 z^α∈ℱ x; that is α∈𝒜_k. Assume that ρ+k+1 is an ℓ-fold root of χ_L, where ℓ is set equal to 0 if ρ+k+1 is not a root at all. We define an element β∈𝒜_k+1 such that L_0(t^ρ x^k+1^β)=t^ρ x^k+1^α+r, where r is a sum of smaller monomials with respect to ≺_e. Set β_1=α_1+ℓ, β_j=α_j+⌊β_j-1/p ⌋-⌊α_j-1/p⌋. As t^ρ x^k ^α∈ℱ, we have α_1<ξ(k) and therefore β_1=α_1+ℓ<ξ(k+1). Moreover, we know that α_j≤⌊α_j-1/p⌋ and therefore also β_j=α_j+⌊β_j-1/p⌋-⌊α_j-1/p⌋≤β_j-1/p. By construction we have β_1=α_1+l and thus β_1<ξ(k+1). Altogether this proves β∈𝒜_k+1. Finally we show that L_0(t^ρ x^k+1^β) is of the desired form. Again by Lemma <ref> we have L_0(t^ρ x^k+1^β)=t^ρx^k+1(χ_L(ρ+k+1) ^β +… + χ_L^[n](ρ+k+1)(x)^n(^β)) As ρ +k +1 has multiplicity ℓ as a zero of χ_L, the first ℓ summands of this expansion vanish, according to Lemma <ref>. If one expands the further summands using the Leibniz rule one gets a sum of monomials of the form c_γ t^ρ x^k+1^γ, with c_γ∈. The exponents γ are in 𝒜_k and by Lemma <ref> we have e(γ)≤ e(β)-ℓ=e(α). Only one of these summands fulfils e(γ)=e(α). It is of the form c_α t^ρ x^k+1^α by construction. Now by the induction hypothesis, all other summands are in the image of ℱ under L_0; they are in ℱ x because of Lemma <ref>. Thus, t^ρ x^k+1^α∈ L_0(ℱ), which concludes the proof. The proof of the surjectivity of L_0 is constructive: For each monomial t^ρ x^kz^α in ℱ x one constructs a monomial t^ρ x^kz^β in ℱ, such that L_0(t^ρ x^kz^β)=c t^ρ x^kz^α+r, where r is a sum of smaller monomials. If r=0 we divide by c and are done. Otherwise we iterate the construction for all monomials in r and subtract the monomials obtained this way from c^-1 t^ρ x^kz^β. After at most e(α) steps r=0 and we have constructed an element of ℱ which is sent to t^ρ x^kz^α by L_0. The kernel of the restriction of L_0 to ℱ=ℱ^ρ_L is topologically spanned over by monomials of the form t^ρ x^kz^α, where α∈𝒜_k={α∈^() | α_1<ξ(k), α_j+1≤α_j/p for all j∈} with e(α)<m_ρ+k. Consequently, a direct complement ℋ of L_0|_ℱ is topologically spanned by monomials of the form t^ρ x^kz^α, where α∈𝒜_k with e(α)≥ m_ρ+k. We have seen that e(α) is the least number k, such that (x∂)^k^α=0. So every monomial t^ρ x^kz^α with e(α)<m_ρ+k is in the kernel of L_0 according to Lemma <ref>. Arguing as in the proof of Proposition <ref> we see that those elements indeed span L_0|_ℱ. Now we are ready to state and prove the normal form theorem. Let be an algebraically closed field of characteristic p. Let L∈ x[∂] be a differential operator with initial form L_0 and shift τ=0 acting on ℛ= ⊕_ρ∈t^ρ(z) x. 
Let ρ be a local exponent of L at 0 and ℱ=t^ρ∑_k=0^∞⊕_α∈𝒜_k^α x^k⊂ the associated ρ-function space. * The map L_0|_ℋ: ℋ→ℱ x is bijective and the composition of its inverse (L_0|_ℋ)^-1:ℱ x→ℋ composed with the inclusion ℋ⊆ℱ defines a 𝒞-linear right inverse S:ℱ x→ℱ of L_0. * Let T=L_0-L:ℱ→ℱ x. Then the map u=_ℱ-S∘ T:ℱ→ℱ is a continuous 𝒞-linear automorphism of ℱ with inverse v=∑_k=0^∞ (S∘ T)^k:ℱ→ℱ. * The automorphism v of ℱ transforms L into L_0, i.e., L∘ v=L_0. For (i) note that by Proposition <ref> the map L_0:ℱ→ℱ x is surjective and thus the restriction to a direct complement of its kernel is bijective. Clearly S then defines a right inverse of L_0. One easily checks that the construction of preimages of L_0 mentioned in Remark <ref> is 𝒞-linear. The assertions (ii) and (iii) are an application of the Perturbation Lemma <ref>. We view elements of ℱ as power series in x and equip ℱ with the x-adic topology, which turns it into a complete metric space. The operator T has positive shift by definition and thus increases the order in x of a monomial t^ρ x^kz^α and thus of any element of ℱ. The operator S maintains the order in x of a monomial as L_0 does so. So the composition S∘ T increases the order of any element. Furthermore, T maps ℱ to ℱ x= (L_0). So we may apply the perturbation lemma and the claim follows. §.§ Solutions of regular singular equations The normal form theorem allows us to describe all solutions of differential equations with regular singularities. Let L∈ x [∂] be a linear differential operator with regular singularity at 0 acting on ℛ. Let ρ∈ be a local exponent of L. Denote by u_ρ:ℱ^ρ_L→ℱ^ρ_L the automorphism associated to ρ given in (ii) of the normal form theorem. The solutions of the differential equation Ly=0 in ℛ form an n-dimensional 𝒞-vector space. A basis is given by y_ρ,i=u_ρ^-1(t^ρ z^i^*), where ρ varies over the local exponents of L at 0 and 0≤ i< m_ρ, with i^*=(i, ⌊ i/p⌋, ⌊ i/p^2⌋,…). By the normal form theorem and the description of the solutions of Euler equations (Proposition <ref>), we have L(y_ρ,i)=L∘ u_ρ^-1(t^ρ z^i^*)=L_0(t^ρ z^i^*)=0, so these functions clearly are solutions of the differential equation Ly=0. Let now y be any solution of Ly=0. Again, as L commutes with the direct sum decomposition of ℛ=⊕_ρ∈t^ρ() x and upon multiplication with constants of the form x^kp we may assume that y is of the form y=t^ρ(∑_k=0^∞ f_k()x^k) for f_k∈(). If we write L=L_0-T we obtain L_0y-Ty=0, where T has positive shift, i.e., it strictly increases the order in x. Thus, t^ρ f_0() is a solution to the Euler equation Ly=0 and therefore t^ρ f_0()=∑_(σ, i)c_σ, it^σ^α_i, where σ varies over the local exponents, 0≤ i<m_σ, and c_σ, i∈𝒞 is homogeneous of order 0 in x. We compute L(y-∑_(σ, i)c_σ, iy_ρ, i)=L(-∑_(σ, i)c_σ, iu_σ^-1(t^σ^α_i))=-∑_(σ, i)c_σ, iL(u_σ^-1(t^ρ^i^*))=0. Note that for all f∈ℱ we have ord_x (f-u(f))> ord_x f, i.e., the monomial of order 0 remains unchanged under u. So y-∑_(σ, i)c_σ, iy_σ, i has positive order in x. Iteration yields constants d_σ, i∈𝒞 with y=∑_(σ, i)d_σ, iy_σ,i. Thus, y is a linear combination of y_σ, i. Conversely, any such linear combination is a solution of Ly=0. The linear independence of the solutions y_ρ, i can be reduced to the linear independence of the solutions of the Euler equation, which was proven in Proposition <ref>. This proves that the solutions of Ly=0 in ℛ form an n-dimensional 𝒞-vector space with basis y_ρ,i, where ρ varies over the local exponents and 0≤ i <m_ρ. 
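The leading monomials t^ρ z^{i^*} of the basis elements y_ρ,i can be checked by machine. The following sketch is our own; monomials x^k z_1^{a_1}z_2^{a_2}⋯ over 𝔽_p are encoded as a dictionary {(k,α): coefficient}, a convention of ours and not notation of this text. It applies the Euler operator L=x^6∂^6+x^4∂^4+x^3∂^3+x^2∂^2 over 𝔽_2 from the earlier example directly to the six claimed basis monomials — both local exponents lie in the prime field, so t may be replaced by x — and confirms that each of them is annihilated.

# Sketch (ours): applying an Euler operator over F_p to monomials x^k z1^a1 z2^a2 ...
p = 2

def d(poly):
    # d(x^k z^alpha) = k x^(k-1) z^alpha + x^(k-1) * sum_i alpha_i * (z^alpha with the
    # first i exponents lowered by one), with coefficients taken mod p
    out = {}
    def add(key, c):
        out[key] = (out.get(key, 0) + c) % p
    for (k, alpha), c in poly.items():
        add((k - 1, alpha), c * (k % p))
        for i, ai in enumerate(alpha):
            if ai % p:
                dec = tuple(a - 1 for a in alpha[:i + 1]) + alpha[i + 1:]
                add((k - 1, dec), c * ai)
    return {key: c for key, c in out.items() if c}

def apply_operator(coeffs, poly):
    # coeffs encodes L = sum_j c_j x^j d^j
    total = {}
    for j, cj in coeffs.items():
        q = poly
        for _ in range(j):
            q = d(q)
        for (k, alpha), c in q.items():
            key = (k + j, alpha)
            total[key] = (total.get(key, 0) + cj * c) % p
    return {key: c for key, c in total.items() if c}

L = {6: 1, 4: 1, 3: 1, 2: 1}   # x^6 d^6 + x^4 d^4 + x^3 d^3 + x^2 d^2 over F_2
basis = [(0, ()), (1, ()), (1, (1,)), (1, (2, 1)), (1, (3, 1)), (1, (4, 2, 1))]
for mono in basis:             # 1, x, x z1, x z1^2 z2, x z1^3 z2, x z1^4 z2^2 z3
    print(mono, apply_operator(L, {mono: 1}))   # every printed polynomial is empty, i.e. zero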
We have assumed for convenience that our field is algebraically closed. If this is not the case, e.g., in the case of a finite field _p, there is no need to pass to the entire algebraic closure. In the constructions involved in the normal form theorem for an operator L one has to find the roots of the characteristic polynomial χ_L∈[s], the local exponents ρ. Further we have to evaluate the characteristic polynomial at the values ρ+k for elements k of the prime field of . Thus, if χ_L splits over the normal form theorem works without problems within . Otherwise it is sufficient to pass to a splitting field of χ_L to describe a full basis of solutions. (i) The space ℛ provides us with n linearly independent solutions for any operator with a regular singularity at 0 in characteristic p. It is minimal in the following sense: we only introduce a new variable z_i whenever the algorithm constructing solutions forces us to do so, i.e., when we have to divide by p. It is possible to choose a system of representatives Λ⊆ of the set /_p of residue classes and to then define ℛ:=⊕_ρ∈Λt^ρ() x. It suffices to construct solutions in of any linear differential equation Ly=0 having a regular singularity at 0, similarly as above. For example, if σ∈ is a local exponent of an Euler operator and there is ρ∈Λ with ρ+k=σ for some σ∈_p and k∈_p, then t^ρ x^k is a solution of the equation Ly=0. This construction has the advantage that the constants are much simpler, as they are given by the elements of 𝒞_ℛ= (^p) x^p. However, this procedure requires a choice of a system of representatives of /_p. (ii) In characteristic 0 a minimal extension of x in which every regular singular equation has a full basis of solutions is the universal Picard-Vessiot ring or field for differential equations with regular singularities, discussed in <cit.>. § EXAMPLES AND APPLICATIONS IN CHARACTERISTIC P §.§ Examples [Exponential function in characteristic 3] We consider the equation y'=y. Solving over the holomorphic functions, or in x one obtains the exponential function as a solution. However there is no reduction of this function modulo any prime, as all prime numbers appear in the denominators of the expansion of the exponential function. But one can obtain solutions modulo p for any prime in ℛ using the normal form theorem. Pick for example p=3. Write L=x∂-x=δ-x, so our equation is equivalent to Ly=0. The only local exponent of the equation is 0, thus one needs to compute the series ∑_n=0^∞ (S∘ T)^n(1). The operator T is simply given by the multiplication by x, where S is, as constructed above, a right-inverse of L_0=x∂. One obtains: (S∘ T)^1(1) = S(x) = x, (S∘ T)^2(1) = S(x^2) = 2x^2, (S∘ T)^3(1) = S(2x^3) = 2x^3z_1, (S∘ T)^4(1) = S(2x^4z_1) = 2x^4z_1+x^4, (S∘ T)^5(1) = S(2x^5z_1+x^5) = x^5z_1, (S∘ T)^6(1) = S(x^6z_1) = 2x^6z_1^2, (S∘ T)^7(1) = S(2x^7z_1^2) = 2x^7z_1^2+2x^7z_1+x^7, (S∘ T)^8(1) = S(2x^8z_1^2+2x^8z_1+x^8) = x^8z_1^2+2x^8, (S∘ T)^9(1) = S(x^9z_1^2+2x^9) = x^9z_1^3z_2+2x^9z_1. One gets the solution 1+x+2x^2+2x^3z_1+x^4(1+2z_1)+x^5z_1+2x^6z_1^2+x^7(1+2z_1+2z_1^2)+x^8(2+z_1^2)+x^9(2z_1+z_1^3z_2)+…, which could be considered as the exponential function in characteristic 3. Note that obtaining the rightmost column needs some computational effort. One has to follow the steps described in Remark <ref>. There seems to be no obvious pattern in the coefficients of the obtained power series. Similarly, one can compute the exponential functions exp_p for other characteristics p. 
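Before doing so, the table above can be checked independently: the truncation of exp_3 computed there must satisfy ∂y=y up to the order of the truncation. The following sketch is ours; monomials x^k z_1^a z_2^b over 𝔽_3 are encoded as a dictionary, which is our convention and not notation of the text. It differentiates the partial sum with the rules ∂x=1, ∂z_1=1/x, ∂z_2=1/(xz_1) and verifies that ∂y-y contains no monomial of x-degree at most 8.

# Verification sketch (ours): the truncation of exp_3 computed above satisfies dy/dx = y
# up to the order of truncation. Encoding: {(k, a, b): coefficient} for x^k z1^a z2^b.
p = 3

y = {(0,0,0):1, (1,0,0):1, (2,0,0):2, (3,1,0):2, (4,0,0):1, (4,1,0):2,
     (5,1,0):1, (6,2,0):2, (7,0,0):1, (7,1,0):2, (7,2,0):2, (8,0,0):2,
     (8,2,0):1, (9,1,0):2, (9,3,1):1}

def d(poly):
    out = {}
    def add(key, c):
        out[key] = (out.get(key, 0) + c) % p
    for (k, a, b), c in poly.items():
        add((k-1, a, b), c*k)        # derivative of the x^k factor
        add((k-1, a-1, b), c*a)      # derivative of the z1^a factor
        add((k-1, a-1, b-1), c*b)    # derivative of the z2^b factor
    return {m: c for m, c in out.items() if c}

diff = d(y)
for m, c in y.items():
    diff[m] = (diff.get(m, 0) - c) % p
# only the unresolved terms coming from the x^9 part of y survive
print(sorted(m for m, c in diff.items() if c and m[0] <= 8))   # -> []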
For p=2 the first terms are 1+x+x^2z_1+x^3(z_1+1)+x^4(z_1^2z_2+z_1)+x^5z_1^2z_2+x^6(z_1^3z_2+z_1^3)+x^7(z_1^3z_2+z_1^2z_2+z_1^3+z_1+1)+… and for p=5 we get 1+x+3x^2+x^3+4x^4+4x^5z_1+x^6(4z_1+1)+x^7(2z_1+2)+x^8(4z_1+1)+x^9z_1+3x^{10}z_1^2+… The series exp_p seems to have some remarkable properties. Let us consider the constant term in z, i.e., exp_p(x,0,0,…). Computations suggest that y=exp_3(x,0,0,…) satisfies x^3y^3 + xy^2 - y + 1=0 and y=exp_5(x,0,0,…) satisfies x^{10}y^5 + x^6y^4 + x^4y^3 - x^3y^3 + 2x^2y^2 + 2xy^2 - 2y + 2=0, i.e., these series seem to be algebraic over _p(x). Similar observations were made for other characteristics as well. This motivates the following challenge: Let L∈_p[x][∂] be a differential operator and ρ∈_p a local exponent of L. Let u be the automorphism described in the normal form theorem in positive characteristic, Theorem <ref>. Determine the cases where (u^-1(x^ρ))_| z_i=0 is algebraic over _p(x). In the next example, the answer is immediate. We consider the minimal complex differential equation Ly=0 for y(x)=-log(1-x)=x+x^2/2+x^3/3+…∈ x. It is given by L=x^2∂^2-(x^2∂+x^3∂^2). The local exponents are 0,1 and a basis of solutions in x is given by {1, y}. Reducing L modulo a prime number p one again finds the local exponents 0,1. Clearly y_0,0=u_0^-1(1)=1. Further we compute y_1,0=u_1^-1(t^1)=∑_k=0^∞ (S∘ T)^k(t^1)= t(1+x/2+x^2/3+… +x^{p-2}/(p-1)+x^{p-1}z_1). Here only adjoining the variable z_1 instead of countably many z_i is necessary to obtain enough solutions. In the next section we will describe the class of operators for which the addition of finitely many of the variables z_i suffices.

§.§ Special cases

Equations with local exponents in the prime field. The situation becomes much easier if we consider a linear differential equation Ly=0 whose local exponents are all contained in the prime field _p⊆. In this case there is no need to introduce monomials t^ρ with exponents ρ∈. We define the differential subfield 𝒦 of ℛ as 𝒦:=() x. One easily checks that 𝒦 is indeed closed under the derivation _ℛ. Moreover, its field of constants is given by 𝒞_𝒦=(^p) x^p. The assumption on the local exponents allows one to modify the normal form theorem to use the function space 𝒢^ρ:=x^ρ∑_k=0^∞⊕_α∈𝒜_k^α x^k, instead of ℱ^ρ=t^ρ∑_k=0^∞⊕_α∈𝒜_k^α x^k, by “substituting t=x”, and analogously one obtains a full basis of solutions over 𝒞_𝒦 in 𝒦: for each local exponent ρ one computes u^-1(x^ρ) instead of u^-1(t^ρ), where u is the automorphism described in the normal form theorem.

Polynomial solutions. It is well-known that if a Laurent series solution y∈_p x to Ly=0 for an operator L∈_p[x][∂] with polynomial coefficients exists, then there already exists a polynomial solution to the equation, see <cit.> p. 174. We generalize the result to solutions involving only finitely many of the variables z_i. Let 𝕜 be a field of characteristic p. Let L∈𝕜[x][∂] be a differential operator with local exponent ρ∈𝕜. Let y∈ t^ρ𝕜[z_1,…, z_ℓ] x be a solution of the differential equation Ly=0 involving only finitely many of the variables z_i. Let c∈ℕ. Then there exists a polynomial q∈𝕜[x, z_1,…, z_ℓ], such that L (t^ρ q)=0 and y-t^ρ q∈ t^ρ x^{c+1}𝕜[z_1,…, z_ℓ] x. In particular, if a basis of power series solutions of Ly=0 in ⊕_ρ t^ρ𝕜[z_1, …, z_ℓ] x exists, then there already exists a basis of polynomial solutions in ⊕_ρ t^ρ𝕜[z_1, …, z_ℓ, x]. The proof of Honda can be easily adapted to this generalisation. However, we give a more conceptual proof.
We consider t^ρ[z_1,…, z_ℓ] x as a free [z_1^p,…, z_ℓ^p] x^p-module of rank p^ℓ+1 with basis 𝒢={t^ρ x^kz^α| k∈{0,1,…, p-1}, α∈{0,1,…, p-1}^ℓ}. Without loss of generality assume that ρ=0. We can write y(x)=∑_g∈𝒢y_g(z_1^p,…, z_ℓ^p, x^p)g with series y_g ∈[z_1,…, z_ℓ] x. Then Ly=∑_g∈𝒢y_g(z_1^p,…, z_ℓ^p, x^p)L(g)=0 implies that the series y_g(z_1^p,…, z_ℓ^p, x^p) form a [z_1^p,…, z_ℓ^p] x^p-linear relation between the polynomials L(g) in the finite free [z_1^p,…, z_ℓ^p, x^p]-module [z_1,…, z_ℓ, x] for g∈𝒢. By the flatness of [z_1^p,…, z_ℓ^p] x^p over [z_1^p,…, z_ℓ^p,x^p] there are polynomials q_g(z_1^p,…, z_ℓ^p, x^p)∈[z_1^p,…, z_ℓ^p, x^p] approximating y_g(z_1^p,…, z_ℓ^p, x^p) up to any prescribed degree and such that ∑_g∈𝒢q_g(z_1^p,…, z_ℓ^p, x^p)L(g)=0. Now set q(z_1,…, z_ℓ, x)=∑_g∈𝒢q_g(z_1^p,…, z_ℓ^p, x^p)g to get the required polynomial solution of Ly=0. (i) Assume that L∈[x][∂], where is a finite field of characteristic p with algebraic closure . Then if y∈ t^ρ x is a solution obtained by the normal form theorem, we already have y∈ t^ρ(ρ) x, where (ρ) is a finite extension of . Recall the operators S and T from the normal form theorem: S is a right inverse to L_0 and T=L-L_0. It holds that S(x^ρ+k+p)=x^pS(x^ρ +k) and T(x^ρ+k+p)=x^pT(x^ρ+k). There are only finitely many n-tuples of elements from (ρ). Write y=t^ρ(a_0+a_1x+a_2x^2+…). Two n-tuples of consecutive coefficients a_i of y, starting at powers of an index divisible by p, have to agree. Thus the sequence (a_i)_i∈ becomes periodic. Hence it suffices to take a suitable sufficiently large k to obtain a polynomial solution (1-x^kp)y of Ly=0, which approximates y to a prescribed degree c. (ii) The algorithm from the normal form theorem may, but need not, provide us with a polynomial solution of Ly=0 when applied to an operator L in _p[x][∂]. To see this consider the following two examples: (a) Let L=x∂-x^2∂-x and y_L(x)=1/(1-x) the solution of the equation Ly=0. Over _p we compute, using the algorithm from the normal form theorem with L_0=x∂ and T=x^2∂ +x, and obtain u^-1(1)=∑_k=0^∞ (S_L ∘ T_L)^k(1) = 1+x+x^2+… +x^p-1∈_p[x], a polynomial solution. (b) Let now M=(-x-2x^4)+(x+x^2-2x^4-x^5+x^7)∂. The equation My=0 is satisfied by the algebraic function (1+x)/(1-x^3). Reducing modulo 3 we get T=(x+2x^2∂)+(2x^4∂) + (2x^4+x^5∂)+ (2x^7∂)=T_1+T_3+T_4+T_6 and the initial form M_0=x∂. We compute the solution u^-1(1)=∑_i=0^∞ a_ix^i = ∑_i=0^∞ (S∘ T)^i(1)=1+x+x^4+x^7+x^10+… Because the maximal shift of T is 6 and (a_1, a_2, a_3, a_4, a_5, a_6)=(a_4, a_5, a_6, a_7, a_8, a_9), the sequence of coefficients of this series becomes periodic, as described in (i), with period length 3. Thus, the solution obtained by the normal form theorem in characteristic p agrees with the reduction modulo p of the solution obtained in characteristic 0. (iii) The latter of the two examples from above illustrates that the degree of a minimal degree polynomial solution of a differential equation in characteristic p need not be p-1, as one could expect. Indeed, using the periodicity of the coefficients of the solution from above, one obtains that y(x)= u^-1(1)-x^3u^-1(1)=1+x-x^3 is a polynomial solution. Any other polynomial solution has to be a multiple of y by a constant. Indeed, making the ansatz (1+x-x^3)·(1+c_1x^3+c_2x^6+⋯)=1+ax+bx^2 one immediately obtains c_1=1, which leads to a contradiction. Therefore no polynomial solution of degree less than 3 exists. The p-curvature. 
Let L be a differential operator. We define the p-curvature of L to be the action of multiplication by ∂^p on the space [x][∂]/[x][∂]L. Operators with nilpotent p-curavture. One class of operators with all local exponents in the prime field of turn out to be operators with nilpotent p-curvature. An alternative description of these operators was provided by Honda <cit.> p. 176: We say that an equation Ly=0 of order n has sufficiently many solutions in the weak sense if Ly=0 has one solution y_1∈ x and recursively the equation in u' of order n-1 obtained from Ly=0 by the ansatz y=y_1u has sufficiently many solutions in the weak sense. A linear differential operator L∈[x][∂] has nilpotent p-curvature if and only if the equation Ly=0 has sufficiently many solutions in the weak sense. Indeed, the following theorem holds: Let L∈[x][∂] be a differential operator with nilpotent p-curvature. Then its local exponents are in the prime field _p⊆. For a proof, see <cit.> p. 179. Further, there is another interesting characterisation of operators with nilpotent p-curvature due to Dwork <cit.>. They are exactly those operators, for which finitely many of the variables z_i suffice to obtain a full basis of solutions: An operator L∈[x][∂] has nilpotent p-curvature if and only if there is l∈ such that Ly=0 has a full basis of solutions in (z_1,…, z_l) x over its field of constants (z_1^p,…, z_l^p) x^p. This is a generalisation of a result of Honda, who proved the result for l=1 and operators of order smaller than p, see <cit.> p. 186. For example, the operator annihilating log(1-x), discussed in Example <ref>, has nilpotent p-curvature. Let L∈[x][∂] be an operator with nilpotent p-curvature. Then there is ℓ∈, such that there is a basis of polynomial solutions of Ly=0 in (x,z_1,…, z_ℓ). This immediately follows from Theorem <ref> and Lemma <ref>. §.§ The Grothendieck p-curvature conjecture We now turn to conjectures of Grothendieck-Katz, André, Bézivin, Christol, the Chudnovsky brothers, Matzat and van der Put about the algebraicity of solutions of linear differential equations with polynomial coefficients defined over <cit.>. The goal is to study them using the normal form theorems in characteristic 0 and p. It is a classical result, already known to Abel, that algebraic power series satisfy a linear differential equation with polynomial coefficients. The intriguing and meanwhile notorious problem is to characterize those differential equations which arise in this way, a question which appears over and over again in the literature (Abel, Riemann, Autonne, Fuchs, Frobenius, Schwarz, Beukers-Heckman, ...). In the previous section we have studied operators with nilpotent p-curvature. We want to study now operators L with vanishing p-curvature, i.e., L divides ∂^p from the right. The vanishing of the p-curvature of an operator can be described in terms of its solutions: Let L∈[x][∂] be a differential operator, where denotes a field of characteristic p. Then L admits a full basis of solutions over x^p in [x] if and only if the p-curvature of L vanishes. The original abstract formulation and a proof can be found in <cit.>, a more “down-to-earth” proof in <cit.>. Compare this result also to Corollary <ref>. In the following let L∈[x][∂] be a differential operator defined over and denote by L_p∈_p[x][∂] the differential operator that arises from reducing the coefficients of L modulo p, whenever this is defined. The reduction L_p is defined for all but finitely many prime numbers p. 
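To make this notion concrete: for a first order system y'=Ay the p-curvature can be computed by the classical recursion A_1=A, A_k+1=A_k'+A_k A, the p-curvature being the reduction of A_p modulo p. The following sympy sketch (our own illustration, not taken from the text) applies this standard recursion to the operator L=x∂-x^2∂-x of Example (a) above, i.e. to y'=y/(1-x), and confirms that its p-curvature vanishes, in agreement with the lemma just stated and with the polynomial solution 1+x+…+x^p-1 exhibited there.

import sympy as sp

x = sp.symbols('x')
p = 7  # any prime will do; this value is chosen only for illustration

# Example (a) above: L = x*d/dx - x^2*d/dx - x, i.e. y' = A(x)*y with
A = 1/(1 - x)

# Standard recursion for the p-curvature of the system y' = A*y:
# A_1 = A, A_{k+1} = A_k' + A_k*A; the p-curvature is A_p reduced modulo p.
Ap = A
for _ in range(p - 1):
    Ap = sp.cancel(sp.diff(Ap, x) + Ap*A)

num, _den = sp.fraction(Ap)   # one finds A_p = p!/(1 - x)^p
print(Ap)
print(all(c % p == 0 for c in sp.Poly(num, x).coeffs()))   # True: the p-curvature vanishes mod p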
We are interested in the interplay between solutions of the equations Ly=0 and L_py=0. Most prominent here is the Grothendieck p-curvature conjecture. We now give an elementary formulation of the Grothendieck p-curvature conjecture. In this formulation the p-curvature does not appear, however Cartier's Lemma <ref> establishes the connection. Let L∈[x][∂]. Assume that L_py=0 has a basis of _p x^p-linearly independent solutions in _p x for almost all prime numbers p. Then there exists a basis of -linearly independent algebraic solutions of Ly=0 in x. One can easily generalize this conjecture to number fields, by replacing with K=(α), for α an algebraic number, and _p by the residue fields of 𝒪_K modulo its prime ideals 𝔭. The case of order one equations is equivalent to a special case of a theorem of Kronecker (which, in turn, is a special case of Chebotarev's density theorem) <cit.>. Katz has proven the conjecture for Picard-Fuchs equations <cit.>. There have been recent and quite technical advances in the conjecture by various people, but the general case (even for order two equations) seems to still resist. Bost has established a more general variant of the conjecture for algebraic foliations and subgroups of Lie-groups <cit.>, <cit.>, Thm. 2.4. Progress was also made by Farb and Kisin<cit.> as well as Calegari, Dimitrov and Tang <cit.>. An apparently weaker statement than the Grothendieck conjecture was proposed by Bézivin. Let L∈[x][∂] be a differential operator. Assume that Ly=0 has a basis of -linearly independent solutions in x. Then these solutions are algebraic over (x). The validity of the Grothendieck p-curvature conjecture implies the validity of the Bézivin conjecture. In other words: The hypothesis of the Bézivin conjecture implies the hypothesis of the Grothendieck p-curvature conjecture. Assume that y∈ x is an integral solution of Ly=0. Its reduction modulo all prime numbers is well-defined and a solution to L_py=0. For p larger than the maximal difference of the local exponents of L, a basis of solutions of Ly=0 gets mapped by reduction to a basis of solution modulo p. The condition on p is necessary to ensure that the reductions of the solutions do not become linearly dependent over _p x^p. Thus by the Grothendieck p-curvature conjecture Ly=0 has a basis of algebraic solutions and y, as a linear combination of those algebraic solutions, is algebraic itself. A substantial advance towards the Grothendieck p-curvature conjecture would be to prove the inverse implication of Lemma <ref>: in fact it would transfer the problem from positive characteristic to characteristic 0. To approach the converse implication, it is reasonable to compare the algorithm of the normal form theorem in characteristic 0 applied to an operator L to the algorithm of the normal form theorem in characteristic p, applied to the reduction L_p of the operator L modulo p. We investigate in the next paragraphs how the normal form theorems could be used to achieve this. The problem which arises lies in the observation that the characteristic p algorithm does not entirely coincide with the reduction modulo p of the algorithm in zero characteristic. Very subtle disparities appear, and this makes it hard to deduce properties of the characteristic zero solutions from the characteristic p solutions, in particular, to prove their algebraicity. One hope is, however, to be able to compare the Grothendieck-Katz conjecture with the Bézivin conjecture. 
We will use the following number theoretic result: Let f∈[x] be a polynomial of degree n, let s∈ and n_1,…, n_s with n_1+…+n_s=n. The density of prime numbers p for which the reduction of f modulo p splits into k factors of degrees f_1,…, f_k is equal to the number of permutations of the roots of f in the Galois group of f consisting of s cycles of lengths f_1, … ,f_s. In particular, f splits into linear factors over [x] if and only if its reduction modulo p splits in _p[x] into linear factors for almost all primes p. This version was proven by Frobenius, while similar results were formulated by Kronecker before. It is also an easy corollary of the Chebotarev density theorem. We now describe consequences of the hypothesis of the Grothendieck p-curvature conjecture. They were already collected by Honda and we refer for parts of the proof to his article. However, for the last assertion we give a different proof. It compares the two algorithms obtained from the normal form theorems in characteristics 0 and p. m L u T S For an operator L∈[x][∂] in characteristic 0 and a fixed prime p we denote in the sequel by =L_p∈_p[x][∂] the reduction of L modulo p, whenever this reduction is defined. Let L∈[x][∂] be a differential operator with polynomial coefficients over . Assume that the induced equations y=0 modulo p have an _p x^p-basis of power series solutions in _p x, for almost all primes p. Then * The operator L has a regular singularity at 0. * The local exponents of L at 0 are pairwise distinct rational numbers ρ_i∈. * There exists a -basis of Puiseux series solutions y(x) of Ly=0 in ∑_ρ_ix^ρ_i x, where ρ_i ranges over the local exponents of L. In particular, this basis is independent of the variables t and z_i. For part (a) use <cit.>, Corollary p. 178, combined with Theorem. <ref> and Lemma <ref> from above. (b) See <cit.>, Thm. 2, p. 179, combined with Thms. <ref> and <ref> from above. We provide here a variant of Honda's proof. As a consequence of (a) there are n local exponents of L, counted with multiplicity, n= L. Moreover, for almost all primes p, the local exponents of have to be elements of the prime field _p. Indeed for any local exponent ρ∉_p we obtain using Theorem <ref> a solution of the form t^ρ f(x)∈ t^ρ x, contradicting the existence of a basis of n solutions of y=0 in _p x. It is then shown as in <cit.> that the local exponents of L are pairwise incongruent modulo almost all primes p. The indicial polynomial χ_L of L has coefficients in and its reduction modulo p splits into linear factors over _p for almost all primes p. Thus, by Theorem <ref>, χ_L splits into linear factors over . It follows that all local exponents of L are rational. Assume now that two local exponents are congruent modulo some p. Then their reduction modulo p is a local exponent of of multiplicity at least 2. So, Theorem <ref> together with the remarks in section <ref> upon avoiding the variable t, yield a solution of y=0 of the form u^-1(x^ρ z_1), where u is the automorphism of the normal form theorem in positive characteristic, Theorem <ref>. This solution now depends on z_1, contradictory to the assumption. This proves (b). (c) By Theorem <ref> a basis of solutions of Ly=0 lies in ∑_ρ_ix^ρ_i x[z], the sum varying over all local exponents ρ_i of L. It remains to prove that these solutions are independent of z. So assume the contrary: let f be a solution which depends on z. 
Without loss of generality we may assume that f=u^-1(x^ρ)=x^ρ(1+a_1(z)x+a_2(z)x^2+…) for some local exponent ρ∈ of L and some a_i∈[z]. Let ∈ be the first index where a_ depends on z. We will construct from f a solution g of y=0, for a suitable prime p, which involves z_1-terms which are not p-th powers. This will produce the required contradiction. The construction of g is, in fact, quite subtle. We have to run the two normal form algorithms for the construction of f and g simultaneously in characteristic 0 and p as long as no z appears in characteristic 0. At the moment when z occurs for the first time, say, in the computation of the coefficient a_ of f, a careful comparison ensures that z_1 shows up also in the expansion of g in the characteristic p algorithm. We choose the prime p subject to the following conditions: * p> n= L; * There is a basis of solutions of y=0 in _p x; * p does not divide any of the denominators of the local exponents of L; * p does not divide any of the denominators of the coefficients of a_1,…, a_k. Let Λ be the set of positive integers ℓ smaller than such that σ:=ρ+ℓ is a local exponent of . Here we write σ and ρ for elements in as well as for the representatives in {0,1,…, p-1} of their reduction modulo p. We define g= ^-1(x^ρ(1+∑_ℓ∈Λa_ℓx^ℓ))=x^ρ(1+b_1(t,z)x+b_2(t,z)x^2+…), where is the automorphism of 𝒢^ρ=x^ρ∑_k=0^∞⊕_α∈𝒜_k^α x^k from the normal form theorem, Theorem <ref>, compare again to the remarks on avoiding the variable t in Section <ref>. The additional summand ∑_ℓ∈Λa_ℓx^ℓ in the inner parenthesis of g is required to make f and g coincide up to degree -1. We will show that g is a solution of y=0 and that its coefficient b_ involves z_1. The first thing is easy since, by the normal form theorem <ref>, (g)=(_0∘)(g)=_0(x^ρ(1+∑_ℓ∈Λa_ℓx^ℓ))=0, for ρ+ℓ is a local exponent of _0 and hence _0(x^ρ+ℓ)=0. This proves g=0. Next we prove inductively that b_ℓ=a_ℓ for ℓ≤-1, i.e., that the expansion of g up to degree -1 equals the reduction of the respective expansion of f. This part is a bit computational. Write T=L-L_0 and =-_0 as earlier for the tails of L and . We expand T and as sums of Euler operators T=T_1+⋯+T_r, =_1+⋯+_r, Similarly, we define S and as the inverses S=(L_0|_ℋ)^-1 and =(_0|_ ℋ)^-1 of L_0 and _0, respectively, on direct complements of their kernels, as described in the normal form theorems, Theorems <ref> and <ref>. We now distinguish two cases. (i) Assume first that ρ+ℓ is not a local exponent of L. Rewriting the differential equations Ly=0 and y=0 as linear recursions for the coefficients of the prospective solutions we obtain, for ℓ≤ m-1, a_ℓ=S(∑_k=1^r T_k(a_ℓ -kx^ρ+ℓ -k)), and b_ℓ=(∑_k=1^r _k(b_ℓ -kx^ρ+ℓ -k)), where both sums in the parentheses are homogeneous of degree ρ+ℓ in x. By induction on ℓ we may assume that b_ℓ-k=a_ℓ-k equals the reduction of a_ℓ-k for all k=1,…,r. Hence this also holds for b_ℓ=_ℓ. (ii) Assume now that ρ+ℓ is a local exponent of L. Here, the formula for b_ℓ is different, by the very definition of g, b_ℓ=(∑_k=1^r_k(b_ℓ -kx^ρ+ℓ -k)) + a_ℓ. Now, as ρ+ℓ is a local exponent of L and hence also of , the image (x^ρ+ℓ) will involve z_1. Therefore, as b_ℓ does not involve z_1 by assumption, we get _k(b_ℓ -kx^ρ+ℓ -k)=0. Hence again b_ℓ=a_ℓ for all ℓ≤-1. This proves in both cases that g is the reduction of f modulo p up to degree -1. To finish the proof we will show that b_ involves z_1. This will produce the required contradiction. As a_ depends on z by assumption, ρ+ is necessarily a local exponent of L and thus of . 
Hence (x^ρ+) will depend on z_1. Recall that a_=S(∑_k=1^r T_k(a_ -kx^ρ+ -k)), and b_=(∑_k=1^r _k(b_-kx^ρ+-k)). From the already established equalities b_ℓ=a_ℓ for ℓ≤-1 it follows that _k(b_-kx^ρ+-k) is the reduction modulo p of T_k(a_ -kx^ρ+ -k). Now, if T_k(a_ -kx^ρ+ -k) were 0, its image a_ under S were zero, which is excluded by the choice of . So this term is non-zero. But then it suffices to choose p sufficiently large such that also the reduction _k(b_-kx^ρ+-k) is non-zero. Similarly as before we then get that b_ involves z_1, contradiction. We illustrate the crucial step in the proof of (c) by an example. The operator L=x^2∂^2-3x∂-3x-x^2-x^3 has the solution f(x)=u^-1(1)=1+a_1x+a_2x^2+… = 1-x+1/2x^2-1/2x^3-1/2x^4z+…, so a_4=-1/2z is the first coefficient which depends on z. Assume that there was a full basis of solutions in _3 x. The local exponents in characteristic 3 are 0 and 1, so Ω_4={1, 3}. We compute, using T=x^2+x^3 and L_0=x^2∂^2, the expansion of the following solution u_3^-1(1+2x+x^3)=1+2x+2x^2+x^3+…, which agrees with the reduction of f up to order 3. However, the next term in the expansion is S_3(x^4)=x^4z_1, so u_3^-1(1+2x+x^3)∉_3 x. §.§ Outlook If one wants to pursue the goal of proving the equivalence of the Grothendieck p-curvature conjecture and the Bézivin conjecture, number theoretic obstacles occur. A power series y(x)∈ x is called globally bounded if there is an integer N such that y(Nx)∈ x. In other words, there are only finitely many prime numbers p appearing in the denominators of the coefficients of y and they only grow geometrically. A theorem of Eisenstein <cit.> says that any algebraic power series is globally bounded. To prove that the validity of the Bézivin conjecture implies the validity of the Grothendieck p-curvature conjecture it suffices to show that for a linear differential equation Ly=0 whose reduction L_py=0 has a full basis of solutions in _p x the basis of solutions in characteristic 0 is globally bounded. For this it is natural to try to compare the algorithms from the normal form theorems in characteristic 0 and p further. Ideally, p would not appear in the denominators of solutions in characteristic p if and only if there is a basis of solutions in _p x of L_py=0, at least for almost all p. However, the situation is not as easy as one might hope, as the following two examples illustrate: (i) The first example shows that for finitely many primes it may happen that a full basis of solutions of the reduction of a linear differential equation modulo p exists, although p appears in the denominator of one of the solutions in characteristic 0. The solution of ∂ - nx^n-1 for n∈ is e^x^n, a power series where each prime number appears eventually in the denominators. However, for all prime numbers p dividing n, the reduction of the equation modulo p is an Euler equation having the solution 1∈_p x. As this can happen only for a finite number of primes, this does not contradict the Grothendieck p-curvature conjecture. (ii) The next example shows that to rule out the appearance of the prime factor p in the denominators of a solution of Ly=0 it is not sufficient to work on the level of individual solutions associated to a local exponent and its reduction. If possible at all, it has to take into account the existence of a full basis of solutions. The power series y(x)=∑_k=1^∞ a_kx^k=∑_k=1^∞k(k+2)/(k+1)x^k=3/2x+8/3x^2+15/4x^3+24/5x^4+…=log(1-x)/x+x/(x-1)^2 is annihilated by the third order operator L=x^3∂^3+4x^2∂^2+x∂-1-(x^4∂^3+8x^3∂^2+13x^2∂+3x). 
This operator L is hypergeometric, i.e., T=L-L_0 is an Euler operator with shift one. Moreover, y is annihilated by the second order operator M=3x^2∂^2+3x∂-3+(x^4-4x^3)∂^2+(3x-12x^2)∂+x^2-4x, which is not hypergeometric. The operator M is a right divisor of L, as one verifies that (-1/x-3x∂-1/x-3)M=L. Let us first concern ourselves with the operator L. Its local exponents are -1 with multiplicity two and 1 with multiplicity 1. We have y=3/2· u^-1(x), where u is the automorphism described in the normal form theorem in characteristic 0. Moreover we compute u^-1(x^-1)=x^-1 and u^-1(x^-1z)=x^-1z. Thus a basis of solutions of Ly=0 is given by y, x^-1 and x^-1log(x). For all prime numbers p the coefficient of x^p-2 in the expansion of y is divisible by p, while the denominators of a_1,…, a_p-2 are not. Thus y_p:=∑_k=1^p-2a_kx^k is well defined in characteristic p and a solution to the equation L_py=0. It is given as u^-1_p(x) where u_p is the automorphism defined in the normal form theorem in characteristic p. The series y is not algebraic, as it is not globally bounded. In fact any prime number p appears in the denominators of the coefficients a_i. However, the solution in characteristic p corresponding to the reduction of the local exponent 1 is a genuine power series. Other linearly independent solutions in characteristic p are x^-1 and x^-1z_1. We see that in neither characteristic there is a basis of power series solutions. Let us now turn to the operator M, which has local exponents -1 and 1 as well, both with multiplicity 1. A basis of solutions is given by x^-1 and y. This does not contradict the Grothendieck p-curvature conjecture, as y_p is not a solution of M. For L the construction was very dependent on the fact that the equation is hypergeometric, which is no longer the case for M. There still remain several questions about linear differential equations over fields with positive characteristic. For linear differential equations with holomorphic coefficients there is a criterion by Fuchs characterizing regular singular points of an operator L <cit.>. A point a∈ℙ^1_ is at most a regular singularity of L if and only if there is a local basis of solutions of Ly=0, which grows at most polynomially when approaching a. One would expect a similar criterion in characteristic p: an n-dimensional vector space of solutions in ℛ over the constants 𝒞 should suffice to conclude that 0 is a regular singular point of L. The needed framework could be provided by adapting the solution theory of linear differential equations with holomorphic coefficients and an irregular singularitiy at 0 of N. Merkl, described in section <ref> to positive characteristic. More precisely, this should allow the definition of a ring ℛ in which every linear differential equations with an irregular singularity at 0 in positive characteristic has a basis of solutions. The corresponding criterion in characteristic p should then read: A linear differential equations Ly=0 admits a basis of solutions in ℛ⊆ℛ if and only if 0 is a regular singularity of L Moreover, the solutions of differential equations in ℛ need to be better understood. For example one would expect some kind of pattern in the exponential function in positive characteristic discussed in Example <ref>. However, no such structure seems obvious. Also the question of the algebraicity of the constant term raised in Example <ref> and Problem <ref> deserves some attention and should be studied for the constant terms of solutions of any equation. 
In addition there is hope to extract information about the p-curvature of a linear differential operator L in positive characteristic from the description of a full basis of solutions in the differential extension ℛ of . In <cit.> Bostan, Caruso and Schost describe an algorithm to effectively compute the p-curvature of a differential operator. They work over the ring x^ dp of series of the form f=a_0+a_1γ_1(x)+a_2γ_2(x)+…. The elements γ_i(x) are formal variables, but should be thought of as x^i/i!. The multiplication on x^ dp is consequently given by γ_i(x)γ_j(x)= ((i+j) choose i) γ_i+j(x). In a suitable extension of x^ dp, accounting for local exponents outside the prime field, they construct a basis of solutions of Ly=0. From this basis they compute, passing to systems of first order equations, the matrix representation of the p-curvature. A similar program seems feasible working in ℛ instead of x^ dp. Finally, there remain, of course, the Grothendieck p-curvature conjecture and the Bézivin conjecture. As Example <ref> shows, the algorithms of the normal form theorems in characteristic p and 0 show some unexpected discrepancy. The hope that solutions of the reduction of differential operators are reductions of solutions of the operator seems to be unfounded. However, the phenomena shown require further investigation. University of Vienna, Faculty of Mathematics, Oskar-Morgenstern-Platz 1, 1090, Vienna, Austria Email: mailto:florian.fuernsinn@univie.ac.at Email: mailto:herwig.hauser@univie.ac.at
http://arxiv.org/abs/2307.03054v1
20230706151811
Multi-source imagery fusion using deep learning in a cloud computing platform
[ "Carlos Theran", "Michael Alvarez", "Emmanuel Arzuaga", "Heidy Sierra" ]
cs.DC
[ "cs.DC" ]
Multi-source imagery fusion using deep learning in a cloud computing platform Carlos Theran1, Michael Alvarez2, Emmanuel Arzuaga12 and Heidy Sierra1 Laboratory for Applied Remote Sensing, Imaging and Photonics 1Department of Computer Science & Engineering 2Department of Electrical & Computer Engineering University of Puerto Rico Mayaguez ========================================================================================================================================================================================================================================================================== Given the high availability of data collected by different remote sensing instruments, the data fusion of multi-spectral and hyperspectral images (HSI) is an important topic in remote sensing. In particular, super-resolution as a data fusion application using spatial and spectral domains is highly investigated because its fused images is used to improve the classification and tracking objects accuracy. On the other hand, the huge amount of data obtained by remote sensing instruments represent a key concern in terms of data storage, management and pre-processing. This paper proposes a Big Data Cloud platform using Hadoop and Spark to store, manages, and process remote sensing data. Also, a study over the parameter chunk size is presented to suggest the appropriate value for this parameter to download imagery data from Hadoop into a Spark application, based on the format of our data. We also developed an alternative approach based on Long Short Term Memory trained with different patch sizes for super-resolution image. This approach fuse hyperspectral and multispectral images. As a result, we obtain images with high-spatial and high-spectral resolution. The experimental results show that for a chunk size of 64k, an average of 3.5s was required to download data from Hadoop into a Spark application. The proposed model for super-resolution provides a structural similarity index of 0.98 and 0.907 for the used dataset. § INTRODUCTION Nowadays, many satellites that capture different information from objects have been developed to study Earth's surface features such as vegetation, rock formations, soil, water, snow, and human structures; generating huge amounts of data. For instance, the Airborne Imaging Spectrometer (AVIRIS) has been used in a large number of experiments and field campaigns <cit.>, the Hyperion instrument on board National Aeronautics and Space Administration (NASA)’s Earth Observing One (EO-1) spacecraft <cit.>, and Compact High Resolution Imaging Spectrometer (CHRIS) on ESA’s Proba-1 microsatellite <cit.>, among others. These satellites capture images daily in various domains, for example, images with high spatial resolution, high spectral resolution, or temporal resolutions. In this manner, remote sensing is defined as big data problem, following the 5Vs definition of Big Data (volume, variety, velocity, veracity, and value) <cit.>, carrying new challenges on data storage, data management, and data processing. In order to overcome the challenges related to data storage and management, researchers have proposed some parallel and distribute techniques using super-computers <cit.>. However, cloud computing technology has gained a lot more of attention due the advantage of commodity computer and storage devices. Its popularity has increased for the big pool of resources offered to users with low-cost, high-availability, scalability, storage, and computing power. 
For example, cloud computing platforms such as Google Cloud, Amazon Web Service, and Azure are provided by computer and software giant companies such as Google, Amazon, and Microsoft, respectively. The major advantages of cloud computing technology on big distributed data processing are: high reliability provided by fault tolerance mechanisms, scalability by virtualization technology, easy parallel programming, low cost on storage, and computing devices <cit.>. In addition, the data processing problem has been tackled by using machine learning techniques to analyze, discriminate and classify information from the given data. Thus, this paper proposes to use a cloud-based platform to manage remote sensing big data using Hadoop Distributed File System (HDFS) <cit.>. Hadoop splits data sets (files) into chunk and distributes them across commodity computers called nodes, which are connected to each other and work together as a single system. Then, to analyze and process the data, a special node called client node can be used to access this data using different SparkSQL queries or MapReduce functions. Spark is a well known big data tool used in cloud computing processing. It is composed of a set of records or subjects of specific types, in which the data is partitioned and distributed across multiple nodes in the cluster. The main property of Spark is the Resilient Distributed Dataset (RDD) has the ability to store the data in the memory of each node, and makes it possible to process the data in parallel <cit.>. In this work, a cloud computing platform for remote sensing storage, management, and processing is developed using Hadoop and Spark. Also, a different approach based on Long Short Term memory is implemented to fuse Hyperspectral and Multispectral images. As a result, images with high-spacial and high-spectral resolution is obtained. The proposed approach is trained using different patch sizes preventing the spatial loss, and intrinsically provides data augmentation strategy. We use the structural similarity index, and signal to noise ratio to evaluate the quality of the fused images. This paper is organized as follows: Section 2 provides the methodology use for cloud computing and the proposed fusion method, Section 3 presents the description of the data, Section 4 presents experimental procedure, Section 5 presents the experimental results and, section 6 conclusion. § METHODOLOGY This section provides the cloud computing platform's configuration details as a resource for management, storage, and processing data fusion task in remote sensing imaging. Moreover, this section describes how HSI and MSI are formatted to be supported by HDFS. In this manner, we can overcome the management and storage challenges in the imagery area. In addition, a new approach for HSI and MSI fusion based on Long Short Term Memory (LSTM) is presented. This fusion approach is used as a test base to provide the performance of the proposed cloud configuration. §.§ Hadoop Distributed File System HDFS is a particular distributed file system for efficient management and storage of massive structured and unstructured data. Also, It is designed to run on commodity hardware <cit.>. The popularity of HDFS is highly fault-tolerant and reliable access to the data throughout applications. Our cloud configuration consists of HDFS master/slave architecture. The Master or Namenode manages the operations related to the metadata and filesystem namespace and regulates access to files by clients. 
Such operations are closing, opening, and renaming files and directories. Meanwhile, the slaves or Datanodes are in charge of storing the chunks generated by the file splitting. In addition, It is responsible for reading and writes requests made by the client through the Namenode. Also, Datanodes perform chunks creation, deletion, and replication against a request from the NameNode. Our cloud configuration uses Apache YARN (Yet Another Resource Negotiator) that provides support for computing distributed paradigms, which was built focus on strong fault tolerance for massive, data-intensive computation <cit.>. YARN works through APIs that request and works with cluster resources; consequently, it is the Hadoop's cluster resource management system. YARN application can allocate resources in the cluster by making all of its requests upfront or dynamically requesting the resources. In our case, an upfront approach is adopted because our application is based on Apache Spark <cit.>, which is discussed later. Spark can be deployed in two different modes for running on YARN: YARN client mode, where the driver runs in the client, and YARN cluster mode, where the diver runs on the cluster in the YARN application master. Since we decide to have the opportunity to debug the output of our programs immediately, The YARN client mode was selected as a part of our cluster configuration. On the other hand, using YARN cluster mode, we are not able to have the interact component, such as spark-shell for scala or java applications and pyspark for python applications. Also, launching Spark application in client mode, the driver runs in the client process, and the application master is only used for requesting YARN resources. As a result, we are not overloading our cloud infrastructure if many clients request services on the cloud. §.§ Apache Spark Apache Spark is a cluster computing framework that introduces an abstraction called resilient distributed datasets (RDDs). An RDD is a read-only collection of objects partitioned across a set of machines that can be rebuilt if a partition is lost <cit.>. Spark is well known for its ability to load large working dataset in memory. Consequently, it can improve the performance time of our application. The literature has demonstrated that Spark is the most appropriate API to be integrated within Hadoop <cit.>. For example, Gu shows that Spark can achieve better performance in response time since the in-memory access over the cluster's distributed machines. As a result, Spark is faster than Hadoop in iterative operations, but a penalty in memory consumption is paid <cit.>. Also, applications running as standalone on Spark have demonstrated to be faster than using only Hadoop <cit.>.On the other hand, Spark can work with DataFrame as well. In this case, Spark uses a module named Spark SQL for structured data processing. DataFrame is a structured collection of data organized into named columns, constructed from structured data files, tables, external databases, or existing RDDs. It follow then, the proposed data fusion for super-resolution makes use of these two types of data structure. Mainly, This RDD and DataFrame are responsible for reading and write the HSI into our HDFS. Figure <ref> presents a our cloud infrastructure. §.§ Data Fusion for Super-resolution HSI Data fusion is becoming the preferred option to improve the data collected by multi-sources. As a result, we can achieve inferences that are not obtaining from a single source. 
Hence, satellite imagery has adopted data fusion techniques, exploiting the high availability of data in different resolution domains (spatial, spectral, and temporal) of the same spatial scene, so that an improvement over these resolution domains can be achieved. In particular, due to the high spectral content of HS images, data fusion for image enhancement has gained relevant attention in remote sensing <cit.>. However, typical hyperspectral sensors generate images with a high spectral resolution while sacrificing spatial resolution. Then, to overcome the lack of spatial resolution of the HSI, it is fused with the MSI, which provides high spatial resolution. This technique has gained relevant attention to generate images with high-spatial and high-spectral resolution (HSaHSe) <cit.>. In this paper we provide an alternative approach based on the well known LSTM <cit.> to fuse HSI and MSI. The key feature of the LSTM network is the inclusion of a self-loop that avoids the exploding and vanishing gradient problems present in other networks <cit.>. This network uses a set of parameters to control the information shared across the hidden layers; these parameters are defined as follows for a time t∈𝐍: f_i^(t) = σ( b_i^f + ∑_j U_i,j^f x_j^(t) + ∑_j W_i,j^f h_j^(t-1)), i_i^(t) = σ( b_i + ∑_j U_i,j x_j^(t) + ∑_j W_i,j h_j^(t-1)), ĉ_i^(t) = tanh( b^c_i + ∑_j U_i,j^c x_j^(t) + ∑_j W_i,j^c h_j^(t-1)), c_i^(t) = f^(t)_i × c_i^(t-1) + i^(t)_i ×ĉ_i^(t), o_i^(t) = σ( b^o_i + ∑_j U_i,j^o x_j^(t) + ∑_j W_i,j^o h_j^(t-1)), h_i^(t) = tanh(c_i^(t)) o_i^(t), where x^(t) is the current input vector, h^(t) is the output of the LSTM at the current hidden layer, and b^(f,c,o), U^(f,c,o), W^(f,c,o) are biases, input weights, and recurrent weights, respectively. The proposed approach uses the LSTM to propagate the learned spatial information through given sequences of different patch sizes, which are taken from the same spatial scene. The proposed approach is divided into three steps. First, a data preprocessing step is required to separate the load content and the spectral content of the HSI, which has a low spatial resolution; also, a quintic decimation is performed over the MSI, and the two results are concatenated to generate the output of the first step. Secondly, an LSTM model is trained using different block sizes to generate an image with high spatial resolution. Finally, the spectral content extracted in the first step is multiplied by the output of the LSTM. Figure <ref> shows the workflow of the super-resolution process fusing HSI and MSI. § DATA DESCRIPTION The experimental data used in this work consist of two data sets, the Indian Pines hyperspectral image and the Enrique Reef hyperspectral image. Indian Pines has the following characteristics. This image was gathered by the AVIRIS sensor and consists of 145× 145 pixels and 224 spectral bands in the wavelength range 400 to 2500 nm. The number of bands was reduced to 200 by removing high water absorption bands. This scene has 16 classes: Alfalfa, Corn-notill, Corn-mintill, Corn, Grass-pasture, Grass-trees, Grass-pasture-mowed, Hay-windrowed, Oats, Soybean-notill, Soybean-mintill, Soybean-clean, Wheat, Woods, Buildings-Grass-Trees-Drives, and Stone-Steel-Towers. See figure <ref>. The Enrique Reef hyperspectral image consists of 102 bands taken from the AISA Eagle sensor; this image was acquired in 2007. The spatial resolution of this data is 1 m. There are 6 classes: Mangrove, Deep water, Coral, Sand, Sea grass, and Flat reef. 
See figure <ref> These images are used to generate simulated data using the procedure presented in <cit.>. As a results, a low spatial resolution hyperspectral image is simulated by applying image decimation by a factor of 4 to the original dataset. Likewise, a high spatial resolution multispectral image is simulated from the original data set by averaging bands from different spectral ranges: 1) Blue: 445-516nm, 2) Green: 506-595nm, 3) Red: 632-698nm and 4) NIR = 757-853nm. §.§ Data Preprocessing Different transformations over the data presented in figure <ref> were required to be able to execute the schema presented in figure <ref> on a cloud environment. Spark is an engine that uses two different data types, RDD and spark.sql.DataFrame, As a consequence, we need to transform the data to be load as one of those two data types. Figure 5 describes the data transformation process to load the images into an spark application using spark.sql.DataFrame data type. § EXPERIMENTAL PROCEDURE The cloud computing environment for the presented experiments consists of one cluster configuration base on Hadoop and Spark. To provide the following results, we built a prototype that is scalable using more hardware resources. In our case, six virtual machines (VM) were created; five of these VMs are using 10 GB of storage and 6 G DDR4, and one has 10 GB and 16 G DDR4. The operating system is Ubuntu 16.04 LTS and each machine have 4 VCPU. In the following experiment, a variation of the chunk size occurs to analyze the time required to download the remote sensing data from Hadoop into an application using Spark. The tuning of this parameter allows us to identify the correct chunk size to improve the application's execution time. In <cit.> present a description of the chunk size for remote sensing images, but do not provide experimental results with real data set. The chunk size considers for these experiments are 4KB, 16K, 32K, 64K, 128k, 1M, 10M, and 100M. We are taking values below to the default in Hadoop 128M <cit.>, to avoid the needs of more memory for sort map task output, which can crash the Java Virtual Machines or add significant garbage collection overhead. On the other hand, the HSaHSe generated by our method presented in figure <ref> will be test using SSIM, and PSNR metrics <cit.>, these metrics have been used in recent publication in HS and MS fusion <cit.>: SSIM(x,y) = (2μ_xμ_y+c_1)(2σ_xy+c_2)/(μ_x^2+μ_y^2+c_1)(σ_x^2+σ_y^2+c_2) in equation (<ref>), μ_x and μ_y are the average of x and y, respectively. σ_x^2 and σ_y^2 are the variance of x and y respectively, and c_1=(k_1,L)^2, c_2=(k_2,L)^2, where k_1=0.01, k_2=0.03, and L is the dynamic range of the pixel-values. The PSNR is modeled by PSNR = 10log(R^2/RMSE) where RMSE is the well known Root Mean Square Error formula, and R is the maximum fluctuation in the input image. A high SSIM means, the images generated by the fusion of the HSI and MSI have high similarity to the reference images in structural information. Along the same line, PSNR compares the level of the desired signal to the level of background noise. Thus, a higher PSNR means that there is more useful content in the obtained data. The algorithm based on the schema in figure <ref> was implemented in python. For the training phase the packets required are Tensorflow 2.2, and Keras 2.0. § EXPERIMENTAL RESULTS This section presents the performance of our cloud infrastructure in terms of the time needed to download our files from Hadoop into our Spark application. 
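Before turning to the measurements, we note that the input simulation and the two quality metrics defined above can be sketched in a few lines of Python/NumPy. The snippet below is our own illustration and not code released with the paper: the band ranges follow the values given above, the SSIM is the global (single-window) form of the printed equation, the PSNR is written with the conventional mean squared error in the denominator (the text denotes it RMSE), and all function and variable names are ours.

import numpy as np

def simulate_lr_hsi(hsi, factor=4):
    # Low spatial resolution HSI: decimation by `factor` along both spatial axes.
    return hsi[::factor, ::factor, :]

def simulate_msi(hsi, wavelengths):
    # High spatial resolution MSI: average the bands falling in each spectral range (nm).
    ranges = {'blue': (445, 516), 'green': (506, 595), 'red': (632, 698), 'nir': (757, 853)}
    bands = []
    for lo, hi in ranges.values():
        sel = (wavelengths >= lo) & (wavelengths <= hi)
        bands.append(hsi[:, :, sel].mean(axis=2))
    return np.stack(bands, axis=2)

def ssim_global(x, y, L=1.0, k1=0.01, k2=0.03):
    # Global (single-window) SSIM, following the equation given above.
    c1, c2 = (k1*L)**2, (k2*L)**2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx)*(y - my)).mean()
    return ((2*mx*my + c1)*(2*cov + c2)) / ((mx**2 + my**2 + c1)*(vx + vy + c2))

def psnr(x, y, R=1.0):
    # Conventional form with the mean squared error; the text writes RMSE.
    mse = np.mean((x - y)**2)
    return 10*np.log10(R**2 / mse)

# Hypothetical usage on a (rows, cols, bands) cube scaled to [0, 1]:
# lr_hsi = simulate_lr_hsi(hsi); msi = simulate_msi(hsi, wavelengths)
# print(ssim_global(fused, reference), psnr(fused, reference))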
Also, the results of the proposed method for super-resolution fusing HSI and MSI are presented. §.§ Cloud Computing Metrics To take the best advantage of the cloud platform, we studied the download performance from Hadoop to a Spark application in terms of the chunk size parameter, in particular for the proposed data fusion application and using the data format presented in figure <ref>. The correct value of this parameter can provide an improvement in the execution time of the Spark application. In this experiment the set of chunk sizes described in section 4 was selected. Figures <ref> and <ref> show the time in seconds spent to download the data from Hadoop into a Spark application. Different performances were obtained for the two data sets; nevertheless, 64k can be considered a good parameter for both. Figure <ref> presents the results using the Indian Pines image, and figure <ref> shows the results for Enrique Reef. These results were computed as the average of 10 executions of the super-resolution Spark application. Figure <ref> shows that for 64k the running time is lowest, with 3.5s; this means that the acquisition of the data from Hadoop into the Spark application is fastest for this value. The standard deviation over the 90 executions was 0.58. Also, figure <ref> shows that for 64k the running time is lowest, with 3.7s. As a result, for 64k a fast acquisition of data from Hadoop is achieved. The standard deviation over the 90 executions was 0.50. §.§ Super-resolution HSI Results For the experiments, the following sets of patch sizes were selected to train our model to generate the HSaHSe image: Λ_1={16× 16, 14× 14,12× 12,10× 10}, Λ_2={12× 12, 10×10,8×8,6× 6}, Λ_3={8× 8, 6×6,4×4,2× 2}. For each set, the following numbers of patches were used to train the model in order to obtain 80% of the data for training. For Λ_1, 179 and 1772 patches were used for Indian Pines and Enrique Reef, respectively. For Λ_2, 272 and 3021 patches were used for Indian Pines and Enrique Reef, respectively. And 562 and 6750 patches were used for Indian Pines and Enrique Reef in Λ_3. Tables <ref> and <ref> provide the results of the proposed model for HSI and MSI fusion. Table <ref> presents the SSIM obtained in the best case for the set of patch sizes Λ_1. Using this set, the SSIM was 0.907 and the PSNR was 29.99. This means that the proposed method provides an excellent reconstruction of the HSI with a high spatial resolution and low noise in the images. In addition, figures <ref>, <ref>, and <ref> show, from left to right, the HSI at low spatial resolution, the HSI generated by the proposed model, and the reference image. From figures <ref>, <ref>, and <ref>, we can observe that the proposed method provides an excellent performance for HSI enhancement in the spatial content. Table <ref> presents the SSIM obtained in the best case for the set of patch sizes Λ_3. Using this set, the SSIM was 0.987 and the PSNR was 39.92. This means that the proposed method provides an excellent reconstruction of the HSI with a high spatial resolution and low noise in the images. Figures <ref>, <ref>, and <ref> show, from left to right, the HSI at low spatial resolution, the HSI generated by the proposed model, and the reference image. Figures <ref>, <ref>, and <ref> show the performance of the proposed method for the Enrique Reef data set, providing an excellent performance for HSI enhancement in the spatial content. § CONCLUSION In order to overcome the needs in the area of satellite imagery, a cloud computing environment was configured using Hadoop and Spark. 
As a result, we can store, manage, and process remote sensing data in a cloud platform with a high fault-tolerance mechanism. The chunk size parameter was studied to determine the best chunk size to get the most benefit from Hadoop and Spark; in particular, for our data, the best chunk size is 64K. In addition, a new HSI and MSI data transformation process was presented to perform data fusion techniques in the Spark environment. Also, a new approach for HSI and MSI fusion was presented. As a result, an HSaHSe image was generated with an SSIM of 0.907 and a PSNR of 29.97 for the Indian Pines data set, and an SSIM of 0.987 and a PSNR of 39.88 for the Enrique Reef data set. § ACKNOWLEDGMENT The authors would like to thank M.Sc. Alejandro E. Gonzalez for the deployment of the Virtual Machines where the cloud was built. This work is partially supported by NSF Grant No. OAC-1750970 and NSF Award No. OIA-1849243.
http://arxiv.org/abs/2307.01854v1
20230704180001
MOKA3D: An innovative approach to 3D gas kinematic modelling. I. Application to AGN ionized outflows
[ "C. Marconcini", "A. Marconi", "G. Cresci", "G. Venturi", "L. Ulivi", "F. Mannucci", "F. Belfiore", "G. Tozzi", "M. Ginolfi", "A. Marasco", "S. Carniani", "A. Amiri", "E. Di Teodoro", "M. Scialpi", "N. Tomicic", "M. Mingozzi", "M. Brazzini", "B. Moreschini" ]
astro-ph.GA
[ "astro-ph.GA" ]
I. Application to AGN ionized outflows Dipartimento di Fisica e Astronomia, Università degli Studi di Firenze, Via G. Sansone 1,I-50019, Sesto Fiorentino, Firenze, Italy cosimo.marconcini@unifi.it INAF - Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, I-50125, Firenze, Italy cosimo.marconcini@inaf.it Instituto de Astrofísica, Facultad de Física, Pontificia Universidad Católica de Chile, Casilla 306, Santiago 22, Chile European Southern Observatory, Karl-Schwarzschild-Str. 2, D-85748 Garching, Germany INAF, Padova Astronomical Observatory, Vicolo Osservatorio 5, 35122, Padova, Italy Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa, Italy Department of Physics, University of Arkansas, 226 Physics Building, 825 West Dickson Street, Fayetteville, AR 72701, USA Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA Studying the feedback process of Active Galactic Nuclei (AGN) requires characterising multiple kinematical components, such as rotating gas and stellar disks, outflows, inflows, and jets. To compare the observed properties with theoretical predictions of galaxy evolution and feedback models and to assess the mutual interaction and energy injection rate into the interstellar medium (ISM), one usually relies on simplified kinematic models. These models have several limitations, as they often do not take into account projection effects, beam smearing and the surface brightness distribution of the emitting medium. Here, we present , an innovative approach to model the 3D gas kinematics from integral field spectroscopy observations. In this first paper, we discuss its application to the case of AGN ionised outflows, whose observed clumpy emission and apparently irregular kinematics are only marginally accounted for by existing kinematical models. Unlike previous works, our model does not assume the surface brightness distribution of the gas, but exploits a novel procedure to derive it from the observations by reconstructing the 3D distribution of emitting clouds and providing accurate estimates of the spatially resolved outflow physical properties (e.g. mass rate, kinetic energy). As an example, we demonstrate the capabilities of our method by applying it to three nearby Seyfert-II galaxies observed with MUSE at the VLT and selected from the MAGNUM survey, showing that the complex kinematic features observed can be described by a conical outflow with a constant radial velocity field and a clumpy distribution of clouds. : An innovative approach to 3D gas kinematic modelling C. Marconcini 1,2 A. Marconi 1,2 G. Cresci 2 G. Venturi 2,3,6 L. Ulivi 1,2 F. Mannucci 2 F. Belfiore 2 G. Tozzi 1,2 M. Ginolfi 1,4 A. Marasco 5 S. Carniani 6 A. Amiri 1,2,7 E. Di Teodoro 1,2 M. Scialpi 1,2 N. Tomicic 1 M. Mingozzi 8 M. Brazzini 1,2 B. 
Moreschini 1 Received September 15, 1996; accepted July 4, 2023 ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= § INTRODUCTION Feedback models for galaxy evolution are mostly based on theoretical predictions, hydrodynamical simulations, and kinematical models based on observations <cit.>. Therefore, an in-depth comprehension of multi-phase gas kinematics in galaxies is crucial to determine the mutual interaction of ISM phases and shed light on energy injection and ejection mechanisms. Integral Field Unit (IFU) observations, together with kinematical models, allow to constrain with unprecedented detail and accuracy the 2D and 3D kinematics of gas disks in galaxies <cit.>. Despite the considerable advances in the modelling of gas kinematics in galaxy disks, galaxy-wide multi-phase outflows are still poorly constrained, due to their complex observed geometry, kinematics and surface brightness distribution <cit.>.The unified model of AGN predicts that radiation from the accretion disk around the supermassive black hole is collimated by a torus of obscuring dust and gas, which is broadly symmetric around the accretion flow axis <cit.>. This radiation impinges on the ISM, ionising the gas and transferring mechanical energy through radiation pressure and disk winds. Consequently, the outflowing gas extends outward from the nucleus assuming a typical (bi)conical geometry. These cones are more frequently observed in local Seyfert galaxies, especially in type-II (obscured) than type-I (unobscured) Seyferts, consistently with the unification model <cit.>. The best tracer of the ionized phase of these outflows on 10^2-10^3 pc scales is the forbidden emission line doublet [OIII]λλ4959,5007, since it can only be produced in low density regions, thus not tracing the sub-parsec scales of the Broad Line Region (BLR). In the presence of outflows, the spectral profile of the [OIII]λ5007 emission line typically assumes an asymmetric shape, with a broad, blue-shifted wing[A similar reasoning applies for the [NII]λ6584 emission line, a secondary useful tracer of the ionized emission in those sources, where a possible [OIII]λ5007 blue wing may be undetected due to dust obscuration.] <cit.>. A detailed study of gas kinematics and ionisation mechanisms in these cones is extremely important to shed light on the interaction between AGN and their host galaxies, and on the multi-phase nature of outflows <cit.>. AGN-driven outflows represent the main manifestation of radiative feedback in action, having the potential to sweep away the gas content of the host galaxy, thus quenching star formation inside the outflow cavity from low to high redshift <cit.>. 
As a result, they can shape galaxies and cause the well-known empirical relations between the black hole mass (M_ BH) and host galaxy properties, such as mass or luminosity of the bulge (M_ bulge and L_ bulge, respectively) and velocity dispersion (σ) <cit.>.Even though AGN-powered outflows are thought to play a key role during galaxy evolution, an accurate determination of outflow properties and driving mechanisms is still missing, due to the large uncertainties related with the estimated outflow properties in different gas phases <cit.>. In most sources, due to unresolved observations, it is unclear whether the outflow morphology is conical or shell-like, and it is largely debated what physical processes drive the coupling of the energy and of the momentum released by the AGN with the surrounding ISM. The most accepted model predicts that a nuclear highly ionized wind, accelerated by the BH radiation pressure, shocks the ISM of the host galaxy, creating an expanding bubble of hot gas, with the post-shock medium going through a momentum-conserving phase, followed by an adiabatically expanding phase <cit.>. To address these debated topics, many kinematical models have been proposed to analyse observations of galactic outflows, but up to date just a few have been tested with spatially resolved integral field spectroscopy (IFS) data (e.g. the AGN-outflow model of VLT/SINFONI data from <cit.>). Simpler models based on long-slit observations have been proposed to investigate the kinematics of the Narrow Line Region (NLR) in local Seyfert galaxies, modelling the position–velocity (PV) diagram <cit.>.<cit.> made a first comparison between the observed moment maps obtained from MUSE data of NGC 4945 and NGC 1365, and a kinematic toy model to investigate the outflow kinematics. This toy model is the starting point for our current kinematic model. Up to now, literature models, such as those mentioned above, are computed by adopting a conical outflow geometry, with velocity field and gas surface brightness of the emitting gas parameterized by analytical functions of the distance from the cone vertex. The velocity profile, at a given position on the sky is given by: f_P(v) = ∫_LOSΣ_P(s,v) ds where P is a given position on the sky, Σ_P(s,v) is the corresponding gas surface brightness distribution at a given velocity v and coordinate s along the line of sight (LOS). The best parameters are then found to reproduce f_P(v) for any given spaxel, or its average velocity and velocity dispersion. In few cases the model takes into account the finite spatial and spectral resolution by convolving the data with the proper smoothing kernels <cit.>. However, the assumption of a smooth function Σ(s,v) to describe the gas emissivity is in contrast with observations, which in nearby galaxies show clumpy and irregular structures. Therefore, an important question to be addressed is whether the complex velocity field observed in nearby sources - and never reproduced by previous studies - is the result of a complex velocity distribution within the propagating outflow, or the consequence of a clumpy distribution of ionized gas clouds.To address these debated questions, we propose an innovative approach to model gas kinematics and determine outflow properties, explaining improvements with respect to previous methods. 
In this paper, we present our new kinematic model (Modelling Outflows and Kinematics of Agn in 3D), discuss its main degeneracies and show examples of its application to three Seyfert II galaxies from the MAGNUM survey (Measuring Active Galactic Nuclei Under MUSE Microscope, ), NGC 4945, Circinus, and NGC 7582.The paper is constructed as follows. In Sect. <ref>, we describe step by step the 3D model operation. In Sect. <ref> we present the application of our method to simulated data. In Sect. <ref>, we introduce the sample to which we applied the model, describing the gas kinematic features, and the results for the kinematics and energetics obtained following the model application. In Sect. <ref>, we summarize our results and present future developments. § KINEMATIC MODEL In this section we present an overview of our model, describing the main steps to infer the kinematics and orientation of observed galactic structures. In Fig. <ref>, we show a schematic flowchart of the model operation and fitting, described in the following. §.§ Model set up We adopt a spherical coordinate system and create a uniform distribution of 10^7 point-like synthetic emitting sources (clouds, hereafter) distributed according to an input geometrical distribution chosen by the user (e.g. a cone when modeling outflows). We decide to adopt this number of clouds after considering the average MUSE emission line data cube size, that is ∼ 300×300×40. In particular, as discussed in Sect. <ref>, we need at least a few model cloud per voxel (volume pixel) to properly model the observed features. The spherical coordinate system is selected to model radial outflows, but the method works with any kind of reference system, such as cylindrical or Cartesian. The 3D spatial location of each cloud is specified by three coordinates: the semi-polar angle θ, measured from the cone axis (0^∘≤θ≤ 180^∘); the azimuthal angle ϕ of the cloud orthogonal projection on a reference plane passing through the origin and orthogonal to the axis (measured clockwise, 0^∘≤ϕ≤ 360^∘); and the radial distance r from the origin (in arcsec). We start by assuming the surface brightness distribution of the emitting clouds as follows: f(r) = f_ 0 e^-r/r_ 0, where f_ 0 is the flux value at the apex of the cone and r_ 0 is an arbitrary scale-radius. For what concerns the outflow velocity, we start by assuming it as constant with radius (V(r) = V_ 0). §.§ Reference frame We define the source reference system in cartesian coordinates as (X, Y, Z) = (sin(θ)cos(ϕ), sin(θ)sin(ϕ), cos(θ)). Therefore, the status of each cloud is specified by position and velocity vectors, namely, P⃗^ = (X, Y, Z) and V⃗^ = (V⃗_⃗X⃗^ , V⃗_⃗Y⃗^ , V⃗_⃗Z⃗^ ), with the velocity vector defined as: V⃗^ = v_ 0u⃗_⃗r⃗^ = v_ 0 (û_X, û_Y, û_Z) = v_ 0[ sin(θ)cos(ϕ); sin(θ)sin(ϕ); cos(θ) ] It is also possible to associate to each cloud a random velocity dispersion component σ⃗^ _ rand, but in this work we have not taken advantage of this possibility. Then, we consider the observer's frame (x,y,z), where (x,y) are the coordinates on the plane of the sky and z is directed along the LOS. The source and observer reference frame are shown in Fig. <ref>, as blue and red, respectively. We transform P⃗^ and V⃗^ from the source to the reference frame by means of Euler rotation matrices: [ x; y; z ] = R_γ R_β R_α[ X; Y; Z ] where R_γ, R_β, R_α are the Euler rotation matrices, with the rotation angles shown in Fig. 
<ref> and defined as: * α: Corresponds to the angular separation between the line of nodes and the right ascension coordinate in the observer reference system. The α angle is irrelevant if the observed source is axially symmetric around the source z-axis. * β: Corresponds to the outflow axis inclination with respect to the plane of the sky (e.g. β = 180^∘ in case of a source pointing away from the observer, with z-axis along the LOS; β = 90^∘ for a source axis lying on the plane of the sky). * γ: Corresponds to the projected source major axis inclination in the plane of the sky with respect to the line of nodes, indicating the rotation direction. Also known as Position Angle (P.A.). The velocity component along the LOS, in the observer reference frame, is defined as: v_z = V⃗^ ·n⃗^ + v_sys = V⃗^ ·[ 0; 0; -1 ] + v_sys with n⃗^ the unit vector pointing away from the observer, v_sys the outflow systemic velocity with respect to the galaxy. Once the clouds are projected, the model allows for both a spatial and spectral convolution, to account for observational effects. The observed position (x,y) of each cloud is randomly shifted on the plane of the sky according to a 2D probability distribution, given by the point spread function (PSF) measured from the data. If necessary, to account for the intrinsic spectral resolution, the observed LOS velocity can be similarly convolved with the line spread function (LSF), but in this work we have not taken advantage of this possibility. In the following, we will refer to the outcome of this step as the unweighted model. §.§ Unweighted model At this stage the unweighted model is binned in the observed (x,y,v_z) space, which is the one sampled in data cubes produced by IFU observations. The process is schematized in Fig. <ref>. The spatial and spectral extensions of the model cube are forced to match the spatial and spectral sampling of the data, respectively. A voxel of the obtained 3D model cube, with two spatial extensions and a spectral-velocity one, uniformly populated with clouds, is highlighted in black in Fig. <ref>. In this phase, all clouds in the same bin have a weight associated to our initially assumed radial-dependent flux function (Eq. <ref>). In particular, all the clouds corresponding to the same spaxel have the same intensity, being dependent only on the radial distance from the vertex of the cone. Spreading homogeneously 10^7 clouds in the binned model cube guarantees that at least few clouds populate each voxel, and thus that the weighting procedure described in the following Sect. <ref> succeeds. Since multiple clouds can fall in the same bin, as shown in Fig. <ref>, their contribution to the emission of the corresponding model spectral channel is assumed to be equal. Once the unweighted model is computed, each spaxel (spectral pixel) has an associated spectrum (see Eq. <ref>), which depends on the model geometry and the parameters of the flux and velocity functions we assumed. This model spectrum can be compared with the corresponding observed one in the same spaxel. From the model cube it is then possible to compute projected flux, velocity and velocity dispersion maps from the momenta of the cube. In Fig. <ref> we show two examples of unweighted models: a filled bi-conical outflow (top panels) and a hollow conical one (bottom panels). §.§ Weighted model We use the line flux F_ obs from each (x , y , v_z) voxel of the observed data cube, to determine the surface brightness of each model cloud. 
In particular, we assign a weight w = F_ obs/N_ mod to each cloud within the volume element specified by the (x , y , v_z) coordinates, where N_ mod is the total number of clouds ending up in that voxel. We want to clarify that, hereafter each cloud is assigned a weight which is independent of the distance from the vertex of the cone. Therefore, each cloud emission is not related to previous assumptions on the analytical surface brightness distribution (this distribution could be any function, see e.g. Eq. <ref>). As an example, all the clouds in the highlighted black voxel in Fig. <ref> have the same weight, since they belong to the same voxel and thus are weighted with the same flux. We can now compare the observed line profile in each spaxel with those obtained from a variety of weighted model cubes, each one computed using different combinations of geometrical and physical parameters. Assigning a weight to each cloud ensure that the model velocity profile will match the observed one by construction, provided that model clouds occupy the same velocity range spanned by the data. The latter condition will not be satisfied if the model geometry and velocity field is incorrect, which gives us the means to constrain both the cone geometry and the intrinsci velocity field at the same time. The procedure to infer the outflow parameters through a fitting procedure is described in the next section. §.§ Fit of model on observed data The fitting algorithm consists in a loop over the parameter space, until the best set of parameters is obtained, which is that one minimizing the difference between the observed and modelled emission in each voxel. First, we define the free parameters we want to explore for the MUSE example: intrinsic radial outflow velocity, outflow systemic velocity with respect to the galaxy systemic velocity (V⃗^ and v_sys in Eq. <ref>, respectively), and inclination with respect to the LOS. Then, following the procedure shown in Fig. <ref>, for each set of parameters we create a weighted model cube, whose emission is compared to the observed one over the velocity range defined by the 1% and 99% percentiles of the observed LOS velocity distribution, v_1 and v_99, respectively. Finally, we evaluate how well each model reproduces the observed spectrum by means of a customized goodness of fit estimator, defined as: κ = ∑_i,j(S_ Oj(v_i) - S_ Mj(v_i)/δ s_j)^2 Here, v_i represent the i-th spectral channel of the j-th spaxel. Therefore, S_ Oj(v_i) and S_ Mj(v_i) are the observed and model spectral flux in the i-th voxel, respectively, and δ s_j is the uncertainty on the flux, assumed to be constant in all spectral channels. We minimize κ to find the best set of model parameters. Although the κ estimator is defined as the standard χ^2 estimator, its definition relies on different assumptions. For this reason, we decided to refer it as κ instead of χ^2. A proper computation of the free parameters uncertainties and the development of a tailored statistic will be addressed in future work. Figure <ref> shows the comparison between the observed (black) and model (red) [NII]λ6584 emission line profile, extracted from a spaxel with a prominent blue-shifted wing in NGC 4945. Top panels show the unweighted model emission for the best fit model (left) and a model with wrong inclination (right). Bottom panels show the weighted version of top panels. 
The best model configuration on the bottom left is obtained with the parameters listed in Table <ref>, and perfectly reproduces the observed emission in each velocity channel. In the right panels, a different set of model parameters is not able to reproduce the observed blue wing, for either the weighted or unweighted model. This is due to the fact that, for this wrong set of parameters, no cloud contributes to observed LOS velocities smaller than -300 km s^-1. Indeed, for a model to be successful, it is crucial that in all spaxels there are model clouds populating all velocity channels where emission is observed. §.§ Solving the degeneracies The main problem to face when creating a 3D kinematical model is recognizing the degeneracies which affect the fit results. The main degeneracy affecting the fit is that very high outflow velocities, combined with a range of different inclinations towards the observer's LOS, always allow the model to reproduce the observed line profile in each spaxel, even with a wrong set of geometrical parameters, since all velocity channels of each spaxel will always be populated with model clouds. As an example of this, Fig. <ref> shows the moment maps of observed data (top panels), and two models with radial outflow velocity of 3000 km s^-1, which is above the observed maximum velocity of ∼ 900 km s^-1. The model in the middle panels is obtained with β = 50^∘, v_ sys = +1000 km s^-1 and outer semi-opening angle of θ_ out = 48^∘; the model in the bottom panels instead, with β = 120^∘, v_ sys = -500 km s^-1 and outer semi-opening angle of θ_ out = 38^∘. Figure <ref> shows that models with high radial outflow velocity are able to equally reproduce the observed emission and kinematics in each spaxel, even with a very different geometrical configuration. If the outflow cone aperture is wide enough to reproduce the observed cone aperture in the plane of the sky, the combination of high velocities and a wide range of inclination angles with respect to the plane of the sky can provide a wide range of velocities along the LOS, fitting any kind of observed data once the weighting scheme described above is applied. Since each cloud is assigned a weight, measured from the observed data and dependent on the (x, y, v_z) bin where the cloud falls, there are two configurations causing a cloud to be assigned a zero weight: either the cloud has a velocity v_z beyond the observed velocity range, or no emission is detected at the cloud's 3D position-velocity coordinates. Therefore, an outflow radial velocity which results in velocity percentiles well above the observed boundaries v_1 and v_99, as in the cases shown in Fig. <ref>, allows the model to reproduce the observed spectrum in each spaxel by assigning zero weight to clouds at the model edges, thus producing a flattened model on the plane of the sky. Figure <ref> schematically shows the effect on the model cloud distribution of adopting a radial outflow velocity much higher than the observed one. The solid and dotted black lines define the intrinsic outflow aperture and the LOS direction, respectively. The grey dotted lines mark the region containing clouds with projected velocity within the v_1, v_99 boundaries. The grey clouds, as opposed to the green ones, have projected velocities above the observed boundaries, thus the model assigns them a null weight. Top and bottom panels show the side and top view of the model, respectively.
The larger the outflow velocity, with respect to the maximum observed velocities, the more flattened on the plane of the sky is the distribution of model clouds with non-zero weight.To overcome this degeneracy issue we constrained the model parameters as follows. While following the loop over the free parameters (see scheme in Fig. <ref>), the algorithm discards all the combinations of model parameters which result in having percentile velocities v_1, v_99, derived from the integrated unweighted model spectrum, different from the observed ones. This is done by defining a parameter f and imposing for each model that |v_1| ( 1-f ) < |v^MOD_1| < |v_1| ( 1+f ) and |v_99| ( 1-f ) < |v^MOD_ 99| < |v_99| ( 1+f ). Here, f ∈( 0.2, 0.05 ) is a refinement parameter that progressively decreases until reaching f = 0.05. In this way, it is guaranteed that the difference between the observed and model percentile velocities differs of no more than 5 %. This constraint then requires a suitable combination of outflow velocity, cone aperture and inclination, to reproduce the observations, besides representing an optimal method to remove the degeneracies. § TEST ON SIMULATED DATA In this section, we present the results of the application of to simulated data cubes, in order to test model capabilities and limits. We generated four different simulated cubes with surface brightness distribution and kinematics similar to those observed in nearby and high redshift sources. The purpose is to simulate the observed spatial and spectral resolution of new generation IFU and test the degree of reliability of the model with increasing complexity. We created each simulated cube using a spatial and spectral sampling very similar to MUSE data cubes, that is 0.2 ” / pixel and 55 km s^-1, respectively.We tested different inclinations with respect to the LOS, inner and outer cone opening angles, velocity fields and PSFs, in order to simulate the outflow features observed with MUSE in our sample. Adopting different PSF sizes allows to simulate different instrument resolutions and source distances, in order to test the performance of for both low- and high-z sources. For the surface brightness emission we adopted a random distribution of 100 clumps, which is a sufficient number to produce moment maps similar to those of the MUSE sample. The clump is a 3D normal distribution of increased flux centered around a randomly selected (r, θ, ϕ) coordinate, between the minimum and maximum of the spherical coordinates interval chosen. The outflow is characterize by a constant surface brightness distribution, which is made irregular by the presence of the clumps. We want to stress that the number of simulated clumps is irrelevant for the result of the fit. For each simulated cube we inferred the best geometrical and kinematic parameters by running the fitting algorithm outlined in Sect. <ref>, progressively increasing the number of free parameters.The comparison between simulated and best fit moment maps for the first three models is shown in Fig. <ref>, the intrinsic and best fit parameters are reported in Table <ref>. Both the best fit moment maps (Fig. <ref>) and model parameters (Table <ref>) are in good agreement with the simulated data. The first moment (velocity) maps of each simulated cube clearly show that assuming a very simple velocity field and irregular flux emission, results in very complex and irregular kinematic features, even though the intrinsic velocity field is regular. 
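Before turning to the fitted parameters, the model-construction and fitting steps of the previous section (cloud generation, Euler rotation, LOS projection, voxel binning, flux weighting, and the κ estimator with the v_1/v_99 percentile filter) can be condensed into a short Python sketch. Everything below, including the function names, the numpy-only implementation, the choice of rotation axes and the parameter values, is an illustrative assumption and not the code actually used in this work.

import numpy as np

def project_clouds(n_clouds, theta_out, r_max, v0, alpha, beta, gamma, v_sys, seed=0):
    # clouds uniformly filling a cone of semi-aperture theta_out, rotated to the observer frame
    rng = np.random.default_rng(seed)
    theta = np.arccos(1.0 - rng.uniform(0.0, 1.0, n_clouds) * (1.0 - np.cos(theta_out)))
    phi = rng.uniform(0.0, 2.0 * np.pi, n_clouds)
    r = r_max * rng.uniform(0.0, 1.0, n_clouds) ** (1.0 / 3.0)     # uniform in volume
    u = np.vstack([np.sin(theta) * np.cos(phi),
                   np.sin(theta) * np.sin(phi),
                   np.cos(theta)])                                  # unit radial vectors
    P, V = r * u, v0 * u                                            # positions, radial velocities
    rot_z = lambda a: np.array([[np.cos(a), -np.sin(a), 0.0],
                                [np.sin(a),  np.cos(a), 0.0],
                                [0.0, 0.0, 1.0]])
    rot_x = lambda a: np.array([[1.0, 0.0, 0.0],
                                [0.0, np.cos(a), -np.sin(a)],
                                [0.0, np.sin(a),  np.cos(a)]])
    R = rot_z(gamma) @ rot_x(beta) @ rot_z(alpha)                   # one possible Euler convention
    x, y, _ = R @ P
    v_z = -(R @ V)[2] + v_sys                                       # LOS velocity, n = (0, 0, -1)
    return np.column_stack([x, y, v_z])

def kappa(params, data_cube, data_err, edges, v1_obs, v99_obs, f=0.05):
    # goodness of fit for one parameter set; returns inf if the percentile filter rejects it
    xyv = project_clouds(**params)
    v1_mod, v99_mod = np.percentile(xyv[:, 2], [1, 99])
    ok = (abs(v1_obs) * (1 - f) < abs(v1_mod) < abs(v1_obs) * (1 + f) and
          abs(v99_obs) * (1 - f) < abs(v99_mod) < abs(v99_obs) * (1 + f))
    if not ok:
        return np.inf
    counts, _ = np.histogramdd(xyv, bins=edges)
    # weighting: each cloud in a voxel gets w = F_obs / N_mod, so a populated voxel reproduces
    # the observed flux exactly, while empty voxels contribute zero model flux
    model_cube = np.where(counts > 0, data_cube, 0.0)
    return np.sum(((data_cube - model_cube) / data_err) ** 2)

A brute-force fit then simply evaluates kappa over the parameter grid and keeps the minimum; the convolution of the cloud positions with the PSF, applied in the real procedure, is omitted here for brevity.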
For the first three models the fitted parameters are listed in Table <ref>, for Model 4 instead, the parameters are reported in Table <ref>. The parameters labeled with an asterisk were kept fixed during the fit.In the right panel of Fig. <ref> we show the comparison between simulated and best fit integrated emission line profiles, plotted in dashed blue and solid red, respectively. The line profile residuals shown with the solid green line are to be mainly ascribed to the discrepancies in model inclination and inner/outer opening angle, which result in missing clouds at the correct 3D position, as explained in Sect. <ref>. Model 4 was created as more similar as possible to the configuration that we expect in our MAGNUM sample. In particular, we simulated a constant radial velocity outflow, characterised by clumpy emission, superimposed on a rotating gas disk extending above the Field of View (FOV). We tested the capabilities by fitting the total simulated cube emission with a simple conical model and adopting a constant radial velocity field, to test whether the model is able to derive the real kinematical and geometrical outflow parameters, despite the presence of an underlying disk. We simulated an uniform disk emission with an intensity ∼ 10^-3 times smaller than the emission of the outflow clumps, and adopted a constant rotating disk velocity of 150 km s^-1, consistent with the observed stellar and gas rotating velocities in the inner kpc of galaxy disks <cit.>. The disk axis is tilted by 5^∘ with respect to the outflow axis. For Model 4 the total emission was convolved with a spatial PSF of 0.6”.Figure <ref> shows the comparison between simulated and best fit model moment maps and emission line residuals, on left and right panel, respectively. The best fit model in Fig. <ref> is obtained by fitting the outflow inclination (β), radial velocity (v_r), inner and outer opening angle (θ_IN and θ_OUT) (see Table <ref>). From this test case it emerges that, in the case of combined emission of multiple kinematical components (disk and outflow), fitting only the dominant outflow component still provides accurate results. The line profile residuals in Fig. <ref> show that the best fit model is missing clouds centered at 0 km s^-1, which are due to underlying rotating disk which was not included in the model.We tested models with an extremely wide range of axis inclinations, inner and outer opening angles, random clumps distribution and velocity fields, in order to account for a variety of possible configurations. From all these tests it emerges that correctly derives the kinematic and geometric parameters we used to create the simulated data cubes, with the typical PSF in low- and high-z data. In particular our model correctly derives, with unprecedented accuracy, the outflow inclination with respect to the LOS, the 3D velocity field and the outer opening angle. These represents the fundamental parameters to be measured to achieve accurate estimates of the outflow energetics. As expected, increasing the number of free parameters of the fit causes an increase of the degeneracies, with a corresponding lower accuracy even with the method outlined in Sect. <ref>.Fitting all the outflow parameters listed in Table <ref> or <ref>, including those labeled with the asterisk, allows for many possible configurations and unreliable best fit parameters. 
For example, for Models 1 and 2, we observed that keeping all the parameters free has the main consequence of providing a hollow conical best fit model, even though the intrinsic inner opening angle is exactly zero. We observed that, for full conical simulated data, fixing the inner opening angle to zero results in an optimal estimate of the other parameters (see the parameters of Model 1). Letting the inner opening angle be a free parameter instead leads to a wrong hollow cone configuration (see Model 2). As shown for Model 3 instead, starting from hollow conical simulated data results in a correct estimate of the inner opening angle, without the need to fix any parameter. Model 4 represents the most realistic test, due to the co-presence of disk and outflow emission. For this case we decided to model only the outflow emission, exactly as we did for the MUSE sample (see Sect. <ref>). As shown by the best fit parameters (Table <ref>) and the moment maps comparison (Fig. <ref>), the model represents a reliable tool to derive the outflow properties with great accuracy. §.§ Model limits Based on the tests we ran on simulated data, it emerged that our method has limitations in the derivation of the kinematical and geometrical properties of complex simulated outflow data. In particular, based on the results obtained for Model 4 (see Fig. <ref> and Table <ref>), which was created to simulate a typical data cube from the MUSE sample, it emerged that degeneracies increase with an increasing number of free model parameters. For Model 4 we tested the accuracy of our method by progressively increasing the number of free parameters and computing, voxel by voxel, the difference between the best fit model and the simulated cube. We define the accuracy as: A = 1/N∑_iI_simulated, i/I_fit, i where I_simulated, i and I_fit, i are the intensity of the i-th voxel of the masked simulated cube and the best fit cube, respectively. N is the total number of unmasked voxels. By definition, with the accuracy A approaching unity, the corresponding model can be considered a good representation of the simulated data cube. Moreover, as a consequence of the weighting procedure (see Sect. <ref>), the intensity of any voxel of the weighted model cube will always be smaller than, or equal to, the corresponding intensity of the simulated/observed data cube (I_simulated, i≥ I_fit, i). We fitted the simulated cube with an increasing number of free parameters, starting with the radial outflow velocity (v_r), then adding the inclination with respect to the plane of the sky (β), inner opening angle (θ_ IN), outer opening angle (θ_ OUT), position angle (P.A.), rotation and expansion/contraction velocity (v_θ, v_ϕ). For the test case of Model 4, we observed that the accuracy level remains > 95 % when fitting up to the first four outflow parameters. Adding the fifth free parameter, the accuracy drops to ∼ 80 %. The worst scenario is obtained when fitting all seven parameters, which results in an accuracy level of ∼ 60%. The test case of Model 4 is well representative of what happens when fitting only the conical outflow model, despite the presence of an underlying rotating galaxy disk. In particular, fitting up to four outflow kinematical and geometrical parameters results in a best fit model cube which, in each voxel, is in very good agreement with the simulated cube. Moreover, we observed that the model correctly derives up to four outflow parameters, despite the presence of an underlying disk, when the observed outflow to disk flux ratio is ≥ 10^2.5.
Figure <ref> shows the comparison of the integrated [OIII]λ5007 emission extracted from a clump of the outflow and disk, in the Circinus galaxy. The disk emission is multiplied by 10^2.5. Therefore, adopting in our simulations an average flux ratio of 10^2.5-10^3 is in agreement with the value observed in our MUSE sample. We expect that in cases with lower values of outflow to disk flux ratio, would not be able to properly recover the intrinsic outflow properties. The combination of outflow and disk fitting will be addressed in future work. § APPLICATION TO MAGNUM GALAXIES In order to show the application of to real sources, we selected a sample of three nearby Seyfert-II galaxies, from the MAGNUM survey, namely NGC 4945, Circinus and NGC 7582. In this section, we briefly introduce the main properties and gas kinematical features of this sample. §.§ Preliminary spectroscopic analysis Datacubes were analysed by means of a set of custom python scripts to first subtract stellar continuum, and then fit multiple Gaussian components to the emission lines, thus finally obtaining an emission-line model cube for each emission line. For a more detailed description of the data reduction and the spectroscopic analysis we refer to <cit.>. §.§ Ionized emission For each galaxy we extracted a sub-cube centered on the nuclear region where the ionization cone is observed. In the case of Circinus and NGC 7582 we used the [OIII]λ5007 emission line to map outflow properties; for NGC 4945 instead, we used [NII]λ6584, since [OIII] emission is highly obscured by dust in this edge-on galaxy (see e.g. Fig. A1 in <cit.>). We computed the S/N of the maps, by considering the ratio between the peak of the fitted emission line and the rms of the fit residuals, then we applied a S/N threshold of 3. The sub-cube was extracted from the emission line model cube corresponding to the sum of all Gaussian components used to fit the line, after deconvolving the [OIII] line profile by the MUSE instrumental broadening[This is necessary since our purpose is to untangle the model results from the instrument broadening effects.]. Once we limited the area we are interested in, we created moment maps of the ionized emission from the cube fit: the integrated line flux (moment of order 0), the flux-weighted LOS velocity (moment of order 1) and the velocity dispersion maps (moment of order 2). These maps will be used later to provide a first comparison with the final model results. In this way, we want to demonstrate that correctly reproducing the three moment maps is a necessary but not sufficient condition to assume a model as faithful representation of the outflow features and properties; a more detailed discussion about degeneracies is carried out in Sect. <ref>. §.§ Moment maps In Fig. <ref>, we show the moment maps for each source: starting from the left, integrated emission line, LOS velocity and velocity dispersion maps; from top to bottom: NGC 4945, Circinus and NGC 7582. The selected galaxies were chosen to show extremely clumpy, spatially resolved ionized outflows, as highlighted by integrated emission maps in Fig. <ref>. The projected distance covered by the outflow ranges from ∼ 1 kpc in Circinus, to ∼ 3 kpc in NGC 7582 <cit.>. The intensity maps show a bi-conical axis-symmetric geometry in all cases except for Circinus, where dust and gas in the galaxy disk are probably obscuring the counter cone. 
The receding cone in NGC 7582 and NGC 4945 is more dust obscured than the approaching one, as usually observed in Seyfert galaxies and AGN in general. A common feature that we detected in all sources is a clear and steep velocity gradient across the cone, with average flux-weighted velocities from v < -200 km s^-1 at the edges, to v > 100 km s^-1 at the center, along the outflow axis. Such a steep velocity gradient is commonly associated with a hollow conical geometry <cit.>. The velocity dispersion of NGC 7582 and Circinus has deep minima along the galaxy plane, while is larger along the outflow extension, with sporadic peaks of ∼ 350 km s^-1. NGC 4945 instead is characterised by a smoother and regular pattern, with the line-width increasing up to ∼ 400 km s^-1 along the outflow axis and then decreasing towards the edges down to ∼ 250 km s^-1, suggesting a more compact outflow geometry. We defined the systemic gas velocity in each source as the fitted stellar velocity, assuming co-rotation of gas and stars in the galaxy disk. §.§ Outflow models In this section, we present the results of our modelling applied to the three selected galaxies. Given the best set of parameters, we first created the corresponding model and provided spatially resolved estimates of the outflow energetics, and finally compared it with the volume-averaged results obtained with the standard literature method, described later in Sect. <ref> <cit.>. From the data we fixed both the position angle (γ, measured counter-clockwise from the north direction), the outflow conical aperture and the centre of the outflow model. γ coincides with the projected approaching outflow axis, the outflow aperture and centre instead, are identified by the ionized gas velocity and velocity dispersion maps. The outflowing cone aperture and centre are defined by the mentioned moment maps since they provide a better constraint to the projected shape and outflow starting point, rather than the clumpy integrated emission. Then, we run the fit of the [OIII] observed emission line, as described in Sect. <ref>, accounting for degeneracies and minimizing the differences between modelled and observed spectra. To estimate the outflow velocity uncertainty, we fixed all the parameters except the outflow velocity, which was varied until measuring a variation from the minimum of the κ estimator of 10% (Eq. <ref>). Figure <ref> shows a comparison between the observed moment maps and those obtained with our best-fit models for NGC 4945 and NGC 7582; the observed data are masked to facilitate a comparison with the modelled maps. The same maps for Circinus are reported in first and third panels from top in Fig. <ref>. The grainy emission and complex observed velocity fields of each outflow are extremely well reproduced by the model, as a consequence of weighting clouds by the observed flux. Velocity dispersion maps highlight some discrepancies, for example in Circinus where the modeled velocity dispersion does not properly reproduce the observed clumpy structure. There might be two possible explanations for this. First, the disk contribution may not be negligible, therefore we should improve the model by considering the emission from background gas in the galaxy disk. Second, there could be intrinsic velocity dispersion in the outflow, which we have not considered. As reported in Table <ref>, we found outflows with axes close to the plane of the sky (i.e. β∼ 90^∘), in all sources. 
This is an obvious consequence of the fact that we selected Seyfert-II galaxies with clear evidence for an extended ionization cone, which implies that the LOS is close to perpendicular to the cone axis. As shown by the best fit models in Fig. <ref> for NGC 4945 and NGC 7582 and in Fig. <ref> for Circinus, outflow features are perfectly reproduced by a full conical geometry. However, since in many studies the best result is provided by a hollow conical geometry, we tested our model with hollow conical geometries <cit.>. Therefore, we ran a fitting procedure considering different inner opening angles to reproduce the data. As shown in Fig. <ref> for Circinus, this configuration is not suitable to reproduce the data; the same conclusions apply to each galaxy in our sample. We show different model configurations, compared to the observed moment maps for Circinus (top panel). From top to bottom: the unweighted model, the best fit weighted model, an example of a model with constant radial outflow velocity of 250 km s^-1, a hollow cone model with inner opening angle of 27.5^∘. For each model, the residual moment maps obtained subtracting spaxel by spaxel the observed and model emission are also shown. Both the hollow cone and the low outflow velocity configurations are clearly unable to reproduce the observed emission. Indeed, hollow cones miss clouds at small projected velocities, while the low outflow velocity model is unable to reproduce the observed line wings (see e.g. Fig. <ref>). As the cone fills up, the residuals improve, reaching a minimum when the cone is completely filled. We conclude that a full conical geometry, with slow changes in radial velocity around a mean constant value, is well-suited to reproduce the observed outflow features in our sample. §.§ Outflow energetics In this section we investigate the ionized mass and the energetics of the outflow using two different approaches: the first approach (hereafter, referred to as "standard method") follows the prescription of <cit.> <cit.>) and relies on several assumptions, since outflow physical properties are usually unknown. The second (hereafter ' based method') is based on our novel 3D geometry and kinematic modelling. §.§.§ Standard method We calculate the mass outflow rate through the surface of a spherical sector with radius r defined by the cone aperture, using the outflow velocity inferred from the [OIII] emission line. If the outflow geometry and inclination with respect to the LOS are unknown, as done for example in <cit.> and <cit.>, we can define the outflow velocity as: v_out = max ( |v^max_10 - v_sys|, |v^max_90-v_sys| ) , where v^max_ 10 and v^max_ 90 are the maximum percentiles velocities corresponding to the 10% and 90% of the flux of the outflow component profile, respectively, and v_sys is the systemic velocity of the galaxy, set to 0 km s^-1. Equation <ref> assumes that the intrinsic outflow velocity can be determined by the wing of the line profile, which provides the maximum observed velocity, possibly correcting for projection effects and dust absorption <cit.>. 
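As a concrete illustration of the relation above, v_out can be computed directly from a continuum-subtracted line profile. The short Python sketch below assumes one common reading of v^max_10 and v^max_90, namely the velocities at which 10% and 90% of the cumulative line flux are reached; the toy spectrum and its sampling are arbitrary choices for the example, not values from our data.

import numpy as np

def v_out_standard(velocity, flux, v_sys=0.0):
    # velocities at 10% and 90% of the cumulative (continuum-subtracted) line flux
    cdf = np.cumsum(flux) / np.sum(flux)
    v10 = np.interp(0.10, cdf, velocity)
    v90 = np.interp(0.90, cdf, velocity)
    return max(abs(v10 - v_sys), abs(v90 - v_sys))

# toy profile: a narrow core plus a broad blue-shifted wing (arbitrary numbers)
v = np.linspace(-1500.0, 1500.0, 601)
line = np.exp(-0.5 * (v / 120.0) ** 2) + 0.3 * np.exp(-0.5 * ((v + 500.0) / 250.0) ** 2)
print(v_out_standard(v, line))   # a few hundred km/s, set by the blue wing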
The [OIII] line luminosity (L_ [OIII], which we measure in each spaxel from the observations) can be written from the theoretical point of view as: L_[OIII] = ∫_V n_e n( O^2+)ϵ_[OIII] f dV, where V is the volume occupied by the outflowing ionized gas, f the filling factor of the [OIII] emitting clouds in the outflow, n(O^2+) and n_e are the volume densities of the O^2+ ions and of electrons, respectively, and ϵ_ [OIII] the [OIII]λ5007 emissivity, which has a weak dependence on the temperature (∝ T^ 0.1) at the typical temperature of the NLR (∼ 10^4 K). As done in <cit.>, here we assume that most of the oxygen within the outflowing ionized gas is in the O^2+ form, and neglect the contribution to the mass from species heavier than helium. Finally, the outflow mass can be computed as: M_out = 5.33 × 10^7 C L_44([OIII]) / (<n_ e, 3> 10^[O/H]) M_⊙ where L_ 44([OIII]) is the luminosity of the total [OIII]λ5007 emission line profile in units of 10^44 erg s^-1, <n_ e, 3> is the average electron density in the ionized gas clouds in units of 10^3 cm^-3 (i.e. <n_ e, 3> = ∫_V n_e f dV / ∫_V f dV) and [O/H] represents the oxygen abundance in solar units. C=<n_ e, 3>^2/<n_ e, 3^2> is a “condensation factor” (for more details on the derivation of the warm gas mass traced by the [OIII]λ5007 line in every pixel, see <cit.>). We can assume C = 1 under the simplifying hypothesis that all ionized gas clouds in each resolution element (a MUSE spaxel in our case) have the same density. Also, under these assumptions, the mass of the outflowing ionized gas is independent of the filling factor of the emitting clouds. Finally, we can infer the average mass outflow rate in the volume as follows: Ṁ_ out = 164 L_ 44([OIII]) v_3 / (<n_ e, 3> 10^[O/H] R_ kpc) M_⊙ yr^-1 where v_3 is the outflow velocity in units of 1000 km s^-1, and R_ kpc is the conical outflow radius, in units of kpc. The outflow rate is independent of both the opening angle Ω of the outflow and the filling factor f of the emitting clouds (under the assumption of clouds with the same density). The kinetic energy (E_ kin), kinetic luminosity (L_ kin) and momentum rate (ṗ_ out) of the ionised outflow are then given by: E_ kin = 9.94 × 10^42 ( M_ out/M_⊙) (v_ out/km s^-1)^2 erg, L_ kin = 3.16 × 10^35 (Ṁ_ out/M_⊙ yr^-1) (v_ out/km s^-1)^2 erg s^-1, ṗ_ out = 6.32 × 10^30 (Ṁ_ out/M_⊙ yr^-1) (v_ out/km s^-1) dyne. Eqs. <ref> - <ref> require the knowledge of different physical properties of the outflow, but only a few of those are usually measured, while others have to be assumed. The quantities usually assumed are the oxygen abundance, which is usually fixed to the solar abundance, and the electron density, which can be estimated from the [SII]λλ6717,6731 doublet or needs to be fixed to typical values of AGN at a similar redshift to the sample. If the S/N for the [SII] doublet is high enough and the two lines are spectrally resolved, n_e can be directly estimated from the flux ratio of the lines in the doublet <cit.>, as follows: n_e = (627.1 R - 909.2)/(0.4315 - R) where R is the flux ratio of the total emission line profiles of the sulfur doublet, f([SII]λ6717)/f([SII]λ6731). Assuming a constant electron density can have a huge impact on the outflow energetics <cit.>; nevertheless, this is necessary in those cases where an estimate from the data is not possible. The outflow energetic properties, calculated using Eqs. <ref>-<ref>, are reported in Table <ref>, except for the ionized mass outflow rate, which will be discussed later.
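For reference, Eqs. <ref>-<ref> translate into a few lines of Python; the example call uses arbitrary placeholder numbers, not the measured luminosities, velocities or densities of our sample.

import numpy as np

def ne_from_sii(ratio):
    # electron density (cm^-3) from the [SII]6717/6731 flux ratio, as in the relation above
    return (627.1 * ratio - 909.2) / (0.4315 - ratio)

def standard_energetics(L_oiii, v_out, R_kpc, n_e, O_H=0.0, C=1.0):
    # L_oiii in erg/s, v_out in km/s, R_kpc in kpc, n_e in cm^-3, O_H = [O/H] in solar (log) units
    L44, ne3, v3 = L_oiii / 1e44, n_e / 1e3, v_out / 1e3
    M_out = 5.33e7 * C * L44 / (ne3 * 10.0**O_H)             # ionized outflow mass, M_sun
    Mdot = 164.0 * L44 * v3 / (ne3 * 10.0**O_H * R_kpc)      # mass outflow rate, M_sun/yr
    E_kin = 9.94e42 * M_out * v_out**2                        # erg
    L_kin = 3.16e35 * Mdot * v_out**2                         # erg/s
    p_dot = 6.32e30 * Mdot * v_out                            # dyne
    return M_out, Mdot, E_kin, L_kin, p_dot

print(standard_energetics(L_oiii=1e40, v_out=600.0, R_kpc=1.0, n_e=ne_from_sii(1.10)))

The actual energetic properties of the three galaxies are, of course, those computed from the measured quantities and reported in Table <ref>.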
Their uncertainties were obtained with error propagation, for the electron density we assumed a systematic uncertainty of 50% <cit.>. For NGC 7582 and NGC 4945 we determined the electron density from [SII] doublet with Eq. <ref>. For Circinus, instead we averaged the value of the spatially resolved study by <cit.>, obtained from the high velocity parts of the [SII] doublet lines. We found mass outflow rates of 1.9 × 10^-3 M_⊙yr^-1, 0.2 × 10^-2 M_⊙yr^-1 and 1.4 × 10^-2 M_⊙yr^-1 for NGC 4945, Circinus and NGC 7582, respectively. NGC 4945 is also characterized by a molecular outflow traced by ALMA CO J = 3-2 emission, co-spatial to the ionized one, with an estimated mass outflow rate of ∼ 20 M_⊙yr^-1, as reported by <cit.> (Carniani et al. in prep.). The inferred Ṁ_ out for Circinus, computed over a distance of ∼ 1 kpc from the AGN, is smaller by a factor of ∼ 100 compared to the estimate of <cit.>. They found an outflow rate for the blue and red components of 0.12 M_⊙yr^-1 and 0.1 M_⊙yr^-1, respectively, by selecting the spaxels with detected high-ionisation outflow traced by [Fe vii]λ6089, and considering a maximum distance from the AGN of 700 pc. The Ṁ_ out obtained for NGC 7582 is consistent with the high-ionization counter-part of 0.7 × 10^-2 M_⊙yr^-1, inferred by <cit.> using VLT/Xshooter spectra and covering the inner 300 pc. Both the outflow extension and maximum velocity of 364 km s^-1 are much lower compared to our work, in which we estimated the outflow rate considering an outflow extension of ∼ 3.2 kpc and a radial velocity of ∼ 630 km s^-1. §.§.§ Wind energetic uncertainties Literature estimates of the wind parameters rely on different assumptions for each gas phase. The electron density in the outflowing gas, the temperature and geometry are among the parameters that most affect the uncertainties of the outflow energetics. As explained in Sect. <ref>, when it comes to estimate the ionized wind mass and energetics, many basic assumptions can affect the final parameters, leading to systematic uncertainties of a factor of ∼ 10. Therefore, even adopting the standard recipe outlined in Sect. <ref>, it is not unexpected to observe up to three orders of magnitude of scatter for outflow energetics in different studies<cit.>. In this context, our method represents an innovative approach to determine the outflow physical properties with great accuracy, since it is not based on strong observational or physical assumptions. §.§.§ based method Employing the tomographic outflow reconstruction, we have a 3D distribution of ionized gas clouds within the outflow. This allowed us to provide a spatially resolved estimate of the outflow energetics. To compute the amount of ionized mass in each voxel we used Eq. <ref>, converting the flux density emitted by each cloud in the same bin to ionized mass in solar masses. Assuming the flux density within each spaxel to be constant over time and the outflow to subtend a solid angle dΩ, we use the continuity equation in spherical coordinates and express the ionized mass outflow rate via the following equation: Ṁ_ out = dΩ r^2ρ v where ρ is the mass density in each 3D volume element and v the outflow velocity, assumed to be constant within each spatial element. The mass density in each voxel is ρ = dM_ ion / dΩ r^2 dr, with dr = 0.2” the MUSE spatial resolution and dM_ ion the ionized mass in the same spaxel. Therefore Eq. 
<ref> becomes: Ṁ_out = dM_ ion v / dr, where Ṁ_ out is the average mass outflow rate within each spatial element of width dr, at a certain distance from the AGN. To estimate the mass outflow rate profile, we computed the amount of ionized gas (dM_ ion) in a shell of fixed width (dr) and assumed the radial velocity to be constant within the shells. 2D maps of the mass outflow rate, and their respective radial profiles, for NGC 4945, Circinus and NGC 7582 are shown in Fig. <ref> in the left and right panels, respectively. To calculate the uncertainties on Ṁ_ out in each shell, we propagated the errors, considering the inferred error on v from our modelling (see Sect. <ref>) and assuming an uncertainty of 0.6” on dr. We found three different radial profiles which can be linked to the past AGN accretion histories and variation of AGN luminosity, possibly indicating large time variations in ionizing luminosity that result in the peaks of the outflow rate <cit.>. However, this could also be an ionization effect which is not taken into account when assuming a constant luminosity-mass conversion factor, as we have done. A proper determination of the ionized gas mass from the emission line luminosity requires the use of photo-ionization models, which we will address in future work. This can also be observed by inspecting the outflow clumpiness distribution. In NGC 4945 and Circinus we observed two shells with increased flux density. In NGC 7582, instead, we observed one peak of emission slowly decreasing with the distance from the source, up to the maximum model extension. From the mass outflow rate profile, we can estimate the dynamical timescale of the ionized outflow t_ dyn = d_e / v_e, where d_e and v_e are the maximum outflow extension and intrinsic velocity inferred with our model, respectively. We obtained 1.4 × 10^6 yr, 1.8 × 10^6 yr and 4.9 × 10^6 yr for NGC 4945, Circinus and NGC 7582, respectively, consistent with theoretical predictions and simulations <cit.>. §.§.§ Methods comparison In addition to the spatially resolved mass outflow rate estimate, our model can also provide a volume-averaged estimate of Ṁ_out. This is done using Eq. <ref> by considering the total amount of ionized gas mass, inferred from the emission of each model cloud, the maximum model distance from the AGN and the intrinsic outflow velocity. As explained in Sect. <ref>, each cloud has an assigned weight which is derived from the observed emission in the corresponding voxel. Therefore, we convert each cloud weight to [OIII] luminosity and then derive the ionized gas mass. In Table <ref> we show the parameters and the volume-averaged mass outflow rate, estimated via the standard method (<ref>) and our model (<ref>). The estimates provided by the two methods are compatible within the errors reported in Table <ref>. Moreover, the volume-averaged mass outflow rates have the same order of magnitude as the radial profiles, as shown in the right panels of Fig. <ref>. The standard method underestimates the average Ṁ_out by a factor of ∼2 compared to our model-based method. This is because both the velocity (see Eq. <ref>) and the projected maximum radial extension adopted by the standard method are smaller than the values provided by our model. Both methods are affected by systematic uncertainties of the order of 50 %, as a result of the assumed uncertainties on the electron density.
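For completeness, the shell-based profile of Eq. <ref> amounts to a weighted histogram of the model clouds in radius; the sketch below assumes that each cloud has already been assigned an ionized gas mass from its [OIII] weight, and the unit handling is an illustrative choice.

import numpy as np

def mdot_profile(r_clouds, m_ion_clouds, v_out, dr):
    # r_clouds: 3D distance of each cloud from the AGN (kpc); m_ion_clouds: ionized mass per cloud (M_sun)
    # v_out: constant radial velocity (km/s); dr: shell width (kpc)
    KM_PER_KPC = 3.086e16
    SEC_PER_YR = 3.156e7
    edges = np.arange(0.0, r_clouds.max() + dr, dr)
    dM, _ = np.histogram(r_clouds, bins=edges, weights=m_ion_clouds)     # M_sun per shell
    mdot = dM * v_out * SEC_PER_YR / (dr * KM_PER_KPC)                   # M_sun/yr per shell
    r_mid = 0.5 * (edges[:-1] + edges[1:])
    return r_mid, mdot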
As discussed above, to refine the energetics estimate we plan to combine our model with a photo-ionization model, to properly constrain the outflowing gas mass and further reduce the uncertainties. § CONCLUSIONS We have developed a new 3D kinematical model which allows us to constrain with unprecedented accuracy the kinematics and geometry of clumpy AGN galactic winds, reproducing the emission line features observed in all spaxels. We applied the model to three nearby Seyfert-II galaxies selected from the MAGNUM survey <cit.>, featuring a clear (bi)conical ionized outflow extending over kpc scales, and showed that the observed complexity in the kinematical maps is well reproduced by a simple radial outflow model with constant velocity and a clumpy ionized gas clouds distribution. The main features and results of our model are summarized below: * In Sect. <ref>, we build a multi-cloud model to take into account the intrinsic clumpy nature of ionised outflows. We weight each of our model clouds based on the observed line emission and velocity, and constrain the input parameters through a fit by comparing the model and observed line profile spaxel by spaxel. * The model takes into account the spatial and spectral resolution of the data, by creating a model cube with identical 3D extensions, spatial and spectral resolutions of observed data cubes. also accounts for the convolution of model clouds surface brightness with the PSF measured from observed data. * One of the main achievements of this modelling technique is to disentangle the observed features from projection effects, obtaining the real 3D distribution of the gas clouds in a tomographic way. * In this work, the free outflow kinematical and geometrical parameters that the modelling retrieves are the outflow inclination with respect to the plane of the sky, the gas cloud radial velocity and the systemic velocity with respect to the host galaxy (see Sect. <ref>). * In Sect. <ref>, we tested the capabilities of our new method by applying it to four simulated cubes with moment maps similar to the ones observed in the MAGNUM sample. The model manages to reproduce the observed features and provide the correct outflow parameters, with an accuracy > 95 %, fitting up to four parameters. * In Sect. <ref>, we managed to measure with great accuracy and without strong assumptions the outflow physical properties (i.e. mass outflow rate, kinetic luminosity, kinetic energy and momentum rate). With our model we provide a new reliable method to constrain the wind energetics, the powering mechanism in AGN and a different approach to infer the impact of outflows on the ISM and galaxy evolution, expanding the knowledge on AGN feedback mechanism. In <cit.> we present the first application of to a high-z QSO, observed with JWST/NIRSpec and characterized by a collimated high velocity outflow piercing an expanding bubble of ionized gas.[For the outflow 3D reconstruction see our YouTube channel: <https://www.youtube.com/channel/UCQ12ob3CuraQqedCNCGHUsA>] In future work we will apply our model to the other galaxies in the MAGNUM survey, as well as to other high-redshift quasars <cit.>. We will also consider more complex models where a disk or other kinematical components contribute to the observed data cubes. GC, AM, GT, FM, FB and GV acknowledge the support of the INAF Large Grant 2022 "The metal circle: a new sharp view of the baryon cycle up to Cosmic Dawn with the latest generation IFU facilities". 
GC, AM acknowledge support from PRIN MIUR project “Black Hole winds and the Baryon Life Cycle of Galaxies: the stone-guest at the galaxy evolution supper”, contract # 2017PH3WAT. EDT was supported by the European Research Council (ERC) under grant agreement no. 101040751. S.C acknowledges funding from the European Union (ERC, WINGS, 101040227). G.V. acknowledges support from ANID program FONDECYT Postdoctorado 3200802.
http://arxiv.org/abs/2307.00386v1
20230701170207
Hybrid Stars Built with Density Dependent Models
[ "A. Issifu", "F. M. da Silva", "D. P. Menezes" ]
nucl-th
[ "nucl-th" ]
Using a density dependent quark model and a relativistic model within the mean-field approximation for hadrons with density dependent meson-baryon couplings, we construct an equation of state to describe a hybrid neutron star consisting of nucleons and exotic baryons (hyperons and Δ-resonances). We do the study using a Maxwell construction. The quark-hadron phase transition in the stellar matter is determined through the structure, composition, and properties of the hybrid neutron star matter. The macroscopic properties of the star are determined, and the results are compared with observational astrophysical data. Stars: Neutron § INTRODUCTION Recent progress made in nuclear astrophysics due to the detection of gravitational waves from the merging of two neutron stars (NSs) in the event GW170817 <cit.>, followed by the kilonova event observation in several wavelength bands of the electromagnetic spectrum <cit.>, has given rise to the era of multi-messenger astronomy. These observations gave significant insight into the tidal deformability of merging NSs and provided new constraints on the equations of state (EoS) of these objects <cit.>. Also, recent data from NICER <cit.> gives a clear window for NS mass and radius. Besides, analysis of the GW170817 merger signals also led to constraints on the radius of the NS <cit.> involved, giving hints about a possible phase transition in the core of the NS <cit.>. On the quantum chromodynamics (QCD) phase diagram, there is a line that separates hadronic matter from the quark-gluon plasma phase. A smooth crossover, confirmed by lattice QCD calculations, takes place at high temperatures and low chemical potentials, giving rise to deconfinement. As the temperature decreases and the chemical potential increases, another phase transition, probably of the first order, appears from the hadronic to the quark phase. This is the likely scenario expected to occur in the inner core of the NSs <cit.>. The core of NSs is composed largely of strongly interacting protons and neutrons at low temperatures and high baryon density. However, moving deeper towards the inner core, heavy baryons such as the hyperons and the Δ-resonances are expected to appear. Consequently, the baryons become tightly packed, such that they may dissociate into a “soup” of deconfined quarks. Hence, there is a possibility that there exist different phases of matter in the NS core: a hadronic phase at lower densities, a quark phase at higher densities, and perhaps even a mixture of hadrons and quarks <cit.>. The EoS for the NS matter in β-equilibrium is known to show different characteristics at two extreme limits. For densities ∼ 1.1 n_0, where the stellar matter exists with hadronic degrees of freedom, a chiral effective field theory can be used to calculate the EoS with good precision <cit.>. On the other hand, at higher densities, perturbative QCD techniques and high energy particle phenomenology, developed with quark-gluon degrees of freedom <cit.>, give better results for the quark-matter EoS to an accuracy of about n ≳ 40n_0 ≡ n_pQCD. Observations give indirect information about the matter content in the core of the NSs. Visualizing the matter content in NSs requires modeling strongly interacting matter from the crust to the highest density expected inside the star.
There have not been an accurate prediction of matter phases in the core of the NS yet, due to the lack of first-principle predictions beyond the nuclear saturation density n_0 ≈ 0.152 fm^-3. However, gravitational wave data is unlikely to bring closure to this question shortly <cit.>. That notwithstanding, observational data has led to several strong constraints making model-independent approaches feasible. In this paper, the hadron matter is assumed to be composed of nucleons, hyperons, and delta isobars. The field equations for the hadrons are solved by adjusting them with the enhanced parameterization to the relativistic mean field (RMF) approximation method, known in the literature as DDME2 <cit.>, and density dependent coupling constants determined from SU(3) and SU(6) symmetry arguments <cit.>. We also assume that the density at which quark deconfinement may take place is several times larger than the saturation baryon density of nuclear matter. We use the density dependent quark model (DDQM) to determine the EoS of the quark matter at the higher baryon density region where particles are expected to be in a deconfined state <cit.>. For the first time, two models with similar characteristics are used to describe the phase transition, namely, the DDQM together with DDME2. Thus, we can investigate the presence of a deconfined quark core in NS matter composed of an admixture of nucleons, hyperons, and Δ-isobars. We ensure that the hybrid EoS developed is within the 2 M_⊙ mass constraint <cit.>. The paper is organized such that in Sec. <ref> we discuss the formalism of the EoSs used to construct the two-phase hybrid star EoS. The section is divided into two subsections; in Sec. <ref> we discuss the hadronic EoS formulated from the RMF approach with density dependent baryon-meson couplings and in Sec. <ref> we discuss the DDQM used to calculate the quark matter EoS. In Sec. <ref> we discuss the deconfinement phase transition in a dense NS matter and how to visualize it from the EoSs of the hadronic and quark matter. We present a detailed analysis of the outcome of the investigation in Sec. <ref> and the final findings in Sec. <ref>. § EQUATION OF STATE In this section, we discuss separately the hadronic and quark matter models intended for use in constructing the two-phase model hybrid stars. We will elaborate on the RMF approximation with density dependent baryon-meson couplings and the DDQM in the following subsections. §.§ RMF Approximation and Density Dependent Coupling We study the hadronic region using quantum hydrodynamics (QHD-1) <cit.> (for review, see — <cit.>), which describes particle interactions inside the nucleus in two forms: Long-distance attractive and short-distance repulsive interactions, used to describe confined and deconfined matter phases, respectively. The model is relativistic and commonly referred to as the RMF theory. It describes particle interactions as being mediated by mesons, as we will see shortly. The Lagrangian density of the model is ℒ_ RMF= ℒ_H+ ℒ_Δ+ ℒ_ mesons+ ℒ_ leptons, where ℒ_H, ℒ_Δ, ℒ_ mesons and ℒ_ leptons are the Lagrangian densities for the baryon octet, baryon decuplet, mesons, and free leptons. The hadronic part is divided into the baryon octet and the decuplet. The Lagrangian of the baryon octet is of Dirac-type and takes the form ℒ_H= ∑_b∈ Hψ̅_b [ i γ^μ∂_μ - γ^0 (g_ω bω_0 + g_ϕ bϕ_0+ g_ρ b I_3bρ_03) - ( m_b- g_σ bσ_0 ) ] ψ_b, were σ is a scalar meson, ω and ϕ are vector-isoscalar mesons and ρ_03 is an isovector-vector meson. 
The Δ-isobars are represented by the Rarita-Schwinger-type Lagrangian ℒ_Δ= ∑_d∈Δψ_dν[γ^μ i∂_μ- γ^0(g_ω dω_0 + g_ρ d I_3dρ_03) -(m_d-g_σ dσ_0 )]ψ_dν, due to their additional vector-valued spinor component. Even though it has been shown that the spin-3/2 and spin-1/2 models have the same equations of motion in the RMF approximation <cit.>, it is still important to point out their distinctions. The mesonic part is represented by ℒ_ mesons = 1/2(∂_μσ∂^μσ- m_σ^2 σ^2) +1/2(∂_μω∂^μω- m_ω^2 ω^2 ) +1/2(∂_μϕ∂^μϕ - m_ϕ^2ϕ^2) +1/2(∂_μρ⃗∂^μρ⃗-m_ρ^2 ρ⃗^2) . Hereafter, we will consider only the mean field form for the analysis, where σ→⟨σ⟩≡σ_0, ω→⟨ω⟩≡ω_0 and ρ→⟨ρ⟩≡ρ_03. Finally, the leptons are represented by the free Dirac Lagrangian ℒ_ leptons = ∑_Lψ_L(iγ^μ∂_μ-m_L)ψ_L, where the summation runs over electrons (e) and muons (μ) in the system, L∈(e, μ), and their antiparticles. The degeneracies of the leptons and the baryon octet are 2, while the degeneracy of the Δ-isobars is 4. We adjust the model with the enhanced density dependent parameterization, DDME2 <cit.>, expressed as g_i b (n_B) = g_ib (n_0) a_i [1+b_i (η + d_i)^2]/[1 +c_i (η + d_i)^2], for i=σ, ω, ϕ and g_ρ b (n_B) = g_ρ b (n_0) exp[ - a_ρ( η -1 ) ], for i=ρ, with η =n_B/n_0. The baryon-meson couplings adopted for this study were determined by <cit.> through SU(3) and SU(6) symmetry arguments, where the baryon-meson couplings for the Δ-isobars were determined in a model-independent manner for the first time. The model parameters of DDME2 were determined by fitting to experimental bulk nuclear matter data around the saturation density n_0 = 0.152 fm^-3 and other properties such as the binding energy, compressibility modulus, symmetry energy, and slope <cit.>. The values of the fit parameters are presented in Tab. <ref>. The ratios of the baryon to nucleon couplings, χ_ib=g_ib/g_iN, with extension to the Δ-isobars, are shown in Tab. <ref>. The equations of motion of the meson fields are calculated using the Euler-Lagrange equation and solved together with the thermodynamic quantities of the baryons and the free non-interacting leptons, imposing β-equilibrium, charge neutrality, and baryon number conservation conditions. Under this description, the effective masses and the effective chemical potentials of the particles are also density dependent. Detailed derivations of these quantities are contained in <cit.> for density dependent couplings. The total energy density, ε_B, and pressure, P_B, are <cit.>: ε_B = ε_b + ε_m + ε_d +ε_L , P_B = P_b + P_m +P_d + P_L + P_r, where the subscripts b, d, L and m represent the baryon octet, Δ-isobars, leptons and mesons, respectively. The pressure receives a correction, P_r, due to thermodynamic consistency and energy-momentum conservation, in the form P_r = n_BΣ^r, where Σ^r is the rearrangement term <cit.>. The effective masses m^*_b,d and the effective chemical potentials μ^* of the baryons are m_b,d^∗ = m_b,d - g_σb,dσ_0, and μ_b,d^∗ = μ_b,d- g_ωb,dω_0 - g_ρb,d I_3b,dρ_03 - g_ϕb,dϕ_0 - Σ^r, respectively, with I_3b,d the isospin projection. The baryon density is given by n_b,d = γ_b,d∫^k_F_b,dd^3 k/(2π)^3, and the scalar density is n^s_b,d =γ_b,d∫^k_F_b,dd^3 k/(2π)^3m^∗_b,d/E_b,d, where k_F_b,d is the Fermi momentum, γ_b,d is the particle degeneracy (2 for spin 1/2 particles and 4 for spin 3/2 particles) and E_b,d= √(k^2 + m_b,d^*2) is the single particle energy.
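As an illustration of the coupling parameterization above, the density dependence reduces to a one-line function; the fit parameters a_i, b_i, c_i, d_i are those listed in Tab. <ref> and are not reproduced here, so the signature below is only a sketch.

import numpy as np

def ddme2_coupling(n_B, g_sat, a, b=0.0, c=0.0, d=0.0, n0=0.152, meson="sigma"):
    # g_sat: coupling at saturation; a, b, c, d: DDME2 fit parameters (Tab. above); n_B, n0 in fm^-3
    eta = np.asarray(n_B) / n0
    if meson == "rho":
        # only the parameter a is used for the rho meson
        return g_sat * np.exp(-a * (eta - 1.0))
    return g_sat * a * (1.0 + b * (eta + d) ** 2) / (1.0 + c * (eta + d) ** 2)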
§.§ Density Dependent Quark Model In this work, we assume that the deconfined matter phase in the inner core of the compact star is composed of electrons (e) and quarks (up (u), down (d), and strange (s)). In this phase, the matter will be described by the DDQM model expressed as m_i = m_i0 + m_I, where m_i0 (i = u, d, s) is the current quark mass and m_I is the density dependent quantity which includes the interactions of the quarks. The motivation for the DDQM approach is to include the interactions between quarks in a simple way. It is important to consider how the interactions are included in (<ref>), details of introducing m_I and some other parameterizations were discussed in <cit.>. Here, we consider the parameterization for the density dependent quark masses proposed in <cit.>, given by m_i = m_i0 + D/n_b^1/3 + C n_b^1/3, where n_b is the baryon number density, and C and D are the parameters of this model. The second term in (<ref>) is associated with the linear confinement. Therefore, D is a low-density parameter whose value is model dependent. The third term in (<ref>) is also associated with the leading-order perturbative interactions, which dominate at the higher-density regions; its value is model dependent. Some estimates for C and D were determind in <cit.>, where they used stability as their benchmark for analysis. Also, in <cit.>, they found an estimate based on the relation between C and D and other physical quantities, such as the relation between D and string tension and the chiral restoration density. Besides in <cit.>, estimates were made based on stable radius. In this work, we use C=0.965 and √(D)=121 MeV which are within the unstable quark matter region in the model framework, otherwise, the hybrid star is likely to transform into a strange quark star within a short time of its existence. The quark masses used are 5 MeV, 10 MeV and 80 MeV for u, d and s quarks, respectively. In addition, to calculate the EoS for the quark matter we impose a charge neutrality condition given by the following expression: 2/3 n_u - 1/3 n_d - 1/3 n_s - n_e = 0, where n_i (i = u, d, s, e) are the number densities of each particle. The quarks and electrons interact via weak interactions, which can produce neutrinos. However, as we are studying cold stars (T = 0 MeV ), the chemical potential of these neutrinos can be set to zero, and the chemical potential of the quarks and electrons obey the β-equilibrium condition μ_u + μ_e = μ_d = μ_s. Moreover, the baryons in the system must be conserved, so we impose the baryon number conservation n_b = 1/3 (n_u +n_d +n_s). Incorporating a density dependency can lead to thermodynamic inconsistencies, and one way to avoid this is by including an effective chemical potential μ^*_i. Therefore, we can express the free-energy density f of the free particle system with masses m_i (n_b) and effective chemical potentials μ^*_i f = Ω_0 ( {μ^*_i }, { m_i }) + ∑_i μ^*_i n_i, where Ω_0 is the thermodynamic potential density of the free quarks with masses m_i given by (<ref>), and effective chemical potential μ^*_i. At T=0 MeV, Ω_0 is given by the following expression Ω_0 = - ∑_i g_i/24 π^2[ μ^*_i ν_i ( ν_i^2 -3/2 m_i^2 ) +3/2 m_i^4 lnμ^*_i +ν_i/m_i] where g_i=6=(3 colors × 2 spins) is the degeneracy factor, and ν_i are the Fermi momenta, which are now connected to the effective potentials by ν_i = √(μ^*2_i - m_i^2). Thus, the particle number density n_i is given by n_i = g_i/6 π^2 (μ^*2_i - m_i^2)^3/2 = g_i ν_i^3/6 π^2. 
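A minimal numerical sketch of the DDQM ingredients may be useful at this point. The fragment below evaluates the density dependent quark masses, the one-flavor contribution to Ω_0, and the quark number densities, using the parameter values quoted in the text (C = 0.965, √D = 121 MeV, current masses 5, 10 and 80 MeV). It is an illustration only: the conversion of n_b from fm⁻³ to MeV³ with (ħc)³ is our own choice of units, and the β-equilibrium and charge-neutrality closure described here and below is not implemented.

```python
# Minimal sketch of the DDQM ingredients quoted in the text (units: MeV and fm).
# Parameter values follow the text: C = 0.965, sqrt(D) = 121 MeV, current masses
# m_u0 = 5, m_d0 = 10, m_s0 = 80 MeV. hbar*c converts n_b [fm^-3] to MeV^3.
import numpy as np

HBARC = 197.327  # MeV fm
C, SQRT_D = 0.965, 121.0
M0 = {"u": 5.0, "d": 10.0, "s": 80.0}

def quark_mass(flavor, n_b_fm3):
    """m_i = m_i0 + D / n_b^(1/3) + C n_b^(1/3), with n_b expressed in MeV^3."""
    nb = n_b_fm3 * HBARC ** 3          # MeV^3
    cube = nb ** (1.0 / 3.0)           # MeV
    return M0[flavor] + SQRT_D ** 2 / cube + C * cube

def omega0_single(mu_star, m, g=6):
    """One-flavor contribution to Omega_0 at T = 0 (valid only for mu* > m)."""
    nu = np.sqrt(mu_star ** 2 - m ** 2)   # Fermi momentum
    return -g / (24.0 * np.pi ** 2) * (
        mu_star * nu * (nu ** 2 - 1.5 * m ** 2)
        + 1.5 * m ** 4 * np.log((mu_star + nu) / m)
    )

def quark_number_density(mu_star, m, g=6):
    """n_i = g nu_i^3 / (6 pi^2)."""
    nu = np.sqrt(mu_star ** 2 - m ** 2)
    return g * nu ** 3 / (6.0 * np.pi ** 2)

# Example: density-dependent s-quark mass at n_b = 0.6 fm^-3
print(quark_mass("s", 0.6))
```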
The chemical potential μ_i and the effective chemical potential are related through the relation μ_i = μ_i^* - μ_I, where μ_I is the density dependent quantity. Now, we can rewrite the β-equilibrium condition of Equation (<ref>) as μ_u^* + μ_e = μ_d^* = μ_s^*. Lastly, we can express the energy density ε_q and pressure P_q of the particles as ε_q = Ω_0 - ∑_i μ_i^* ∂Ω_0/∂μ_i^*, and P_q = -Ω_0 + ∑_i,j∂Ω_0/∂ m_j n_i ∂ m_j/∂ n_i. § DECONFINEMENT PHASE TRANSITION AND HYBRID EOS We develop the hybrid EoS using the two-phase model approach, where the hadronic and quark matter models are determined separately, and a hybrid EoS constructed through a phase transition. To determine the phase transition between hadronic and quark matter, we assume that the phase transition is first-order and Maxwell-like. In this scenario, the pressure in the mixed phase is constant. On the other hand, Gibbs-like phase transition is also well-known in the literature. However, it was determined that there are no significant differences between Maxwell and Gibbs constructions considering the microscopic properties of the hybrid stars <cit.>. We adopted the Maxwell-like construction for this study. Proceeding from the hadron and the quark models presented in Secs. <ref> and <ref>, we construct the hybrid EoSs for the stars by determining the EoS in the form of pressure as a function of chemical potential. We determine the crossing point between the hadronic and quark matter EoSs where the phase transition is energetically favorable. The crossing points can be seen in Fig. <ref>, and the corresponding critical chemical potential and pressure are in Tab. <ref>. The critical baryochemical potential, μ_c, and the critical pressure P_c at which the hadron and the quark matter are in mechanical and chemical equilibrium with each other is determined as P_H(μ) = P_Q(μ) = P_c, and μ_Q=μ_H=μ_c. This is shown in Fig. <ref> where the curves for P_H(μ) and P_Q(μ) intersect. It is important to state that, at μ_c, the quark phase becomes energetically favorable, and the hadron-quark phase transition occurs. The value of μ_c depends on the model employed, and its value indicates the point where hadronic and quark matter has the same chemical potential and pressure <cit.>. In this form, the lower baryon density region is composed of hadrons, and the higher-density region, where particles are in the deconfined state is described by the quark matter. In this construction, we ensure that the causality condition which forbids that the adiabatic speed of sound at zero frequency, c_s = √(∂ P/∂ε) does not exceed the constant speed of light. § RESULTS AND ANALYSIS In Fig. <ref>, we present the EoS for the hybrid NS. The hadronic EoSs, composed of different particle contents, show phase transitions at different densities. The smaller hadronic critical chemical potential, μ_c, is 1364 MeV corresponding to the stiffer hadronic EoS composed of nucleons. Adding hyperons to the nucleon matter increases the μ_c, as shown in Tab. <ref>, softening the EoS and delaying the phase transition. Again, including the Δ-isobars further softens the EoS at low densities and increases μ_c. Hence, the higher the μ_c, the softer the hadronic EoS involved. Additionally, chemical potentials between 949 MeV and ∼1364 MeV, the quark matter shows characteristics similar to the hadronic matter phase. In the hadron-to-quark phase where the curves for P and μ first intersect, as shown in Fig. <ref>, the EoS shows a sharp discontinuity as shown in Fig. <ref>. 
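In practice the Maxwell construction described above reduces to locating the crossing of the two P(μ) branches and then checking causality along the resulting hybrid EoS. The schematic Python fragment below illustrates this step under the assumption that both branches are available as tabulated (μ, P) arrays that cross exactly once in their overlapping range; it is not the code used for the results presented here.

```python
# Schematic Maxwell construction between two tabulated EoS branches,
# P_H(mu) and P_Q(mu), as described in the text. The input arrays are
# hypothetical; in practice they come from the RMF and DDQM calculations.
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import brentq

def maxwell_crossing(mu_H, P_H, mu_Q, P_Q):
    """Return (mu_c, P_c) where P_H(mu) = P_Q(mu), assuming a single crossing."""
    fH = interp1d(mu_H, P_H, kind="cubic")
    fQ = interp1d(mu_Q, P_Q, kind="cubic")
    lo = max(mu_H.min(), mu_Q.min())
    hi = min(mu_H.max(), mu_Q.max())
    mu_c = brentq(lambda mu: fH(mu) - fQ(mu), lo, hi)
    return mu_c, float(fH(mu_c))

def sound_speed_squared(eps, P):
    """c_s^2 = dP/d(eps) along a tabulated EoS; causality requires c_s^2 <= 1."""
    return np.gradient(P, eps)
```

Below μ_c the hadronic branch is kept, above it the quark branch, and the causality check is applied to the concatenated table.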
It has almost become standard these days to consider NSs with the entire spin-1/2 octet, since the hyperon puzzle can be circumvented in different ways. Lately, introducing the Δ-isobars in addition to the baryon octet is actively being investigated at zero temperature <cit.>, and at a finite temperature and entropy <cit.>. Even though the new degrees of freedom are expected to soften the EoS and reduce the maximum NS mass, adjusting the baryon-meson coupling of the non-nucleonic components of the stellar matter with experimental and astrophysical data within the RMF approximation reduces this effect <cit.>. We observe that the hadron-quark phase transition does not resolve the so-called “hyperon puzzle” as can be seen from our results and also pointed out in <cit.>. The perturbative and nonperturbative regions of the QCD theory with quark and hadron degrees of freedom exhibit different properties, respectively. Quark matter is approximately invariant under conformal symmetry, and hadron matter is not conformally invariant due to chiral symmetry breaking. These qualitative differences; can be observed by determining the values of some physical quantities, such as the speed of sound, polytropic and adiabatic indices, among others. Here, we will analyze the speed of sound c^2_s = ∂ P/∂ε and the polytropic index γ = ∂ln P/∂lnε, in Figs. <ref> and <ref>, respectively. The speed of sound is constant c^2_s=1/3 in the exactly conformal matter, and it approaches this value from below at high-density quark matter region <cit.>. The c_s informs us about the star's internal dynamics and composition using the stiffness of the corresponding EoS. Hence, the appearance of new degrees of freedom leads to different behaviors of the c_s in the stellar matter. The chiral effective field theory calculations show c^2_s ≪ 1/3, below the saturation density, whereas most hadronic matter at higher densities predicts c^2_s ≳ 0.5 <cit.>. These qualitative predictions are in good agreement with the result presented in Fig. <ref>. The c_s grows monotonically with density in the hadron region. However, the onset of new degrees of freedom, such as hyperons and Δ-isobars, leads to a sudden break in the monotonic behavior. A similar analysis was done in <cit.> using the adiabatic index. The little bump at the high-energy region, where quark matter is found, shows the appearance of a strange quark. Looking at Figs. <ref> and <ref> the appearance of Δ^- and Λ^0 at about 1.8 n_0 and 2.5 n_0 is immediately evident leading to the drop in γ and c_s. It has been argued that the characteristics of the c_s are associated with the size of the quark core in the hybrid star. As shown in <cit.>, the c_s in quark matter is related to the mass and radii of the investigated hybrid star. The authors showed that if c^2_s < 1/3, a massive NS is expected to have a massive quark core. From our model framework, we found that the most massive NS star is the one composed of only nucleons at low densities (see Tab. <ref>), in which case the quark core is responsible for 11% of the mass of the star. Moreover, the polytropic index, γ, attains a value of γ = 1 in the conformal matter region whereas, chiral effective field theory calculations and hadronic models predict γ≈ 2.5 around the saturation density <cit.>. In Fig. <ref>, γ starts rising in the hadronic matter region, with γ≈ 2 around the saturation density, until it peaks at γ≈ 3.5. 
Then, γ starts dropping to γ≈ 2 before it drops sharply, due to the hadron-to-quark phase transition at higher density regions, until it reaches γ≈ 1 in the quark matter phase, where the matter is expected to be conformal. Indeed the value of γ coincides with the theoretical prediction for the quark matter region, and the maximum value obtained in the hadronic matter region is slightly higher than the chiral effective field theory prediction. In Fig. <ref>, we show the particle composition (Y_i=n_i/n_B; where i is related to the different particle constituents in the system) of the star matter before, during, and after the hadron-quark phase transition. Before the phase transition, the star is composed of hadrons. As can be observed in Fig. <ref>, the star is mainly composed of non-strange baryons at densities below n∼ 2.3 n_0 before the strange baryons start showing up at about 2.5 n_0; with the Λ^0 being the most dominant. The Δ-isobars, on the other hand, appear at densities lower than 2 n_0. We have used black, red, and blue vertical lines to show where the hadron-quark phase transition starts (solid lines) and ends (dotted lines). Generally, we observe that adding new degrees of freedom to the stellar matter delays the hadron-quark phase transition. At densities of about 5 n_0, the core of the star is mainly composed of quark matter, with d-quark being the most dominant. However, the s-quark rises steadily with density while the d-quark decreases slowly with density. In Fig. <ref>, we use the EoSs generated from the study to calculate the mass and radii of the star using TOV equations <cit.>. Also, we added BPS to the EoSs to simulate the NS crust <cit.>. It is important to mention that the mass-radius diagram is sensitive to the particle content of the star at the medium to the central part of the star. As expected, the addition of new degrees of freedom to the stellar matter softens the EoS and, as a consequence, reduces the maximum mass of the star <cit.>. The results presented in the figure above and Tab. <ref> are well within the 2 M_⊙ constraints imposed on NSs <cit.>. Recent data from NICER <cit.> measure massive pulsar PSR J0740+6620 of mass 2.072^+0.067_-0.066 M_⊙ and radius 12.39^+1.30_-0.98km at 68% certainty <cit.>. Hence, there is a well-determined mass-radius window within which NSs can be described. Therefore, the model under discussion accommodates the description of PSR J0740+6620 with hyperons, delta particles, and quark matter in its core. Hypermassive stars with large quark cores were obtained in <cit.>, using the vector MIT Bag model and quantum hydrodynamic models, where the authors constructed hybrid stars with more than 80% quark core. However, with the DDQM and DDME2 models, we found a less massive quark core of about 11% of the mass of the star for a stellar matter composed of nucleons (most massive among NQ, NHQ, and NHΔQ). Comparing our results with <cit.>. We found that their softest quark matter EoS, corresponding to a Bag constant B^1/4=195 MeV, shows a hadron-quark phase transition at μ_c = 1266 MeV and P_c = 110 MeV. Meanwhile, our stiffer hadronic matter EoS, composed of nucleons, has a hadron-quark phase transition at higher values of μ_c and P_c, as presented in Tab. <ref>. Thus, we can infer that higher relative values of μ_c and P_c imply less massive quark cores in a hybrid NS. § CONCLUSION We studied the structure and composition of hybrid NSs constituted of nucleons, hyperons, and Δ-isobars admixed hypernuclear matter with a quark core. 
We constructed our EoSs for the study using the Maxwell method with a sharp hadron-quark phase transition. We determined the properties of the hybrid star, such as the EoS, c_s, γ, M_max, R_max and Y_i, and compared the results with both theoretical and observational data. Some of the results are listed below: * We determined a hybrid NS with a maximum mass within the 2 M_⊙ threshold, composed of the baryon octet, Δ-isobars, and a quark core. * We observed that hybrid NSs are more likely to be composed of non-strange baryons at low baryon densities, while baryons with strangeness are found towards the center of the star. * We established a relationship between the μ_c, hadron-quark phase transition, softening of the EoS, and the composition of particles in the star. Higher values of μ_c imply softer EoS, delay in the phase transition, and higher particle degrees of freedom in the star (i.e., either nucleon plus hyperons or nucleon plus hyperons plus Δ-isobars admixture). * Also, the value of P_c informs us about the phase transition, structure, and composition of the star. As can be seen in Tab. <ref>, higher P_c implies softer EoS, low M_max, small R_max, higher particle degrees of freedom and a delayed phase transition, as extensively discussed in the literature <cit.>. * The c_s calculated from the model framework shows c_s^2 ≈ 0.6 in the hadron region, and c_s^2 ≈ 0.3 in the quark matter region; well within the c_s^2 ≲ 1/3 threshold for conformal matter. These results are in good agreement with the theoretical predictions <cit.> * The value of γ determined in the model framework also shows good agreement with the theoretical predictions. In the quark matter region, we determined γ ≈ 1, and in the hadron region, we obtained a value slightly higher, γ ≈ 3.5, than the theoretical values predicted in the literature <cit.>. Our findings are comparable to previous works on hybrid NSs using different approaches. The characteristics of the c_s, γ and the theoretical evidence of a quark core in dense NSs matter are extensively discussed in <cit.>. The approach adopted here for the determination of the hadron-quark phase transitions and the construction of the hybrid EoSs is similar to what can be seen in <cit.>. However, we have used density dependent models to describe both the hadronic and quark matter models. § ACKNOWLEDGEMENTS This work is a part of the project INCT-FNA Proc. No. 464898/2014-5. D.P.M. was partially supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq/Brazil) under grant 303490-2021-7 and A.I. under grant 168546/2021-3. F.M.S. would like to thank FAPESC/CNPq for financial support under grant 150721/2023-4. § DATA AVAILABILITY The datasets generated and/or analyzed during the current study are available from the corresponding author upon reasonable request. mnras
http://arxiv.org/abs/2307.03264v1
20230706195523
Stellar Half-Mass Radii of $0.5<z<2.3$ Galaxies: Comparison with JWST/NIRCam Half-Light Radii
[ "Arjen van der Wel", "Marco Martorano", "Boris Haussler", "Kalina V. Nedkova", "Tim B. Miller", "Gabriel B. Brammer", "Glenn van de Ven", "Joel Leja", "Rachel S. Bezanson", "Adam Muzzin", "Danilo Marchesini", "Anna de Graaff", "Mariska Kriek", "Eric F. Bell", "Marijn Franx" ]
astro-ph.GA
[ "astro-ph.GA" ]
0000-0002-5027-0135]Arjen van der Wel Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281 S9, 9000 Gent, Belgium 0000-0003-2373-0404]Marco Martorano Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281 S9, 9000 Gent, Belgium 0000-0002-1857-2088]Boris Häußler European Southern Observatory, Alonso de Cordova 3107, Casilla 19001, Santiago, Chile 0000-0001-5294-8002]Kalina V. Nedkova Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218, USA 0000-0001-8367-6265]Tim B. Miller Department of Astronomy, Yale University, 52 Hillhouse Avenue, New Haven, CT 06511, USA 0000-0003-2680-005X]Gabriel B. Brammer Cosmic Dawn Center (DAWN) Niels Bohr Institute, University of Copenhagen, Lyngbyvej 2, 2100 Copenhagen, Denmark 0000-0003-4546-7731]Glenn van de Ven Department of Astrophysics, University of Vienna, Türkenschanzstrasse 17, 1180 Vienna, Austria 0000-0001-6755-1315]Joel Leja Department of Astronomy & Astrophysics, The Pennsylvania State University, University Park, PA 16802, USA Institute for Computational & Data Sciences, The Pennsylvania State University, University Park, PA 16802, USA Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA 0000-0001-5063-8254]Rachel S. Bezanson Department of Physics and Astronomy and PITT PACC, University of Pittsburgh, Pittsburgh, PA 15260, USA 0000-0002-9330-9108]Adam Muzzin Department of Physics and Astronomy, York University, 4700 Keele Street, Toronto, Ontario, ON MJ3 1P3, Canada 0000-0001-9002-3502]Danilo Marchesini Department of Physics and Astronomy, Tufts University, Medford, MA 0000-0002-2380-9801]Anna de Graaff Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117, Heidelberg, Germany 0000-0002-7613-9872]Mariska Kriek Leiden Observatory, Leiden University, P.O. Box 9513, 2300 RA, Leiden, The Netherlands 0000-0002-5564-9873]Eric F. Bell Department of Astronomy, University of Michigan, 1085 South University Avenue, Ann Arbor, MI 48109-1107, USA 0000-0002-8871-3026]Marijn Franx Leiden Observatory, Leiden University, P.O. Box 9513, 2300 RA, Leiden, The Netherlands Arjen van der Wel arjen.vanderwel@ugent.be We use CEERS JWST/NIRCam imaging to measure rest-frame near-IR light profiles of >500 M_⋆>10^10 M_⊙ galaxies in the redshift range 0.5<z<2.3. We compare the resulting rest-frame 1.5-2μm half-light radii (R_NIR) with stellar half-mass radii () derived with multi-color light profiles from CANDELS HST imaging. In general agreement with previous work, we find that R_NIR and  are up to 40% smaller than the rest-frame optical half-light radius R_opt. The agreement between R_NIR and  is excellent, with negligible systematic offset (<0.03 dex) up to z=2 for quiescent galaxies and up to z=1.5 for star-forming galaxies. We also deproject the profiles to estimate , the radius of a sphere containing 50% of the stellar mass. We present the R-M_⋆ distribution of galaxies at 0.5<z<1.5, comparing R_opt,  and . The slope is significantly flatter for  and   compared to R_opt, mostly due to downward shifts in size for massive star-forming galaxies, while  and  do not show markedly different trends. Finally, we show rapid size evolution (R∝ (1+z)^-1.7±0.1) for massive (M_⋆>10^11 M_⊙) quiescent galaxies between z=0.5 and z=2.3, again comparing R_opt,  and . 
We conclude that the main tenets of the size evolution narrative established over the past 20 years, based on rest-frame optical light profile analysis, still hold in the era of JWST/NIRCam observations in the rest-frame near-IR. § INTRODUCTION Projected light profiles of galaxies are widely used as proxies for their 3-dimensional stellar mass profiles, both at low and high redshifts. Under this assumption, great progress has been made in our understanding of the structure of galaxies and their assembly history. At the same time, we have known for decades that galaxies show color gradients <cit.>, implying that, given the correlation between color and stellar mass-to-light ratio <cit.>, M_⋆/L varies with radius, and generally peaks in the center for massive galaxies. The gradients arise due to a combination of radial variations in attenuation and stellar population properties (age, abundances, IMF). For early-type galaxies the color gradient is generally understood to be due to a radial variation in metallicity <cit.>, while for star-forming galaxies the stellar population gradients, in both age and metallicity, are significant but generally mild <cit.>, and centrally concentrated attenuation plays a dominant role, especially at higher redshifts as is now being revealed by JWST <cit.>. In order to interpret observations of light profiles in the context of theoretical models or simulations, these M_⋆/L gradients must be taken into account as their effect on the half- mass radius as inferred from observations can be very substantial (a factor ∼2), even at near-infrared wavelengths <cit.>. In addition, projected light or mass profiles can be difficult to interpret: a direct comparison with simulations requires a deprojection in three dimensions <cit.>, or the creation of mock observations by projecting simulated galaxies <cit.>. The interpretation of the redshift evolution of galaxy light profiles usually relies on the assumption that the M_⋆/L gradient does not (strongly) evolve, as most studies ignore color and M_⋆/L gradients. Several authors used observed color gradients in higher-redshift galaxies, first reported by <cit.> and <cit.>, to address the impact of M_⋆/L gradients on galaxy size estimates <cit.>. Generally, the results point to the existence of qualitatively similar color gradients at low and high redshift, but even relatively small changes can strongly affect the interpretation of the observed size evolution of galaxies <cit.>. Likewise, the evolution of galaxy geometry (the intrinsic, three-dimensional shape) is often overlooked. <cit.> show that geometry strongly evolves with redshift, which implies that the interpretation of projected light profiles must change with redshift, even if its impact has not yet been analyzed. With the arrival of JWST we can access for the first time the rest-frame near-IR light profiles of intermediate redshift (here, 0.5<z<2.3) galaxies that should provide a more direct proxy of the stellar mass profile since attenuation becomes negligible in most cases and variations in M_⋆/L as a function of age and metallicity are less strong. In this paper (Sec. 2) we use the first batch of NIRCam imaging from the CEERS program <cit.> that covers ≈ 4% of the full CANDELS dataset to test the robustness of stellar half-mass radii estimates based on resolved color profiles from HST imaging <cit.>. 
Early work by <cit.> already demonstrated that the rest-frame near-IR sizes from JWST/NIRCam are somewhat smaller than the rest-frame optical sizes as measured from HST/WFC3, supporting the previous results that stellar mass-weighted profiles are more compact than light-weighted profiles. Sec. <ref> describes the methodology to convert projected sizes into 3D sizes. In Sec. 3 we present size-mass distributions at for the different size proxies (rest-frame optical, mass-weighted, deprojected) and the average size evolution for massive quiescent galaxies. In Sec. 4 we summarize the results. We assume a flat ΛCDM cosmology with H_0=70 km s Mpc^-1 and Ω_m=0.3, and the <cit.> stellar initial mass function. § DATA AND METHODOLOGY §.§ A New Approach for Estimating Stellar Half-Mass Radii A variety of methods has been developed to convert light distributions into stellar mass maps, which can be divided along two axes. First, some methods create 2D mass maps <cit.>, which have the advantage of retaining all spatial information, while other methods create symmetrized (1D) profiles <cit.>, which have the advantage that they are more easily corrected for the PSF and, relevant to the topic at hand, more easily compared with standard methods to measure galaxy sizes. Second, the spatially resolved photometric information can be converted into M_⋆/L information by SED fitting <cit.>, or by the the application of color-M_⋆/L relations devised for integrated galaxy light <cit.> and applied to spatially resolved light distributions <cit.>. The former has the advantage that all available information is used, but (rest-frame) near-IR photometry is required to assign unbiased M_⋆/L values to dusty regions <cit.>. The latter has the advantage that color-M_⋆/L relations can leverage the knowledge of M_⋆ obtained from broad-band SED fitting across a wide wavelength range, including the near-IR. With any method, we should always keep in mind the uncertainties related to choices made to assign a `true' stellar mass, that is, uncertainties in the stellar population synthesis models and the implementation of absorption and scattering by dust – a discussion of these issues is beyond the scope of this paper. Since, in our case, we do not have spatially resolved rest-frame near-IR photometric information and the goal is to construct stellar half-mass radii for comparison with light-weighted radii, we choose to analyze 1D profiles and apply newly developed color-M_⋆/L relations. Our method consists of the following steps. First, we use M_⋆ estimates from SED fits over the full available wavelength range (UV-to-mid-IR) as described in Sec. <ref> to construct a (redshift-dependent) relationship between M_⋆/L and multiple HST colors in the observed frame (Sec. <ref>). Second, assuming that the same relationship holds within galaxies, we convert HST light profiles (Sec. <ref>) into stellar mass profiles via the multi-color-M_⋆/L relation (Sec. <ref>). The advantages of this method are multiple. Long-wavelength information is leveraged (via the SED-based M_⋆ estimates) in a redshift-dependent manner; that is, any evolution in the relationship between color and M_⋆/L, which is significant <cit.>, is automatically included. Furthermore, no conversion from observed to rest-frame colors is required, which removes template-related uncertainties. Finally, rather than a single color we use the shape of the SED for which spatially resolved information is available to estimate M_⋆/L. 
At each step we take care to formally propagate the uncertainties, resulting in robust uncertainties on the inferred stellar half-mass radii. §.§ Multiwavelength Photometry and SED Fitting The 3D-HST/CANDELS photometric catalog provided by <cit.> was used by <cit.> to estimate stellar masses, star-formation rates and other physical parameters with the Prospector-α  model <cit.>. The model uses the <cit.> stellar population FSPS, a non-parametric star-formation history, a two-component dust model, and optionally indcludes an enshrouded AGN. The Leja et al. catalog serves as the basis of our work and contains 63413 galaxies in the redshift range 0.5<z<3.0. This is a stellar mass-complete sample, where the completeness limit increases from log(M_⋆/M_⊙)≈ 8.7 at z=0.5 to log(M_⋆/M_⊙)≈ 10.1 at z=3.0. We define galaxies as quiescent when their 100 Myr-averaged star-formation rates are 0.8 dex or more below the star-forming sequence as defined by <cit.>. §.§ Derivation of Color-M/L Relations Our novel approach to derive mass profiles gradients rests on the assumption that the color-M/L relation within individual galaxies is identical to the relation among galaxies. We create a (multi-)color-M/L relation based on the full SED fitting results described above, for a set of colors for which we have spatially resolved information from HST. As such we leverage photometry with a much broader dynamic range in wavelength (UV to mid-IR) to infer M/L estimates based on a more limited range for which spatially resolved profiles are available (0.6 - 1.6 micron in the observed frame). In order to capture the effects of cosmological redshift and evolution of stellar populations simultaneously, we fit a relation of the following form: log(M_⋆/F_160) = a_0 + a_1log(1+z) + a_2log(F_606/F_160) + a_3log(F_814/F_160) + a_4log(F_125/F_160) where M_⋆ is the stellar mass estimate from the full SED fit, z is the redshift, and the F values are the total flux densities from the Skelton catalog in units of AB=25 magnitude in the respective HST filters (F606W, F814W, F125W and F160W). The fit minimizes χ^2, which is dominated by the uncertainties in M_⋆ rather than the photometric data. We fit two separate relations for quiescent and star-forming galaxies with stellar masses M_⋆>10^10 M_⊙ and over the redshift range 0.5<z<2.3, beyond which only one data point redward of the Balmer/4000Å remains and the uncertainties in M_⋆/L estimates increase markedly. We fit these relations to all galaxies with M_⋆>10^10 M_⊙ and measured flux densities in the four HST filters and give the coefficients in Table 1. The resulting log(M_⋆/F_160) proxy is shown in Figure <ref>. The overall scatter is 0.12 dex, increasing from 0.07 at z<1 to 0.15 at z∼ 2, which is less than or comparable to the typical uncertainty in SED-based M_⋆ estimates. The functioning of the method is further illustrated in Figures <ref> and <ref>. Figure <ref> shows a intuitively clearer version of Figure <ref>, displaying M_⋆/L values in the rest-frame V band. The tightness of the correlation across a large dynamic range demonstrates the general precision and accuracy of our method. Underlying this result is an observed-frame color-M/L relation that continuously changes with redshift (see Eq. 1). In Figure <ref> we show for two redshift bins the relationship between I_814-H_160 and M/L. The I_814-H_160 is one of three colors used to derive the color-based M_⋆/L and the scatter is due to additional color information from V_606 and J_125. 
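For concreteness, the fit of Eq. 1 amounts to a weighted linear least-squares problem. The sketch below shows one possible implementation, assuming per-galaxy arrays of redshift, the four HST total flux densities, and the SED-based log M_⋆ with its uncertainty (all variable names are ours and base-10 logarithms are assumed); it illustrates the procedure rather than reproducing the actual pipeline.

```python
# Illustrative weighted least-squares fit of Eq. 1; not the authors' pipeline.
# z, f606, f814, f125, f160 are per-galaxy arrays of redshift and total flux
# densities (AB=25 units); logm is log10(M_star) from the SED fits and
# sig_logm its uncertainty, which dominates the chi^2 as stated in the text.
import numpy as np

def fit_color_ml_relation(z, f606, f814, f125, f160, logm, sig_logm):
    y = logm - np.log10(f160)                       # log(M_star / F160)
    X = np.column_stack([
        np.ones_like(z),                            # a_0
        np.log10(1.0 + z),                          # a_1
        np.log10(f606 / f160),                      # a_2
        np.log10(f814 / f160),                      # a_3
        np.log10(f125 / f160),                      # a_4
    ])
    w = 1.0 / sig_logm                              # chi^2 weights from M_star errors
    coeffs, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
    return coeffs                                   # a_0 ... a_4

def predict_log_m_over_f160(coeffs, z, f606, f814, f125, f160):
    """Apply the fitted relation, e.g. annulus by annulus, to color profiles."""
    a0, a1, a2, a3, a4 = coeffs
    return (a0 + a1 * np.log10(1.0 + z) + a2 * np.log10(f606 / f160)
            + a3 * np.log10(f814 / f160) + a4 * np.log10(f125 / f160))
```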
At low redshift the set of colors contains sufficient information to reproduce the variety in SED-based M/L values at fixed color whereas at high redshift this is no longer the case due to a lack of information. If we fit a single relation to the joint population of star-forming and quiescent galaxies the distribution of points in Figure <ref> becomes somewhat non-linear for quiescent galaxies, perhaps because those are greatly outnumbered by star-forming galaxies at higher redshifts. Despite the apparent robustness of the method there are several caveats we have to keep in mind. The color-coding in Figure <ref> reveals a remaining systematic effect: for star-forming galaxies there is a stratification with star-formation activity in the sense that the HST colors overpredict M_⋆/L_V for galaxies with high star-formation activity, and vice versa. This does not translate into a stratification with attenuation; a degeneracy of stellar population properties must exist for a given (set of) colors. To what extent this issue affects the estimates of the stellar-half mass radius will be discussed where relevant. Additional conceptual caveats are the following. First, the Prospector stellar mass estimates serve as ground truth for our approach, but this ground truth itself is uncertain. We test for the sensitivity to this particular choice by comparing with the original 3D-HST stellar mass estimates presented by <cit.>. Even though the fitted parameters and resulting color-M/L relations change, the results after applying them to the observed color gradients as explained below do not differ significantly. Second, the work by <cit.> demonstrated that average M/L estimates from integrated photometry can be biased due to the outshining effect of young, unobscured regions. This bias propagates into our color-M/L relations and, more importantly, the impact on interpreting spatially resolved color information may differ. §.§ Converting Light Profiles to Stellar Mass Profiles §.§.§ Light Profile Fits and Color Gradients <cit.> describe the Sérsic profile fits performed with galfitM <cit.> on CANDELS imaging. The fits are performed on all available HST images in different filters simultaneously, fitting some parameters (axis ratio, position angle, position) as a constant while allowing others to vary quadratically as a function of wavelength (magnitude, effective radius, Sérsic index). Uncertainties on the parameters are usually underestimated and we increase the uncertainties as prescribed by <cit.>, who compared independent parameter estimates for the same objects and derived the `true' uncertainties as a function of S/N. The quadratic functions that describe the variation of the parameters with wavelength allow us to calculate Sérsic flux density profiles as a function of radius r at a common rest-frame wavelength of, e.g., 0.5μm (S_0.5(r)), with a half-light radius R_0.5 and a Sérsic index n_0.5. The ratios of Sérsic profiles at different wavelengths produce color profiles and can be used to define a color gradient between 0.5R_0.5 and 2R_0.5: Δ C = S_0.6(2R_0.5) / S_0.4(2R_0.5)/S_0.6(0.5R_0.5) / S_0.4(0.5R_0.5) The choices to evaluate the Sérsic profiles at 0.4μm and 0.6μm, and between 0.5 and 2 effective radii, are motivated only by pragmatic considerations: all galaxies in the sample have this wavelength coverage and at smaller and larger radii the profiles are more uncertain. Through sampling the uncertainties in the profile fits we infer propagated uncertainties in the color profiles. 
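Because the definition of Δ C above only involves ratios of Sérsic profiles evaluated at two radii, the profile amplitudes cancel and the color gradient depends solely on the effective radii and Sérsic indices interpolated to rest-frame 0.4 and 0.6 μm, plus the 0.5 μm half-light radius. The following sketch illustrates the calculation under that assumption; it is not the code used in this work.

```python
# Sketch of the colour-gradient diagnostic Delta C, assuming the Sersic
# parameters (effective radius and index) interpolated to rest-frame 0.4 and
# 0.6 micron are available from the wavelength-dependent galfitM fit, together
# with the 0.5-micron half-light radius r05. Amplitudes cancel in the ratio of
# ratios, so only the profile shapes are needed.
import numpy as np
from scipy.special import gammaincinv

def sersic(r, r_e, n):
    """Sersic surface-brightness shape (arbitrary normalisation)."""
    b_n = gammaincinv(2.0 * n, 0.5)   # standard Sersic b_n
    return np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

def color_gradient(re4, n4, re6, n6, r05):
    """Delta C = [S_0.6/S_0.4](2 r05) / [S_0.6/S_0.4](0.5 r05)."""
    ratio = lambda r: sersic(r, re6, n6) / sersic(r, re4, n4)
    return ratio(2.0 * r05) / ratio(0.5 * r05)

# With this convention a redder centre gives Delta C < 1, i.e. a negative
# gradient in the log.
```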
We note that the color gradient Δ C is not used in our method to derive M_⋆/L profiles and only serves to illustrate the strength of color gradient as a function of various galaxy parameters: Figure <ref> shows the evolution of the color gradient and its dependence on stellar mass and star-formation activity. Negative color gradients (redder centers; bluer outerparts) are ubiquitous, at all z and for all galaxy types. At z∼ 1 the measurement uncertainties are smaller than the population scatter, while at z∼ 2 they are similar, implying that at lower z we can distinguish galaxies with different color gradients while at higher z the observed scatter is dominated by measurement uncertainties. At fixed stellar mass star-forming galaxies generally have stronger gradients than quiescent galaxies. §.§.§ M/L Gradients and Mass Profiles The multi-color profiles are converted to M/L profiles using the color-M/L relations described in Section <ref>. At each radius (within an elliptical annulus) we have a measured value of F_160, which is multiplied by the right-hand side of Eq. 1, with z the redshift of the galaxies and where the flux ratios are given by the ratios of the four Sérsic profiles in same annulus. The result is a direct conversion of the F160W light profile into an M_⋆ profile. Since the inferred mass is based on a linear combination of different, inter-dependent Sérsic profiles, it matches a Sérsic profile itself (usually to within 0.1%); we refit a Sérsic profile to the mass profile out to 2× the effective radius in F160W, propagating – with a Monte Carlo simulation – the uncertainties on the individual light profile estimates and the color-M/L relation. In Figure <ref> we show how the stellar half-mass radius  compares with the optical half-light radius . There is a generally tight correlation, with a scatter that increases from ∼ 0.10 dex at z<1 to up to ∼ 0.2 dex at z>2. Since the scatter is generally similar to the formal uncertainties, we conclude that uncertainties dominate over intrinsic variations at the level of the precision that we achieve. The exception is the set of star-forming galaxies at z<1.5, for which the scatter (0.1-0.15 dex) is somewhat larger than the uncertainties (0.08-0.09 dex); this implies that we recover intrinsic variations in /from galaxy to galaxy (that are not explained by uncertainties). As expected based on the color gradients,  is generally smaller (by 0.1-0.15 dex) than , qualitatively consistent with previous work <cit.>. For quiescent galaxies we see an offset that is approximately constant with redshift (-0.10 to -0.14 dex across the entire redshift range), while for star-forming galaxies the difference decreases somewhat, from -0.17 dex at z<1 to -0.05 dex at z>2. At z<1 the largest galaxies are the most offset, which is due to the combination of a mild dependence on both M_⋆ and R (see Sec. <ref>). It is quite remarkable that, overall the color and M/L gradients are similar for star-forming and quiescent galaxies, as those in the former are mainly caused by a radial variation in attenuation <cit.>, even though, especially at lower redshifts, stellar population gradients are also present <cit.>, while the latter generally have little dust and the gradient must be primarily due to stellar population variations. The increase in scatter and downturn for galaxies with ≲ 1 kpc suggests that systematic errors affect the  estimates for the smallest galaxies. This is not surprising, since the HST/WFC3 PSF has a FWHM of ∼ 1.4 kpc. 
While light profiles can be well constrained at smaller scales, given sufficient S/N and accurate knowledge of the PSF, combining those from four different filters can lead to highly non-linear compound uncertainties that are difficult to propagate formally. Indeed, our formal uncertainties do not increase in line with the increased scatter. In Section <ref> we will address the precision and accuracy of our  estimates, including the behavior at ≲ 1 kpc. §.§ Comparison with NIRCam Effective Radii 435 galaxies in our M_⋆>10^10 M_⊙ sample (≈ 4%) fall within the footprint of JWST/NIRCam imaging from CEERS <cit.>. Martorano et al. (submitted) describe the galfitM <cit.> fitting procedure applied to the 7 short- and long-wavelength filter images and the estimation of rest-frame near-IR Sérsic profiles and associated half-light radii. After fitting the individual images a Chebyshev polynomial fit is used to calculate the interpolated half-light radii at the desired rest-frame wavelength (0.5, 1.5, or 2.0μm). Relevant for the current paper is that we find no offset between the  estimates from CANDELS and CEERS (<0.01 dex) and small scatter (∼0.05 dex). In Figure <ref> we compare the stellar half-mass radii described in Section <ref> with the NIRCam-based near-infrared half-light radii (rest-frame 2.0μm for z<1.5; rest-frame 1.5μm for z>1.5). We verified that this sub-sample is representative of the full CANDELS sample in terms of its  distribution: the statistical properties of the sub-sample are not significantly different from those shown in Fig. <ref>. Up to z=1.5 we see negligible offsets (≤ 0.03 dex), implying an absence of significant systematic biases in our stellar half-mass radii. The typical formal uncertainty on the stellar half-mass estimates (≤ 0.10 dex) is very similar to the scatter, implying robust uncertainties and a typical level of precision of 25% or better. For quiescent galaxies at 1.5<z<2 the performance is still good, without a systematic offset, and with uncertainties and scatter that are in agreement, while at z>2 the uncertainties increase and we are hampered by the small sample size as well. Reversing the question, we can also conclude that rest-frame near-IR light profiles represent stellar mass profiles well. That this is true is not immediately obvious, as age gradients would still cause M/L gradients even in the near-IR. Either these effects are small, or in the case of massive, high-metallicity galaxies this trend may be countered by an anti-correlation between metallicity and near-IR M/L. For star-forming galaxies we see a small bias (+0.05-0.07 dex) that is comparable in magnitude but opposite in sign to the offset with half-light radius (Fig. <ref>). This may imply that the stellar half-mass radii for star-forming galaxies are perhaps overestimated at z>1.5. Such issues are understandable, as it is challenging to obtain accurate M/L estimates with limited photometric information redward of the 4000Å break. The small bias in the color-M_⋆/L relation for star-forming galaxies identified in Section <ref> (particularly, Fig. <ref>) may explain this: if galaxies have a positive gradient in sSFR <cit.>, then their outer parts will have overestimated M_⋆/L with our method, leading to overestimated . Clearly, NIRCam-based photometry can alleviate these concerns, but modeling of NIRCam photometry is beyond the scope of this paper. Unfortunately, the current CEERS NIRCam sample is too small to assess the robustness of  estimates at <1 kpc.
In the 1<z<1.5 panel there is a hint that those are indeed somewhat underestimated as suggested by Fig. <ref>, if perhaps not by the same amount. The model NIRCam PSF is known to be inaccurate at some level and further progress in our understanding of the true NIRCam PSF is required for accurate size estimates of the smallest galaxies. We note that both the rest-frame optical sizes from HST and rest-frame near-IR sizes from JWST are based on imaging data with similar resolution (≈ 0.15 arcsec). Our statements regarding the precision and accuracy of our estimates are limited to the >1 kpc regime. §.§ Converting 2D Profiles to 3D Profiles The methodology developed by <cit.> allows us to convert our two-dimensional (projected) Sérsic light and mass profiles into three-dimensional profiles. The procedure builds on our (statistical) knowledge of the intrinsic shape distribution of galaxies as described by <cit.>: these authors constructed models for the projected shape distributions of galaxies of different types, masses and at different redshifts. These models assume a Gaussian distribution for the intrinsic axis ratios of a triaxial ellipsoid c/a (short-to-long axis ratio) and b/a (middle-to-long axis ratio) and/or a Gaussian distribution for the triaxiality parameter T = (a^2-b^2)/(a^2-c^2). Depending on the type of galaxy the model consists of a single oblate population (a≡ b), a single triaxial population, or a mixed model of two components (one oblate + one triaxial). Given these models, the redshift, stellar mass and star-formation activity of a galaxy produce an a priori probability distribution for its intrinsic shape, parameterized as truncated Gaussian distributions for ellipticity (E) and triaxilaty (T) (as published in Table 3 of <cit.> and Table 1 of <cit.>), and its projected shape q' = b'/a' (assuming random viewing angles for the intrinsic shape distribution). Then, given the measured projected shape q' from the CANDELS imaging (see Sec. <ref>), an a posteriori probability distribution for the intrinsic shape distribution is constructed <cit.>: for a given q exists a set of combinations of intrinsic shape and viewing angle. Instead of calculating this a posteriori probability distribution for each galaxy we construct a library of solutions because its calculation requires an inversion of a demanding numerical integral and we wish to sample from the measurement uncertainty in q and R in order to propagate this into the inferred constraint on the intrinsic shape and 3D size. Using these libraries we infer posterior probability distributions for the 3D profile and the associated parameters R_3D,M*, the radius of a sphere that contains 50% of the triaxial stellar mass distribution. Figure <ref> shows that the deprojection for galaxies in this mass range has a small effect on the inferred half-mass radii, as was already demonstrated for specific cases by <cit.>. The sub-optimal visualisation of the results is chosen deliberately to highlight the lesser importance of deprojection compared to the M_⋆/L gradient correction shown in Figure <ref>. For quiescent galaxies, which show a larger variety in intrinsic shapes than star-forming galaxies (at least, for M_⋆>10^10 M_⊙), the scatter in R_3D,M_⋆  /  R_M_⋆ is larger than for star-forming galaxies, but even for those the full range is no more than 30%, with a systematic offset of +0.05 dex. For star-forming galaxies the scatter is smaller, and the systematic offset almost zero. 
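To illustrate the first ingredient of this machinery, namely the mapping from an intrinsic triaxial shape to the distribution of projected axis ratios under random viewing angles, a simple Monte Carlo can be instructive. The toy sketch below projects the second-moment tensor of an ellipsoid with axis ratios b/a and c/a onto randomly oriented sky planes; it does not implement the truncated-Gaussian priors or the posterior inversion of the cited works, and is only meant to convey the idea behind the shape library.

```python
# Toy Monte Carlo for the prior distribution of projected axis ratios q' of a
# triaxial ellipsoid seen from random viewing angles. Not the actual machinery
# used in this work; intrinsic axis ratios b/a and c/a are free inputs.
import numpy as np

def projected_axis_ratios(b, c, n_view=10000, seed=0):
    rng = np.random.default_rng(seed)
    C = np.diag([1.0, b**2, c**2])            # second-moment tensor (a = 1)
    v = rng.normal(size=(n_view, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # random lines of sight
    q = np.empty(n_view)
    for i, vi in enumerate(v):
        P = np.eye(3) - np.outer(vi, vi)      # projector onto the sky plane
        lam = np.sort(np.linalg.eigvalsh(P @ C @ P))[1:]  # two non-zero eigenvalues
        q[i] = np.sqrt(lam[0] / lam[1])       # projected b'/a'
    return q

# e.g. a mildly triaxial shape with b/a = 0.9, c/a = 0.3
q_prior = projected_axis_ratios(0.9, 0.3)
```

As noted above, the net effect of the deprojection on the inferred half-mass radii is small for the mass range considered here.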
In other words, the projected effective radius, measured as the long axis on the ellipse that encloses 50% of the light or mass, generally serves as accurate and precise proxy for the median radius, the radius of a sphere that contains 50% of the light or mass distribution in 3D. For low-mass star-forming galaxies, as shapes become more irregular, the deprojection will have a larger effect. Symmetrized uncertainties are defined as half of the 16th-84th percentile ranges of the posterior distributions, which produces a typical combined uncertainty in R due to the deprojection of 0.03 dex (σ in Fig. <ref>). One caveat is that we have ignored the wavelength dependence on the shape. But given the lack of sensitivity to changes in intrinsic shape, we can be confident that this approximation does not affect the results. Another caveat is related to the finding by <cit.> that a correlation exists between (intrinsic) shape and size for star-forming galaxies. This implies that size should, in principle, be included in our construction of the a posteriori probability distribution for the 3D profile. We test for the necessity of this additional step by varying the models, shifting the Gaussian means by 2σ up and down. The differences are negligible (on the 1% level), which implies that the current setup – where we ignore the covariance between size and shape – is sufficient for our purposes.[The corollary implication is that differences in shape as a function of, e.g., redshift and mass are not relevant for the conversion from 2D to 3D profile in the first place, but this was not a foregone conclusion.] Finally, the stellar masses, star-formation rates and definition of quiescence used in this paper are not the same as those used by <cit.> and <cit.>, but given the minor effects of the deprojection these differences have no impact on the overall result. § THE SIZE-MASS DISTRIBUTION AND ITS EVOLUTION §.§ Comparing Size Proxies The significant but approximately constant offset between the rest-frame optical half-light radii and stellar half-mass radii, along with a lack of strong projection effects, imply that the view of the size-mass distribution of galaxies will not strongly depend on the choice of size proxy. In Figure <ref> we show size-mass distributions for the redshift range 0.5<z<1.5, for light-weighted radii and both projected (2D) and deprojected (3D) stellar half-mass radii. Regardless of size proxy we see the same characteristic size distribution, with a steep slope for quiescent galaxies and a shallower slope for star-forming galaxies. The combined median trend bends downward at M_⋆≈ 2× 10^10 M_⊙, a transitional point in the structural properties of present-day galaxies first identified by <cit.>. The downward trend is particularly noticeable at z>1. Similarities aside, there are a number of small but interesting differences between the size proxies. Switching from light- to mass-weighted sizes strengthens the flattening/bending trend with mass further, an immediate result of stronger color and M/L gradients seen for more massive galaxies. Projection effects play a relatively small role in the demographics of the size-mass distribution. But note that the dashed and solid median lines in the bottom-right panel of Fig. <ref> are nearly perfectly parallel. This implies that the joint effect of M_⋆/L-gradient correction and deprojection leads to a constant shift in median galaxy sizes across more than 2 orders of magnitude in M_⋆. 
The scatter in sizes decreases somewhat when performing these corrections: rather than increasing the overall error budget, the corrections get us closer to what can be considered true, physical sizes, here defined as 3D stellar half-mass radii. Increased uncertainties in the R_M_⋆ estimates prevent us from presenting a similarly reliable view of the size-mass distribution at higher redshifts, in particular for star-forming galaxies. We should also keep in mind that the color-M_⋆/L relations devised in this paper are constructed on the basis of galaxies more massive than M_⋆ > 10^10 M_⊙, therefore the  estimates of low-mass galaxies may be biased, but this issue is likely not important as a general absence of strong color gradients implies a general absence of strong M_⋆/L gradients. The other caveat is that we concluded in Section <ref> that ≈ 1 kpc size estimates are suspect if color gradients are present. The small-size tail of quiescent galaxies may therefore suffer from currently unknown systematic effects and increased random uncertainties. That said, the average sizes general exceed 2 kpc so that the trends shown in Figure <ref> are robust. ccccccccc 12 Stellar Half-Mass Radii Field ID R.A. Dec. z log M_⋆ Quiescent Flag R_M_⋆ R_M_⋆,3D deg. deg. M_⊙ kpc kpc 1 19 215.299759 53.051308 1.076 9.510^+0.076_-0.079 0 2.578±0.371 2.547±0.551 1 28 215.264175 53.027222 0.763 8.893^+0.167_-0.142 0 1.840±0.234 1.746±0.324 1 46 215.293457 53.048298 1.217 10.650^+0.031_-0.027 1 0.744±0.120 0.862±0.154 1 55 215.298920 53.052399 0.697 9.045^+0.076_-0.088 0 2.557±0.294 2.564±0.496 1 63 215.296555 53.050770 1.219 9.931^+0.049_-0.052 0 2.847±0.317 2.997±0.534 1 83 215.302460 53.055332 0.725 9.808^+0.037_-0.043 0 2.422±0.449 2.665±0.633 1 110 215.297195 53.051907 0.694 10.294^+0.049_-0.036 1 1.528±0.290 1.935±0.395 1 145 215.281769 53.042686 0.784 9.360^+0.055_-0.160 0 4.058±0.504 4.034±0.796 1 148 215.248138 53.019707 0.933 8.947^+0.086_-0.116 0 1.760±0.419 1.794±0.511 1 151 215.252319 53.022339 0.679 9.738^+0.052_-0.062 0 2.300±0.224 2.159±0.332 ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ (1): Field (1: EGS; 2: COSMOS; 3: GOODS-N; 4: GOODS-S; 5: UDS); (2): ID from <cit.>; (3): R.A. from <cit.>; (4): Dec. .from <cit.>; (5) Stellar mass from <cit.>, with 16th- and 84th-perceintile uncertainty range; (6) Star-forming (0) or quiescent (1) (Sec. <ref>); (7) Projected (2D) stellar half-mass radius (Sec. <ref>), defined as semi-major axis of half-mass ellipse; Deprojected (3D) stellar half-mass radius (Sec. <ref>), defined as radius of sphere containing 50% of the stellar mass. §.§ Separating Star-Forming and Quiescent Galaxies The clear pattern with star-formation activity in the size-mass distribution (Fig. <ref>) invites a closer look at the size distributions for star-forming and quiescent galaxies separately. Figure <ref> shows the star-forming galaxies and the most eye-catching result is the stellar-half mass radius depends less strongly on stellar mass than the half-light radius. Galaxies near the knee of the stellar mass function have  just ≈ 2 times larger than galaxies 100× less massive, a slope of 0.15 dex. Also note the correlation between size and projected axis ratio, particularly at M_⋆<10^10 M_⊙: as shown by <cit.>, galaxies that are flat in projection are more likely to have prolate/elongated 3D shapes and have larger (projected) sizes than galaxies with an oblate (disk-like) shape. When deprojecting the mass distribution this effect is lessened, but without a meaningful change in median size or scatter. 
This implies that for individual galaxies a shape-dependent deprojection correction can improve the accuracy of the size estimate, but that such a correction is not necessary to correctly infer the ensemble size distribution. For quiescent galaxies (Fig. <ref>) the size-mass distribution is strikingly different from that of star-forming galaxies, but rather similar for the different size proxies. The M_⋆/L gradient correction shifts the sizes downward, partially countered by an upward shift when correcting for projection effects. The distribution is flat up until ≈ 2-3× 10^10 M_⊙ <cit.>, followed by a steep increase toward larger masses (seen by many authors). Correlations with projected axis ratio are less obvious compared to those seen for star-forming galaxies, as most galaxies are oblate or round/triaxial, but the smallest galaxies in the mass range 10^10.5-11 M_⊙ are flatter (and therefore diskier) than average. |c|ccc|ccc|ccc||ccc|ccc|ccc| Median Radii and Percentiles 9c||0.5<z<1 9c|1<z<1.5 2-19 3c|log R_0.5μm 3c|log R_M_⋆ 3c||log R_M_⋆,3D 3c|log R_0.5μm 3c|log R_M_⋆ 3c|log R_M_⋆,3D log M_⋆ (M_⊙) 16% 50% 84% 16% 50% 84% 16% 50% 84% 16% 50% 84% 16% 50% 84% 16% 50% 84% 8.8 0.01 0.26 0.50 0.03 0.25 0.47 0.03 0.25 0.46 -0.04 0.17 0.41 -0.03 0.17 0.37 -0.02 0.16 0.35 9.0 0.06 0.31 0.54 0.06 0.30 0.50 0.07 0.29 0.49 0.02 0.29 0.50 0.02 0.25 0.45 0.02 0.23 0.43 9.2 0.09 0.34 0.56 0.09 0.32 0.53 0.10 0.32 0.52 0.05 0.31 0.50 0.04 0.26 0.45 0.05 0.25 0.43 9.4 0.13 0.41 0.62 0.12 0.37 0.58 0.14 0.38 0.57 0.10 0.35 0.56 0.10 0.31 0.50 0.10 0.31 0.48 9.6 0.15 0.42 0.65 0.14 0.38 0.59 0.15 0.38 0.58 0.16 0.39 0.60 0.14 0.34 0.54 0.15 0.34 0.52 9.8 0.21 0.46 0.70 0.19 0.40 0.61 0.21 0.41 0.60 0.18 0.42 0.63 0.17 0.37 0.56 0.18 0.37 0.56 10.0 0.25 0.49 0.71 0.19 0.42 0.63 0.20 0.42 0.63 0.23 0.47 0.68 0.17 0.42 0.59 0.19 0.42 0.57 10.2 0.26 0.54 0.73 0.17 0.45 0.62 0.19 0.45 0.62 0.24 0.51 0.69 0.17 0.44 0.61 0.19 0.45 0.61 10.4 0.18 0.50 0.75 0.07 0.40 0.64 0.12 0.41 0.64 0.26 0.57 0.72 0.17 0.45 0.63 0.22 0.47 0.64 10.6 0.18 0.49 0.75 0.05 0.37 0.61 0.11 0.39 0.62 0.04 0.47 0.71 -0.05 0.36 0.61 0.00 0.39 0.62 10.8 0.21 0.45 0.76 0.08 0.32 0.60 0.13 0.37 0.62 0.06 0.39 0.74 -0.03 0.26 0.64 0.00 0.31 0.64 11.0 0.31 0.49 0.76 0.20 0.39 0.63 0.24 0.45 0.68 0.14 0.36 0.72 0.02 0.27 0.59 0.07 0.32 0.62 11.2 0.41 0.59 0.81 0.29 0.51 0.70 0.35 0.57 0.75 0.29 0.53 0.74 0.21 0.41 0.62 0.25 0.44 0.65 11.4 0.57 0.77 0.94 0.48 0.66 0.88 0.56 0.70 0.97 0.50 0.65 0.81 0.35 0.53 0.72 0.41 0.57 0.74 11.6 0.71 0.85 0.95 0.54 0.71 0.89 0.60 0.79 0.99 0.67 0.82 0.98 0.61 0.66 0.95 0.66 0.72 0.98 Percentiles correspond with the lines in Figure <ref>. 
|c|ccc|ccc|ccc||ccc|ccc|ccc| Median Radii and Percentiles for Star-Forming Galaxies 9c||0.5<z<1 9c|1<z<1.5 2-19 3c|log R_0.5μm 3c|log R_M_⋆ 3c||log R_M_⋆,3D 3c|log R_0.5μm 3c|log R_M_⋆ 3c|log R_M_⋆,3D log M_⋆ (M_⊙) 16% 50% 84% 16% 50% 84% 16% 50% 84% 16% 50% 84% 16% 50% 84% 16% 50% 84% 8.8 0.01 0.26 0.50 0.03 0.25 0.47 0.03 0.25 0.46 -0.04 0.17 0.41 -0.03 0.17 0.38 -0.02 0.16 0.35 9.0 0.06 0.32 0.55 0.06 0.31 0.50 0.07 0.30 0.49 0.02 0.29 0.50 0.02 0.25 0.45 0.02 0.23 0.43 9.2 0.09 0.35 0.57 0.10 0.33 0.54 0.10 0.33 0.52 0.05 0.31 0.50 0.05 0.26 0.45 0.05 0.25 0.44 9.4 0.14 0.42 0.63 0.13 0.38 0.58 0.14 0.38 0.57 0.11 0.35 0.56 0.11 0.31 0.50 0.11 0.31 0.48 9.6 0.17 0.45 0.67 0.16 0.40 0.60 0.17 0.40 0.59 0.16 0.39 0.60 0.15 0.35 0.54 0.16 0.34 0.52 9.8 0.24 0.48 0.71 0.22 0.42 0.62 0.23 0.42 0.61 0.19 0.43 0.63 0.18 0.38 0.57 0.20 0.37 0.56 10.0 0.29 0.52 0.72 0.23 0.44 0.65 0.24 0.45 0.64 0.24 0.48 0.68 0.19 0.43 0.59 0.20 0.43 0.58 10.2 0.36 0.58 0.74 0.29 0.48 0.63 0.30 0.47 0.64 0.31 0.54 0.71 0.24 0.46 0.62 0.26 0.46 0.62 10.4 0.39 0.61 0.80 0.29 0.50 0.67 0.31 0.51 0.66 0.39 0.61 0.74 0.33 0.50 0.65 0.35 0.51 0.66 10.6 0.45 0.63 0.80 0.31 0.49 0.67 0.32 0.50 0.68 0.37 0.59 0.76 0.24 0.48 0.65 0.27 0.49 0.66 10.8 0.48 0.70 0.86 0.33 0.54 0.70 0.35 0.55 0.71 0.45 0.67 0.82 0.34 0.56 0.70 0.36 0.58 0.71 11.0 0.56 0.74 0.90 0.42 0.59 0.74 0.45 0.60 0.75 0.43 0.65 0.83 0.35 0.52 0.69 0.38 0.54 0.72 11.2 0.67 0.76 0.89 0.52 0.63 0.73 0.54 0.65 0.76 0.58 0.72 0.82 0.42 0.59 0.70 0.44 0.60 0.72 11.4 0.66 0.87 1.02 0.57 0.71 0.93 0.65 0.76 1.00 0.56 0.69 0.86 0.45 0.59 0.72 0.49 0.62 0.72 Percentiles correspond with the lines in Figure <ref>. |c|ccc|ccc|ccc||ccc|ccc|ccc| Median Radii and Percentiles for Quiescent Galaxies 9c||0.5<z<1 9c|1<z<1.5 2-19 3c|log R_0.5μm 3c|log R_M_⋆ 3c||log R_M_⋆,3D 3c|log R_0.5μm 3c|log R_M_⋆ 3c|log R_M_⋆,3D log M_⋆ (M_⊙) 16% 50% 84% 16% 50% 84% 16% 50% 84% 16% 50% 84% 16% 50% 84% 16% 50% 84% 8.8 -0.06 0.13 0.33 -0.12 0.11 0.29 -0.05 0.17 0.29 0.13 0.13 0.16 0.05 0.14 0.17 0.09 0.14 0.19 9.0 0.04 0.22 0.36 0.05 0.19 0.33 0.09 0.23 0.35 0.02 0.23 0.35 0.04 0.19 0.37 0.11 0.23 0.36 9.2 0.02 0.18 0.36 -0.03 0.13 0.36 0.04 0.19 0.43 -0.07 0.10 0.40 -0.09 0.12 0.36 -0.08 0.15 0.39 9.4 0.05 0.20 0.36 0.01 0.18 0.34 0.06 0.22 0.40 -0.01 0.16 0.41 -0.03 0.16 0.40 -0.00 0.20 0.40 9.6 0.04 0.24 0.34 -0.05 0.16 0.29 0.02 0.21 0.35 0.08 0.22 0.43 0.05 0.22 0.34 0.09 0.22 0.36 9.8 -0.02 0.22 0.37 -0.03 0.19 0.34 -0.02 0.22 0.36 -0.04 0.18 0.26 -0.19 0.05 0.27 -0.14 0.12 0.33 10.0 -0.04 0.19 0.40 -0.11 0.09 0.32 -0.08 0.15 0.35 0.07 0.31 0.47 0.05 0.21 0.46 0.11 0.23 0.50 10.2 -0.10 0.23 0.44 -0.17 0.12 0.32 -0.11 0.14 0.36 -0.07 0.11 0.36 -0.15 0.00 0.27 -0.06 0.05 0.29 10.4 0.00 0.19 0.41 -0.12 0.12 0.30 -0.08 0.15 0.35 -0.08 0.11 0.49 -0.15 0.08 0.37 -0.11 0.14 0.40 10.6 0.06 0.24 0.50 -0.07 0.13 0.41 -0.03 0.19 0.44 -0.06 0.08 0.41 -0.19 -0.02 0.29 -0.12 0.04 0.35 10.8 0.15 0.33 0.49 0.03 0.21 0.38 0.09 0.25 0.43 0.00 0.18 0.44 -0.10 0.05 0.30 -0.05 0.11 0.35 11.0 0.29 0.42 0.61 0.17 0.31 0.51 0.21 0.36 0.58 0.10 0.29 0.54 -0.03 0.17 0.43 0.01 0.20 0.49 11.2 0.40 0.56 0.75 0.28 0.48 0.66 0.34 0.54 0.73 0.25 0.38 0.63 0.15 0.29 0.50 0.17 0.34 0.56 11.4 0.56 0.71 0.91 0.47 0.64 0.88 0.55 0.69 0.95 0.46 0.61 0.78 0.32 0.51 0.69 0.39 0.56 0.76 11.6 0.75 0.87 0.99 0.67 0.74 0.88 0.68 0.79 0.98 0.66 0.81 0.86 0.60 0.65 0.88 0.66 0.70 0.94 Percentiles correspond with the lines in Figure <ref>. 
§.§ Size Evolution of Massive Quiescent Galaxies For massive (M_⋆ > 10^11 M_⊙), quiescent galaxies our size estimates are robust at z>1.5 and we can probe the evolution of the size-mass distribution out to z=2.3.[The data for this subset of galaxies is included in Table <ref>.] Figure <ref> shows the evolution with redshift of the sizes of massive, quiescent galaxies, comparing the three definitions: light-weighted (R_0.5μm∝ (1+z)^-1.64± 0.09), mass-weighted (R_M_⋆∝ (1+z)^-1.72± 0.15), and deprojected mass-weighted (R_M_⋆,3D∝ (1+z)^-1.79± 0.16). The key result is that the size evolution is significant, regardless of the choice of size proxy. At all redshifts the mass-weighted sizes are smaller than the light-weighted sizes by similar amounts, 0.1-0.15 dex, in line with the early NIRCam-based results from <cit.> that showed that NIRCam/F444W sizes are smaller than NIRCam/F150W sizes by ≈ 0.15 dex. The deprojection from 2D to 3D (Sec. <ref>) shifts the sizes upward by about 0.05 dex. For most purposes, the measured sizes of massive galaxies do not require a deprojection correction to enable a meaningful comparison with the sizes of simulated galaxies based on 3D stellar particle distributions. The above analysis does not consider the steep slope of the size-mass relation for massive galaxies, and the effect that evolution in slope or differences in slope among the three size proxies might have on the inferred evolution. But the similarity in slope for the three size proxies seen in Figure <ref> and a lack of strong change in slope with redshift suggest that the effects are mild at most. Indeed, the size evolution result shown in Figure <ref> does not depend on the precise selection in M_⋆: galaxies with 10^11<M_⋆/M_⊙ < 2× 10^11 and galaxies with M_⋆>2× 10^11 M_⊙ show the same pace of evolution for all three size proxies, well within the uncertainties. There are two effects that may bias the inferred size evolution. First, differences in slope among the size proxies could propagate into the inferred pace of evolution; however, the slopes are very similar for the three size proxies (see Fig. <ref>), so that any dependence on slope in the parameterization of size evolution is the same for all three proxies. Second, a shift in the M_⋆ distribution with redshift would introduce a bias in the estimated pace of evolution. Repeating the analysis with a fixed slope of ΔlogR/ΔlogM_⋆=0.6 changes the pace of evolution by less than 25% of the uncertainty. Our result that the half-mass and half-light radii of massive quiescent galaxies evolve rapidly with redshift and in a similar manner (with R∝(1+z)^α and α=-1.6… -1.8) is in tension with previous work. <cit.> argue that a correction for M_⋆/L gradients removes much of the size evolution at z>1 seen in the rest-frame optical, so that the stellar half-mass radius, on average, evolves much less than the stellar half-light radius or not at all. This raises the question of how the previously published stellar half-mass radii compare with the rest-frame near-IR sizes from NIRCam. In Appendix A we provide an extensive and quantitative comparison, the result of which is, in short, that the R_M_⋆ estimates constructed in this paper produce the smallest offset and scatter when compared to NIRCam-based sizes. In addition, for our half-mass radii the uncertainties are similar to the scatter in the comparison with the NIRCam sizes (typically, 0.10 dex at z<2; also see Fig.
<ref>), whereas for previously published estimates the formal uncertainties are smaller (≲ 0.05 dex) and likely underestimated, as was already pointed out by <cit.>. The good agreement over the redshift range 0.5<z<2 for our half-mass radii argues in favor of our conclusion that the sizes of massive quiescent galaxies strongly evolve with cosmic time, in line with previous results based on rest-frame optical size measurements <cit.>. § SUMMARY & OUTLOOK Our novel method to estimate stellar half-mass radii for a large sample of galaxies at 0.5<z<2.5 drawn from CANDELS and 3D-HST rests on leveraging the integrated UV-to-midIR photometric information that is available for these galaxies. We derive a relationship between the HST/ACS+WFC3 colors and the M/L estimated from the UV-to-midIR SED (Sec. <ref>) and apply that relationship to the spatially resolved color profiles (Sec. <ref>). The underlying assumption is that the distribution of physical properties (age, metallicity, attenuation, etc.) among galaxies is comparable to that within galaxies. Moreover, we infer 3D sizes based on the deprojection machinery developed by <cit.> and our knowledge of the shape distribution of galaxies and its dependence on stellar mass and redshift (Sec. <ref>). The R_M_⋆ and R_M_⋆,3D estimates are made publicly available online – see Table <ref> for the first 10 entries of the catalog. An essential test of the reliability of our stellar half-mass radii is provided by the comparison with size measurements from JWST/NIRCam imaging in the rest-frame near-IR for a (for now) limited subset of galaxies in CEERS. The agreement is excellent (Sec. <ref>). First, systematic offsets are less than 10% for quiescent galaxies up to z=2 and for star-forming galaxies up to z=1.5. Second, the scatter is small (typically, 25%) and consistent with the formal error budget. The comparison with NIRCam demonstrates that our stellar half-mass radii are precise and accurate under the assumption that rest-frame near-IR sizes are, indeed, a good proxy for stellar-mass weighted sizes. As briefly discussed in Section <ref> this is not self-evident. Even in the near-IR the M_⋆/L evolves strongly with age. Either our R_M_⋆ and R_NIR sizes agree because they both accurately trace the stellar mass distribution, or they suffer from the same systematic bias. The latter is a distinct possibility: if our M_⋆/L are overestimated in regions with high star-formation activity (see Sec. <ref>) then lower M_⋆/L_NIR are to be expected as well. Previously published half-mass radii do not perform equally well, as described in Sec. <ref>, with larger systematic offsets and larger scatter, while reporting smaller formal uncertainties. The main caveat of the present analysis is that for small objects (≲ 1 kpc) the systematic uncertainties are not well understood, not only because this is near the resolution limit of HST in the optical and JWST/NIRCam in the near-IR, but also because the NIRCam PSF is not sufficiently well understood at the moment. In Section <ref> we show the effects of correcting for M_⋆/L gradients and deprojecting the 2D distribution on the size-mass distribution of galaxies at 0.5<z<1.5. Compared to the rest-frame optical size distribution, the stellar half-mass radius - stellar mass relation is less steep (Fig. <ref>), while deprojection affects the size-mass distribution only slightly. A separation between star-forming and quiescent galaxies (Sec.
<ref>) shows that the flattening of the size-mass relation is driven by massive star-forming galaxies, which have the largest downward M_⋆/L correction. For quiescent galaxies, the deprojection counters the downward M_⋆/L correction, and the size-mass distribution in the rest-frame optical is very similar to the 3D half-mass radius distribution, modulo a small (≈ 0.05 dex) downward shift. The medians and percentile values of the various size distributions shown in Figures <ref>, <ref> and <ref> are provided in Tables <ref>, <ref>, and <ref>, respectively. In Section <ref> we show that the average R_M_⋆ and R_M_⋆,3D of massive, quiescent galaxies evolve rapidly from z=2.3 to z=0.5, with R∝ (1+z)^-1.7±0.1, and at the same pace as the rest-frame optical half-light radii. Now that NIRCam imaging datasets across larger volumes are becoming available from COSMOS-Web <cit.> and JADES <cit.>, with sample sizes similar to those drawn from CANDELS, our stellar mass profiles will be superseded by rest-frame near-IR profiles and by the modeling of the optical-to-near-IR light profiles as pioneered by <cit.>. But our work provides a simple conversion from light-weighted to stellar-mass-weighted sizes for galaxies without spatially resolved near-IR imaging, that is, those without NIRCam imaging or those at high redshift (z>6) where even NIRCam only samples the rest-frame optical. The key point of our work is that spatially resolved optical colors accurately predict the sizes of galaxies in the rest-frame near-IR, which is generally considered a robust proxy for the stellar mass distribution. § ACKNOWLEDGMENTS MM acknowledges the financial support of the Flemish Fund for Scientific Research (FWO-Vlaanderen), research project G030319N. All the HST and JWST data used in this paper can be found in MAST: [10.17909/z7p0-8481]http://dx.doi.org/10.17909/z7p0-8481, [10.17909/T94S3X]http://dx.doi.org/10.17909/T94S3X. § COMPARISON WITH R_M_⋆ ESTIMATES FROM THE LITERATURE Previously published estimates of stellar half-mass radii based on CANDELS data are shown in Figures <ref> and <ref> in the same manner as in Figures <ref> and <ref>. Figure <ref> compares, for four different authors, the rest-frame optical half-light radii R_opt[Except for our own estimates, which are at rest-frame 0.5μm, the plotted values are measured from HST/WFC3/F160W CANDELS imaging] with half-mass radii R_M_⋆, each time comparing the radii as published by the authors. The one exception is the top-right panel (see figure caption for details). Even though all R_M_⋆ estimates are systematically smaller than R_opt, the offsets and scatter vary from author to author. Our estimates show offsets and scatter similar to those of <cit.>, adopting their preferred Method 1 estimates. The main difference is that their uncertainties are several times smaller than ours. The R_M_⋆ estimates by <cit.> and <cit.> show much larger scatter, and very small formal uncertainties (≲ 0.05 dex). Taken at face value this means that these authors find a large (0.2-0.3 dex) galaxy-to-galaxy scatter in R_M_⋆/R_opt (in fact, larger than the scatter in R_opt at fixed mass). We note that in all cases the R_opt agree well between the authors, with small scatter and no systematic offsets. The largest scatter (0.1 dex) is found when comparing with <cit.>, who model the light profiles in a fundamentally different manner (multi-Gaussian expansion rather than Sérsic profile fitting).
These differences do not explain the different trends and patterns seen in Figure <ref> – rather, those are due to the variety in techniques used to correct for M_⋆/L gradients (see discussion in Sec. <ref>). The comparison with the NIRCam-based rest-frame near-infrared sizes used in this paper provides additional insights, as illustrated in Figure <ref>. As already demonstrated in Section <ref>, the R_M_⋆ estimates presented in this paper compare well with the NIRCam sizes, with a reasonably small scatter of similar magnitude to the formal uncertainties. The R_M_⋆ estimates by <cit.> and <cit.> show much larger scatter, especially for quiescent galaxies (0.2-0.3 dex), suggesting low precision, especially when compared to the very small formal uncertainties (0.01-0.06 dex). Unfortunately, the <cit.> R_M_⋆ estimates for the CEERS NIRCam sample are not available for z>1 and the comparison is limited to z<1. For those galaxies the <cit.> R_M_⋆ estimates agree fairly well with the NIRCam sizes, with somewhat larger scatter than our estimates. It should be kept in mind that the light profiles used by <cit.> are not identical to our profiles, so that the comparison is not entirely fair and straightforward. The scatter in R_M_⋆ between <cit.> and ours is 0.05 dex, which can explain the difference in scatter for the star-forming galaxies. The sample of quiescent galaxies is too small to make a reliable statement. We must now address the question of where the tension arises between the conclusions presented by <cit.>, who find little or no evolution in R_M_⋆ for quiescent galaxies at z>1, and the results presented here in Figure <ref>, with strong evolution in R_M_⋆ for massive quiescent galaxies up to z=2.3. The size comparisons in Figures <ref> and <ref> do not provide immediate answers. Figure <ref> shows the evolution in the mass-to-light weighted size ratio (R_M_⋆/R_opt), comparing the results presented in this paper and those from <cit.>. In both cases the measurements from the respective authors are used without matching catalogs. Differences in redshift measurements, stellar mass estimates and size estimates can all contribute to differences in the comparison. Notably, the stellar mass estimates used here are systematically larger by 0.2 dex, which is here accounted for by lowering the stellar mass cut. Up to z≈ 2 the patterns are similar, with R_M_⋆ estimates that are 0.1-0.2 dex lower than R_opt, both works showing no evidence for a different pace of evolution in R_M_⋆ compared to R_opt. The main difference arises at z>2, where <cit.> find no offset and we do. For <cit.> the scatter in R_M_⋆/R_opt at z>1.5 (>0.2 dex) is larger than the scatter in R_opt. For our own estimates this is not the case, but we see an increased number of outliers at z>1.5. These trends suggest that uncertainties start dominating over the corrective effect of accounting for M_⋆/L gradients at z>2. A more definitive statement on the evolution of R_M_⋆ beyond z=2 will have to wait for NIRCam size measurements for larger samples.
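The offsets and scatter quoted in this appendix are simple robust statistics of the size differences between matched catalogs; a minimal sketch, with hypothetical input arrays, is given below.

```python
import numpy as np

def offset_and_scatter(log_r_ours, log_r_nircam):
    """Median offset and robust (NMAD) scatter of Delta log R between two matched catalogs."""
    delta = np.asarray(log_r_ours) - np.asarray(log_r_nircam)
    offset = np.median(delta)
    nmad = 1.4826 * np.median(np.abs(delta - offset))
    return offset, nmad

# Usage with hypothetical matched arrays of log sizes (dex):
# off, scat = offset_and_scatter(log_rmass_this_work, log_r_nircam_f444w)
```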
http://arxiv.org/abs/2307.00945v1
20230703113641
The Schwinger effect by axial coupling in natural inflation model
[ "Mehran Kamarpour" ]
gr-qc
[ "gr-qc" ]
http://arxiv.org/abs/2307.02474v1
20230705174711
Constraints on cosmologically coupled black holes from gravitational wave observations and minimal formation mass
[ "Luca Amendola", "Davi C. Rodrigues", "Sumit Kumar", "Miguel Quartin" ]
astro-ph.CO
[ "astro-ph.CO", "gr-qc" ]
^1Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, 69120 Heidelberg, Germany ^2Departamento de Física & Cosmo-Ufes, Universidade Federal do Espírito Santo, 29075-910, Vitória - ES, Brazil. ^3PPGCosmo, Universidade Federal do Espírito Santo, 29075-910, Vitória - ES, Brazil ^4Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut), D-30167 Hannover, Germany. Leibniz Universität Hannover, D-30167 Hannover, Germany ^5Instituto de Física, Universidade Federal do Rio de Janeiro, 21941-972, Rio de Janeiro, RJ, Brazil ^6Observatório do Valongo, Universidade Federal do Rio de Janeiro, 20080-090, Rio de Janeiro, RJ, Brazil We test the possibility that the black holes (BHs) detected by LIGO-Virgo-KAGRA (LVK) may be cosmologically coupled and grow in mass proportionally to the cosmological scale factor to some power k, which may also act as the dark energy source. This approach was proposed as an extension of Kerr BHs embedded in cosmological backgrounds and possibly without singularities. In our analysis, we test these cosmologically coupled BHs either with or without connection to dark energy. Assuming that the minimum mass of a BH with stellar progenitor is 2M_⊙, we estimate the probability that at least one BH among the observed ones had an initial mass below this threshold, thereby falsifying the hypothesis. We consider either the primary m_1 or the secondary m_2 BHs of 72 confidently detected gravitational wave events and adopt two different approaches. In the first one, we directly derive the probability from the observed events, and we obtain a tension with the k=3 scenario at the level of 2.6σ and 3.05σ for the m_1 or m_2 cases respectively. In the second approach, we assume the LVK power-law-plus-peak (PLPP) mass distribution, which takes into account the observational bias, and we find tensions at the level of 3.7σ and 4.0σ for the m_1 and m_2 masses respectively. We show that these bounds can be alleviated by allowing lower k values or faster BH mergers (i.e., shorter delay times t_ d). In particular, the m_1-based analysis results in the following 2σ upper bounds: k ≤ 2.4 for the direct approach or k ≤ 1.7 in the PLPP approach. For k = 0.5, a value previously studied in the gravitational wave context, we find no relevant constraints, even when increasing the minimum BH mass to ∼ 4 M_⊙. Finally, we show that future observations should quickly strengthen these bounds. Constraints on cosmologically coupled black holes from gravitational wave observations and minimal formation mass Luca Amendola^1, Davi C. Rodrigues^1,2,3, Sumit Kumar^4, Miguel Quartin^3,5,6 August 1, 2023 § INTRODUCTION Recently, a new intriguing hypothesis about the origin of the cosmic acceleration has been put forward in <cit.>, with further developments in <cit.>. According to this scenario, black holes (BHs) grow in mass due to a form of cosmological coupling unrelated to local accretion. If this growth is fast enough, it could compensate for the decrease in number density due to the expansion, and generate a form of effective cosmological constant. These BHs deviate from the standard Kerr solution since they are expected to model non-singular solutions that are asymptotically Friedmann-Robertson-Walker, rather than Minkowski <cit.>.
These non-standard BH solutions could provide an average pressure that would constitute the entire amount of dark energy needed to explain the cosmic acceleration if the BHs have the necessary abundance. Farrah et al. <cit.> argue that this is the case. This new BH solution is at the moment a conjecture, but it presents several theoretical advantages: it does not need a new enigmatic ingredient in the cosmic recipe as dark energy and it automatically solves the issue of the “cosmic coincidence”. It would have nevertheless been just an interesting but very speculative idea were it not for the fact that in a companion paper, Farrah et al <cit.> have found strong indication in favor of just such a cosmological growth of supermassive BHs in elliptical galaxies. This growth seems very difficult to explain in terms of the standard local growth channels of accretion. More recently Lei et al. <cit.> used JWST data and found a conflict with Farrah et al. <cit.> parametrization at redshifts z ∼ 4.5 - 7. These high redshift results are mostly independent of our analysis: the constraints we find come from lower redshifts, as it will be shown when constraining the maximum redshift of binary BH formation (z_ max). These results, if confirmed, have an impact on the dark energy interpretation. This paper is devoted to testing the cosmologically coupled BHs (CCBH) by looking at the current and future datasets from LIGO-Virgo-KAGRA (LVK). The current understanding is that the gravitational waves (GW) detected by LVK come from the merging of BHs with stellar progenitors. If the CCBH hypothesis is correct, they must have grown to the observed mass from an initially lower mass. However, BHs with a stellar progenitor cannot be formed with arbitrarily low masses. Observationally, there is evidence of a paucity of BH masses between 2-5 M_⊙ <cit.>, while there is no conclusive evidence for a BH with mass below 2 M_⊙ (see also <cit.>). Neutron star (NS) stability studies based on the Tolman-Oppenheimer-Volkoff (TOV) equation predict that nonrotating NS could have masses at least as high as 2.2 M_⊙ <cit.>, which sets a natural lower bound for BHs forming through stellar collapse. Incidentally, rotating NS have been measured with masses as high as 2.7 M_⊙ <cit.>, and studies of binary systems containing NS find an empirical upper limit as high as 2.6 M_⊙ <cit.>. Here we adopt a more conservative lower bound, namely we assume that stellar BHs should not be formed with a mass lower than 2 M_⊙. This is less restrictive than, e.g., the minimum BH mass of 2.7 M_⊙ considered to find the proper Ω_Λ value from CCBHs <cit.>. In this paper, we quantify the probability of BHs with initial mass below the 2 M_⊙ threshold with two complementary approaches, explore ways to alleviate the tensions[We use the word “tension” whenever we find a rejection to a level higher than 2σ.] we find, and briefly discuss future prospects. We conclude that the CCBH as proposed in <cit.> is in strong tension with what we know about stellar progenitor BHs, but there still is an open parameter space where it can survive the present test. In particular, we find no relevant tension for the CCBH case studied in <cit.>. The forthcoming new GW datasets will soon shed further light on the CCBH conjecture. The codes we used for this work are available at <https://github.com/itpamendola/CCBH-direct> and <https://github.com/davi-rodrigues/CCBH-PLPP>. § COSMOLOGICALLY COUPLED BLACK HOLES Farrah et al. 
<cit.> considered three samples of red-sequence elliptical galaxies at different redshifts and found that the growth of supermassive black holes (BHs) is significantly larger than the growth of stellar mass, being a factor of 20 from z∼ 2 to z∼ 0. This growth is too large to be compatible with the expected accretion rate <cit.>. This suggests a different growth mechanism such that m_ BH = m_* (1+z)^-3.5 ± 1.4, at 90% confidence level, where m_* is the stellar mass of the galaxy and m_ BH the supermassive BH mass of the same galaxy. A possible explanation for the above physics comes from the proposal of cosmologically coupled BHs (CCBHs) <cit.>. In this case, BHs would grow following the parametrization <cit.> m(a) = m(a_i) ( a/a_i )^k , where k≥ 0 is a constant, a_i is the cosmological scale factor at the time of the BH formation and m(a_i) is its mass at that time. In <cit.> the value k≈ 3 was advocated, which would both explain the supermassive BHs growth and provide a source of dark energy capable of generating the observed Ω_Λ value. The latter requires further assumptions, in particular a proper star formation rate, that all the remnants with mass > 2.7M_⊙ are BHs, and that all the BHs follow the above mass parametrization. Criticisms of this connection with dark energy have appeared <cit.>. If all BHs are cosmologically coupled with k=3, Rodriguez <cit.> pointed out that this would be in contradiction with globular cluster NGC 3201 data, since it would imply that one of the BHs would have a mass below 2.2 M_⊙ by the time of its formation, a less conservative minimum BH mass than our threshold. This mass value is too low for a stellar BH, being instead compatible with a non-rotating neutron star <cit.>. A similar test on two Gaia DR3 stellar-BH systems with reliable age estimation has been carried out in Ref. <cit.>, resulting in the 2σ upper limit k≤3.2 assuming the same 2.2 M_⊙ limit and fixing the background to ΛCDM. In Ref. <cit.> it was also found that the rate of mergers and their typical masses in a CCBH scenario would be hardly compatible with LVK observations; they also point out that CCBHs should exhibit lower spins due to their increase in mass. Our purpose here is to test if the BHs detected from their coalescence waves could be cosmologically coupled. Before the results from <cit.>, Croker et al <cit.> (see also <cit.>) developed simulations of merging BHs and considered the impact of the cosmological coupling on the LVK detected BHs, showing that k=0.5 would be preferred over the standard k=0, at least for certain isolated-binary-evolution model. We use here the most recent data from LVK, together with more recent delay time expectations. A crucial difference between this work and the works <cit.> on CCBH and LVK data is that they started from a given BBH formation mass, assumed to be realistic, and consider if they could mimic LVK data from that initial mass distribution. Here we aim to estimate what is the probability that at least one of the observed BHs via LVK would be formed with an unrealistically low mass value (thus in part similar to <cit.>). A key quantity for modelling the CCBH effects on LVK data is the delay time t_ d (i.e. the interval between BBH formation and merger), which is detailed in Sec. <ref>. Within the general class of CCBHs, we distinguish two cases. 
If BHs have dark energy implications and constitute its only source, as proposed in <cit.>, then the constant k has a direct connection with the dark energy equation of state parameter w (assumed constant), with k ≡ - 3 w . We call this scenario dark energy BHs, DEBH. If instead BHs have no impact on dark energy, then the mass increase of BHs can still be parametrized by a function of z, but the microphysics is supposed to be independent of cosmology. This might occur if the growing BHs' contribution to the total energy density is subdominant, or if there exist alternative non-cosmological growth channels. In this scenario, we assume no deviation from ΛCDM cosmology. This is the case studied in, for instance, <cit.>. We call this model growing BHs, GBH. The two models, DEBH and GBH, have identical background cosmological evolution for k=3 but diverge otherwise. We consider both in the following. § DELAY TIME AND THE MASS CORRECTING FACTOR The merging of binary black hole (BBH) systems detected by LVK is commonly considered to be the end of a pair of BHs that orbited together from several Myr to several Gyr before the merger <cit.>. These BH masses are consistent with their being remnants of stellar progenitors, and this constitutes the standard interpretation (e.g., <cit.>). It is expected that most of the BBHs detected by LVK were formed from binary stellar systems that evolved into a pair of BHs and later merged (e.g., <cit.> and references therein). The relevant time during which the CCBH effect (<ref>) is active extends from the BHs' formation to their merger. We call this time the BBH delay time and denote it by t_ d. We note that another delay-time definition, as the time between the stellar pair formation and the BBH merger, is also used in the literature. However, they typically differ by a few Myr <cit.>, hence both definitions are essentially the same. For the GBH scenario, we consider any value of k in the range 0≤ k ≤ 3, where k=0 corresponds to the standard case (uncoupled Kerr BHs). Apart from the k=3 case, a particular value we will consider with some attention is k=0.5, since it was preferred in the analysis of <cit.>. For the DEBH case, changing the k value changes the cosmology. For clarity, this case will be parameterized with a constant w, instead of k. We mainly consider -0.6 ≤ w ≤ -1. We do not consider more negative w values since they can only strengthen our constraints; moreover <cit.> argues that the maximum value of k is 3. This case has no limit that leads to standard BHs. The references <cit.> suggested that the distribution of delay times can be approximated by a log-uniform distribution (i.e., p(t_d) ∝ 1/t_d) with 0.05 < t_d (Gyr) < 13.5 for BBHs. It is also pointed out that the formation of the first BBHs is restricted to z < 10. This picture is in good agreement with simulations and observational constraints (see e.g., <cit.>). These simulations are in the context of ordinary (Kerr) BHs. CCBHs, on the other hand, grow with time and therefore their delay times will also change: since larger BH masses dissipate energy faster through GW emission, the delay time of CCBHs should be smaller than for ordinary BHs for the same initial mass and orbit <cit.>. Although for given initial conditions the merger time will be shorter for CCBH, there are other factors that may increase the delay time of the detected BBH mergers. In particular, as commented in <cit.>, BBHs that would not merge before z=0 in the standard picture may merge if CCBH is true.
Due to such unknowns, we use a log-uniform distribution for t_ d, varying three parameters that have a direct impact on the t_ d values (t_ min, t_ max and z_ max, as detailed below). Moreover, in Appendix <ref> we explore the possibility of a steeper PDF for the t_ d distribution (i.e., smaller delay times on average), which reduces the tensions we find in the main analysis. Our results impose constraints on the parameters that describe the delay time distribution for the CCBH theory. We believe that the parameters we consider are wide enough to include t_d distributions for CCBHs. We therefore adopt the log-uniform t_ d distribution, log t_ d∼ U(log t_ min, log t_ x), where t_ x is the minimum between the maximum delay time t_ max and the time difference between the merger redshift (z_ m) and the maximum redshift with BBH formation (z_ max). As reference values, we consider t_ min=0.05 Gyr, t_ max=13.0 Gyr, z_ max=10, w=-1, k=3 . Besides these reference values, we also explore other combinations. We anticipate that the CCBH tension that we find here either increases or stays constant if t_ max, z_ max, or t_ min are increased. For the cosmological model we assume Ω_ m = 0.32 and H_0 = 70 km s^-1 Mpc^-1. Ref. <cit.> studied a possible correlation between t_ d and m_1 using observational data and found a marginal preference for smaller masses to have larger t_ d. We do not consider such a correlation here, but if future analyses confirm this mass delay-time correlation, it will result in stronger bounds on the CCBH model from GW data. In any case, the true delay-time distribution and its dependence on the mass are still uncertain (see for instance <cit.>). Let us now consider a BH that is formed at z_i and merges after a delay time t_d at z_m. Then t_d = \int_{z_m}^{z_i} \frac{dz}{(1+z)\,H(z)} . This can be inverted for a given w using H^2 = H_0^2 \left[ \Omega_m (1+z)^3 + (1-\Omega_m)(1+z)^{3(1+w)} \right] , leading to z_i=z_i(z_m,t_d) . The initial mass m_i of a BH is then a function of z_m, t_d and k, and proportional to the mass m_m at merger time, m_i = m_m \left( \frac{1+z_i(z_m,t_d)}{1+z_m} \right)^{-k} . From a given set of observed BH masses and a t_ d distribution (<ref>), we aim to find the probability that none of the observed BHs was formed with mass below 2 M_⊙ (i.e., m_i < 2M_⊙). If this probability is close to 1, then there is no tension between observations and the CCBH model. Otherwise, there is tension between the observational data, the model and the given assumptions. We will show that the latter is the case. Alternatively, one should invoke significant changes in some of the basic assumptions, e.g. a lower threshold for BH formation, shorter delay times, a late BH formation, or a different growth scheme. We estimate this probability in two complementary ways. In the first one, we directly use the current dataset and find the joint probability that at least one BH was born with a mass below threshold under the CCBH hypothesis. In this way, we do not need to assume a mass distribution. Since we do not correct the observed distribution for the selection effects, we implicitly discard low-mass BHs from the estimation, and therefore we end up with more conservative, but possibly more robust, estimates. We call this the direct method. In the second method, we derive the expected initial mass distribution of BHs taking into account the selection bias of the detectors. We use the power-law-plus-peak (PLPP) profile <cit.> to this end. This method leads to stronger constraints. We denote this as the PLPP method.
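As an illustration of how the relations above can be evaluated in practice, the following minimal Python sketch inverts the delay-time integral numerically for a flat wCDM background and returns the corresponding CCBH formation mass. It is a schematic re-implementation under the reference values quoted above, not an excerpt from the authors' public codes, and all function names and constants are ours.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Reference cosmology and coupling from the text
H0, Om, w, k = 70.0, 0.32, -1.0, 3.0
KM_PER_MPC = 3.0857e19
GYR_IN_S = 3.156e16
H0_PER_GYR = H0 / KM_PER_MPC * GYR_IN_S      # H0 expressed in 1/Gyr

def H(z):
    """Hubble rate in 1/Gyr for flat wCDM."""
    return H0_PER_GYR * np.sqrt(Om * (1 + z)**3 + (1 - Om) * (1 + z)**(3 * (1 + w)))

def delay_time(z_m, z_i):
    """Cosmic time elapsed between formation (z_i) and merger (z_m), in Gyr."""
    return quad(lambda z: 1.0 / ((1 + z) * H(z)), z_m, z_i)[0]

def z_formation(z_m, t_d, z_max=10.0):
    """Invert t_d(z_m, z_i) for z_i; cap at z_max if formation would be earlier."""
    if delay_time(z_m, z_max) < t_d:
        return z_max
    return brentq(lambda z_i: delay_time(z_m, z_i) - t_d, z_m, z_max)

def initial_mass(m_m, z_m, t_d):
    """Formation mass of a CCBH merging with mass m_m at z_m after a delay t_d."""
    z_i = z_formation(z_m, t_d)
    return m_m * ((1 + z_i) / (1 + z_m))**(-k)

# Example: a 40 Msun BH merging at z_m = 0.3 after an 8 Gyr delay
print(initial_mass(40.0, 0.3, 8.0))
```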
The GWTC-3 data we use are shown in Table <ref> and in Fig. <ref>. These data come from confident BBH and NSBH events <cit.> that satisfy p_ astro > 0.5 and FAR_ min < 1 yr^-1. In our analysis, we use separately either the primary m_1 masses or the secondary m_2 ones. Therefore, each selected mass corresponds to an independent history of compact binary evolution and merger. Considering all the m_1 and m_2 in a single analysis would be incorrect since binary BHs have the same t_ d and would therefore not be independent. Although we consider here results with either primary or secondary masses, emphasis is given to the results for m_1 masses, since these produce more robust and more conservative constraints. For the direct method, this choice automatically removes BHs that are outliers with particularly low mass and have a large impact on the statistics used. For the PLPP method, the m_1 data are more robust since their distribution depends on one fewer parameter (with non-negligible uncertainty) than the m_2 distribution. In principle, one could consider m_2 masses for all the BBH cases, and change to m_1 masses for the NSBH systems, but the impact on the results is small since there are only one or two NSBH systems in our selected sample. The selected sample, Table <ref>, has only two systems classified as NSBH by LVK, namely: GW200105_162426 and GW190917_114630. This classification depends on the adopted minimum mass for BHs, and <cit.> considers 2.5 M_⊙. When considering that the minimum mass is 2M_⊙, there remains a single NSBH, GW200115_042309. Excluding all events with secondary mass less than 5M_⊙ as potential NS or outliers, we are left with 69 m_2 data points. § DIRECT CONSTRAINTS FROM THE OBSERVED EVENTS Here we discuss the direct method. The formation redshift that a BH of merging mass m_m observed at z_m should have in order to form with a given threshold mass m_ th is given by Eq. (<ref>) as z_{\rm th} = (1+z_m) \left( \frac{m_{\rm th}}{m_m} \right)^{-1/k} - 1 , and the corresponding delay time is t_{\rm th} = t_d(z_m, z_{\rm th}) = t_d(z_m, m_m, m_{\rm th}) . If the delay time is larger than t_ th, the BH would have formed with a mass below the threshold. If t_ th plus the merger age t(z_m) is larger than the cut-off t_ max, we take t_ th=t_ max-t(z_m). Analogously, if z_ th is larger than, say, z_ max=10, we cut it at z_ max to prevent formation at an unrealistically early epoch. Now, given a normalized delay-time distribution Ψ(t_d), the probability that a BH has formation mass above m_ th is p_i(z_m, m_m, m_{\rm th}) = \int_0^{t_{\rm th}} \Psi(t_d)\, dt_d . The combined probability of having N BHs within the acceptable formation mass range >m_ th is P(m_1 > m_{\rm th}) = \prod_i^N p_i , and therefore the probability of at least one below-threshold BH is 1-P. In order to reject the CCBH hypothesis, we should find a small P for the currently observed BHs. In other words, the p-value for rejecting the CCBH hypothesis is p=P(m_1>m_ th). Since GW observations preferentially pick high-mass BHs, the constraints we derive are on the conservative side. We consider both the DEBH case, in which the BH growth is linked to the dark energy so that k=-3w, and the alternative GBH scenario in which the BH growth does not influence the cosmological expansion. For simplicity, in this second case, we fix w=-1, i.e. the standard cosmological constant. In each case, there is then just one BH-cosmological parameter (in addition to the astrophysical ones, namely t_ min, t_ max, z_ max, m_ th): either k for the GBH case, or w for the DEBH case. We illustrate in the corner plot Fig.
<ref> the exclusion plots for various combinations of parameters for both scenarios, implicitly fixing all the other parameters to the reference case. In this and in all subsequent corner plots, the reference case (for which GBH and DEBH coincide) is always at the upper right corner; moving beyond this point increases the rejection level of the CCBH hypothesis. The main result is that, for DEBH and in the reference case, the probability of having no BHs below threshold is 0.0083, corresponding to 2.64σ. Reducing the threshold mass to 1 M_⊙, the rejection level falls to approximately 2σ. Using instead the m_2 masses, and excluding as potential outliers (perhaps neutron stars) the two compact objects with masses in the range 2-5 M_⊙, we obtain, as expected, a higher rejection level of 3.05σ. Decreasing w into the phantom regime w<-1 makes the result stronger. For w<-1.2, using m_1 masses the rejection is at the 3σ level (again, fixing all the other parameters to reference). In Fig. <ref> we show the distribution of probabilities for 1000 realizations of the current data randomly chosen within the z_m and m_m errorbars (assuming a bivariate normal uncorrelated distribution for each point[We converted the 90% credibility regions of the GWTC-3 catalog into approximated Gaussian distributions, and imposed a minimum merger mass of 5M_⊙.]) for the reference case. The average and standard deviation are (2.655± 0.023)σ. This narrow distribution shows that sticking with the best fit z_m, m_m values is an acceptable approximation. We also notice that the merger redshifts z_m are obtained from the luminosity distance by assuming a ΛCDM evolution. However, in the DEBH scenario, the background is ΛCDM only for w=-1, so for any other value of w we should derive a new set of z_m. This correction is however on average Δ z=0.02 for w=-0.6, and smaller for w closer to -1. This is negligible with respect to the current uncertainty in z_m, so we neglect it. We can also estimate P(m_1>m_ th) for a number of events twice as large as the current one by simply duplicating each current event. More generally, one trivially has P(α N)=P(N)^α, if α N is the number of events in the forecast. With α=2 (a number of events that can be reached in a few months) the reference scenario would be rejected with p≈ 7· 10^-5, i.e. 4σ. The dependence on the threshold mass is illustrated in Fig. <ref>. The level of rejection of the DEBH increases quite fast with m_ th. For the GBH scenario, as can be seen from Fig. <ref> and Fig. <ref>, values of k<2.4 are acceptable at, or better than, the 2σ level. For the other parameters, the ranges for which the tension is reduced below the 2σ level are t_ max < 8.7 Gyr and z_ max < 4. Finally, in Fig. <ref> we plot the individual probabilities p_i as a function of the BH mass for the reference case. As expected, small BHs are more likely to originate from below-threshold masses. However, all values of p_i are relatively close to unity (larger than 0.8), implying that the currently observed BHs are more likely to have formed above the threshold than below it. It is the combined probability, rather than some peculiar outlier, that leads to the conclusion that at least one BH should have been formed below threshold. However, one has to be cautious: when many more events are detected, some outlier might bias the product of probabilities.
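For the log-uniform delay-time distribution adopted above, the integral defining p_i has a simple closed form, and the conversion of P into a Gaussian-equivalent significance can be checked directly. The sketch below assumes that prescription; the array of per-event threshold delay times is a placeholder, not the set of values derived from GWTC-3.

```python
import numpy as np
from scipy.stats import norm

def p_above_threshold(t_th, t_min=0.05, t_x=13.0):
    """P(t_d < t_th) for a log-uniform delay-time distribution on [t_min, t_x] (Gyr).
    This equals the probability that the BH formed with mass above the threshold."""
    t_th = np.clip(t_th, t_min, t_x)
    return np.log(t_th / t_min) / np.log(t_x / t_min)

def combined_p_value(t_th_list):
    """Joint probability P that no observed BH formed below the mass threshold."""
    return float(np.prod([p_above_threshold(t) for t in t_th_list]))

def p_value_to_sigma(p):
    """Two-sided Gaussian-equivalent significance (0.0083 -> about 2.64 sigma)."""
    return norm.isf(p / 2.0)

# Illustration with made-up threshold delay times t_th (one per event, in Gyr):
t_th_example = np.full(72, 11.0)        # hypothetical values, not the paper's
P = combined_p_value(t_th_example)
print(P, p_value_to_sigma(P))
```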
The probability p(m) for a BH of observed mass m (in solar mass units) can be very well approximated in the reference case by the following function p(m) = C \ln\left(A - B\, m^{-1/2}\right) , with A=26.72, B=30.88, C=0.3096. The form of this function is suggested by the analytical integration of Eq. (<ref>) and Eq. (<ref>) for a pure CDM model and a 1/t_d distribution; the coefficients are then obtained as a best fit to the actual p_i values. § CONSTRAINTS USING THE POWER-LAW-PLUS-PEAK DISTRIBUTION §.§ General procedures We now move to the PLPP method. The true population of merged BBHs is not well described by the detected BBHs since detection bias plays an important role. In particular, it is known that it is easier to detect massive BBH systems than low-mass systems: many low-mass BBH mergers are expected to happen but are undetected. A successful profile for the primary mass (m_1) of merged BBHs is the power-law-plus-peak (PLPP) one, as proposed in <cit.> and analysed with current data in <cit.>. We consider this profile with their most probable parameter values <cit.>. The PLPP is a combination of a power law, described by β(m_1), a Gaussian peak given by G(m_1), and a smoothing function S(m_1) that smooths the minimum-mass probability transition. The PLPP depends on seven parameters to describe the m_1 distribution: the power α, the minimum and maximum masses (m_ min, m_ max), the Gaussian mean and standard deviation (μ, σ), the smoothing parameter δ_m, and the λ parameter that adjusts the relative importance of the power law and the Gaussian peak. The peak is interpreted as a consequence of pair-instability supernovae <cit.>. The smoothing function S is introduced since the most probable m_1 values are not expected to be at the minimum m_ min: expectations from X-ray binaries and simulations <cit.> suggest a smoother transition. Explicitly, the PDF reads \pi(m_1) \propto (1-\lambda)\,\beta(m_1)\,S(m_1) + \lambda\, G(m_1)\,S(m_1) , with \beta(m_1) = \frac{\alpha-1}{m_{\rm min}^{1-\alpha} - m_{\rm max}^{1-\alpha}}\, m_1^{-\alpha} , G(m_1) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(m_1-\mu)^2}{2\sigma^2} \right) , S(m_1) = \begin{cases} \left[ 1 + \exp\left( \frac{\delta_m}{\delta m_1} + \frac{\delta_m}{\delta m_1 - \delta_m} \right) \right]^{-1} , & \delta m_1 < \delta_m \\ 1 , & \delta m_1 \ge \delta_m \end{cases} where δ m_1 ≡ m_1 - m_ min. It is also imposed that π(m_1) = 0 for m_1 < m_ min or m_1 > m_ max. The parameters are found from GW observational data, taking into account the detector bias, through a hierarchical Bayesian approach <cit.>. The PLPP model represents the source-frame mass distribution, which is corrected for the selection effects. The minimum and maximum masses of the binary system are also parameters in this model and their recovery depends on the prior distribution used in the population inference. In the GWTC-3 analysis, a uniform prior on the minimum mass, m_1, min∈ [2,10] M_⊙, was used. For the GWTC-3 data, the eight parameters that describe the m_1 and m_2 distributions are <cit.> (90% credible intervals): α= 3.40_-0.49^+0.58, δ_m = 4.8_-3.2^+3.3 M_⊙, m_min = 5.08_-1.5^+0.87 M_⊙, m_max = 86.9_-9.4^+11 M_⊙, μ= 33.7_ -3.8^+2.3 M_⊙, σ= 3.6_-2.1^+4.6 M_⊙ , λ= 0.039_-0.026^+0.058 , β_q = 1.1_-1.3^+1.8 . The β_q parameter, only needed to describe the m_2 distribution, will be discussed later on. From <cit.> we also infer the maximum likelihood values, which read α = 3.55, m_ min = 4.82 M_⊙, m_ max = 83.14 M_⊙ , δ_m = 5.45 M_⊙, μ = 34.47 M_⊙, σ = 1.87 M_⊙ , λ = 0.019, β_q = 0.76 .
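To make the procedure of the next subsection concrete, here is a minimal, self-contained sketch of the PLPP density and of the Monte Carlo product with the mass factor. It uses the maximum-likelihood parameters quoted above, but the sampling scheme, function names, and the use of a single mean z_m are our own simplifications rather than the authors' pipeline (their codes are linked in the Introduction).

```python
import numpy as np

# Maximum-likelihood PLPP parameters quoted in the text (m_1 distribution)
ALPHA, M_MIN, M_MAX = 3.55, 4.82, 83.14
DELTA_M, MU, SIGMA, LAMBDA = 5.45, 34.47, 1.87, 0.019

def smoothing(m):
    """Low-mass smoothing S(m_1) of the PLPP model."""
    dm = m - M_MIN
    s = np.zeros_like(m)
    inside = (dm > 0) & (dm < DELTA_M)
    f = np.exp(DELTA_M / dm[inside] + DELTA_M / (dm[inside] - DELTA_M))
    s[inside] = 1.0 / (1.0 + f)
    s[dm >= DELTA_M] = 1.0
    return s

def plpp_pdf(m):
    """Unnormalized PLPP density pi(m_1); zero outside [M_MIN, M_MAX]."""
    m = np.atleast_1d(np.asarray(m, dtype=float))
    powerlaw = (ALPHA - 1.0) / (M_MIN**(1 - ALPHA) - M_MAX**(1 - ALPHA)) * m**(-ALPHA)
    gauss = np.exp(-0.5 * ((m - MU) / SIGMA)**2) / (np.sqrt(2 * np.pi) * SIGMA)
    pdf = ((1 - LAMBDA) * powerlaw + LAMBDA * gauss) * smoothing(m)
    pdf[(m < M_MIN) | (m > M_MAX)] = 0.0
    return pdf

def sample_plpp(n, rng):
    """Draw m_1 samples by inverse-transform sampling on a fine grid."""
    grid = np.linspace(M_MIN, M_MAX, 20000)
    cdf = np.cumsum(plpp_pdf(grid))
    cdf /= cdf[-1]
    return np.interp(rng.random(n), cdf, grid)

def prob_no_subthreshold(mass_factor_samples, n_events=72, m_th=2.0,
                         n_draws=200000, seed=1):
    """Estimate P that none of n_events primaries formed below m_th,
    given Monte Carlo samples of the mass factor F = m_i/m_m at a mean z_m."""
    rng = np.random.default_rng(seed)
    m_merge = sample_plpp(n_draws, rng)
    F = rng.choice(mass_factor_samples, size=n_draws)
    p_single = np.mean(F * m_merge >= m_th)   # one event formed above threshold
    return p_single**n_events
```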
Since there are correlations among the parameters, the PLPP maximum likelihood values need not be close to the central values of each parameter. Our main results use the values in Eq. (<ref>). Variations of these values will also be considered. The PLPP profile for the primary mass provides the PDF of the true m_1 distribution at any merging redshift z_m (current data do not favour a PLPP profile with redshift dependence <cit.>). The detected m_1 distribution should not match the true one since detection bias is not negligible. In particular, low-mass BHs (∼ 5-10 M_⊙) are less likely to be detected by LVK than higher-mass BHs at the same redshift. The PLPP profile depends on the proper modelling of the detector bias and on the expected BH degrees of freedom. Although CCBHs are not identical to standard (Kerr) black holes, upon merging CCBHs are expected to behave as ordinary BHs <cit.>, and hence the PLPP should properly take into account detector bias in this scenario. To test the CCBH hypothesis, we are interested in finding not the merged BBH mass profile, but its expected version at the time of the BBH formation. By taking the product distribution of the PLPP distribution and the mass factor (<ref>) we find the expected m_1 distribution at the formation time. These processes are illustrated in Fig. <ref>. More precisely, let M_1, m be a random realization of the PLPP distribution, where the index m stands for merger time, and let F_z_m be a random realization of the mass factor correction of eq. (<ref>) at given z_m. Then the m_1 distribution at BBH formation time is the distribution of the random variate M_1,i, with M_1,i = F_z_m M_1, m . Our results are found using at least 10^5 realizations of each random variable. A caveat of the above procedure is that the mass factor distribution depends on z_m, hence the m_1 distribution at formation time is z_m dependent. This dependence on z_m can be taken into account either by using the z_m values from observations or by using a mean z_m value as an approximation. The dependence on z_m is weak (see also Fig. <ref>), thus simply using an average z_m value is commonly sufficient (the quality of the approximation can be directly verified by computing any probability at the extremal z_m values). The mass factor (m_i/m_m) distribution and its dependence on z_m are shown in Fig. <ref>. In particular, for a given merger mass value m_m, the CDF coincides with the probability that the formation mass m_i was smaller than a given threshold. For any z_m, one sees that the probability of m_i/m_m < 0.05 is ≳4%. Hence, for a given observed BH with a merger mass of 40 M_⊙, the probability that the initial mass was less than 2 M_⊙ is about or larger than 4% (using the reference values). From the m_1 mass distribution at BBH formation, one can compute the probability that a merged BBH was formed with m_1 larger than a given mass threshold m_ th, denoted by p(m_ th, z_m). For the various scenarios studied here, this probability for a single event with a mass larger than 2 M_⊙ is not far from, but clearly below, unity. Since the total number of merged BBHs is larger than the total number of confidently detected mergers, a minimum bound can be found by using the confidently detected BHs from gravitational waves (denoted by N), which we will use here. Hence, the probability of a given CCBH realization being compatible with existing data, similarly to eq.
(<ref>), is P(m_1 > m_{\rm th}) = \prod_j^N p_j(m_{\rm th}, z_{m,j}) \approx p^N(m_{\rm th}, \bar{z}_m) , where \bar{z}_m is an average over all the z_m,j values. The above equation can be computed either using the redshift values of the observed merged BBHs or by using the last approximation, which only depends on \bar{z}_m. The numerical differences are small, being negligible when stating the tension in σ units. The above analysis applies for m_1 masses. Following <cit.>, the m_2 distribution is given by the following conditional probability, \pi(m_2|m_1) \propto \left( \frac{m_2}{m_1} \right)^{\beta_q} S(m_2)\, \Theta(m_1 - m_2) , where β_q is the single PLPP parameter that only appears in the m_2 distribution, S is the smoothing function as defined in (<ref>) and Θ is the Heaviside theta function. In order to find the PDF π(m_2) we marginalize over m_1, \pi(m_2) = \int_{m_{\rm min}}^{m_{\rm max}} \pi(m_2|m_1)\, \pi(m_1)\, dm_1 , where π(m_1) is given in eq. (<ref>) and π(m_2|m_1) needs to be normalized for each m_1 value. With this result, one can find the initial mass distribution and compute the probability of eq. (<ref>) for the m_2 masses. §.§ Results Using the approach illustrated in Fig. <ref>, in Fig. <ref> we show the m_1 and m_2 distributions at formation time for different parameter values. Several values of k and of the three parameters related to the t_ d distribution (t_ max, t_ min, z_ max) are considered. The latter three parameters can be tuned to reduce the probability of a BH formation with mass below 2M_⊙, but, as long as this PDF is not negligible for m < 2M_⊙, the tension between the observational data and the minimum BH mass will increase with the number of detected BHs. This figure also shows that the k=0.5 case, which was specially studied in <cit.> as a GBH model, is the safest among the considered cases. By applying the mass factor correction to the PLPP distribution (<ref>) and using eq. (<ref>), with N=72 for the m_1 BHs alone, the probability that no merged BBH was formed with mass smaller than 2 M_⊙ is P ≈ 2 × 10^-4, thus implying a minimum tension of 3.7σ for the reference values. The root of this tension is the mass factor distribution, see Fig. <ref>. If the number of detected events is doubled, and if the current mass distribution is confirmed, the tension will increase to a theoretical 5σ. This number assumes that the PLPP is exactly valid. Instead of the m_1 values one may consider the m_2 masses, which lead to stronger constraints but depend on an additional PLPP parameter, β_q. In our event selection, Table <ref>, there are 69 m_2 masses larger than 5 M_⊙: these are clearly BHs. Using the m_2 distribution, eq. (<ref>), we find that the tension becomes 4.0σ. Considering a forecast with double the data, the tension goes up to 5.8σ. Considering only data whose mass central value is larger than 5M_⊙, thereby ensuring they are BHs, the strongest constraint from the 72 events would come from considering 72 BH masses composed of 69 m_2 and 3 m_1 masses. The resulting tension slightly increases, but remains around 4.0σ. In Fig. <ref> we show how the tension increases with increasing minimum BH mass. In particular, for the minimal mass of 2.7 M_⊙, which was considered in <cit.>, the tension is larger than 4σ (using the reference values). The left plot of this figure also considers changes in w, in the context of the DEBH model. The tension decreases for larger w values, which implies lower k values. It is important to point out that the case k=0.5 is quite safe <cit.>.
The tension only becomes larger than 1σ if the minimum mass is about or larger than 4.7 M_⊙. Also, a forecast with double the observed events and the same PLPP distribution does not change appreciably this picture. To evaluate the dependence of our results on the maximum likelihood PLPP parameters (<ref>), in Fig. <ref> we consider how the CCBH tension, considering m_1 data alone, changes by considering 10^3 samples of the PLPP parameter distribution <cit.>. Although the tension does depend on the PLPP parameter values, one sees that the tension is, as expected, always larger than the one found with the direct approach. In Fig. <ref> we show exclusion plots for the DEBH and GBH models considering parameter variations within the PLPP approach and with respect to the reference values (for the 72 observed m_1 data). All the reference values changes here considered are such that the tension decreases. This figure should be compared with Fig. <ref>. Both figures show qualitatively the same behaviour, but quantitatively, as expected, the PLPP approach is stronger. The two most important parameters that may be changed to alleviate the tension are k and t_ max. For the DEBH model, the tension is reduced by increasing w, which corresponds to decreasing k, but even the case w=-0.6, which is far from the standard cosmological model, is not sufficient to drop the tension to an acceptable level. In the following, using Fig. <ref>, we highlight the necessary individual parameter ranges that reduce the reference tension from 3.7σ to an acceptable level, below 2.0σ. They are: k ≤ 1.7, t_ max < 6 Gyr, z_ max < 2. Reducing t_ min from 50 Myr to 5 Myr only marginally improves the picture. It is good to recall that we have been conservative in our bounds and that further data can quickly increase these tensions. In Fig. <ref> we show in detail the probability that there is not a single primary BH whose initial mass was smaller than the threshold (2 M_⊙), as a function of k. This is shown for current data and a forecast. § CONCLUSIONS According to a recent proposal <cit.>, BHs grow in mass due to a “cosmological coupling” and might be responsible for the cosmic acceleration. In such a scenario, dubbed cosmologically coupled BHs, or CCBH, BHs are not of the Kerr type, and are supposed to match asymptotically the cosmological background. This bold idea seems supported by recent analyses of the growth of supermassive BHs in quiescent elliptical galaxies <cit.>. In this paper we tested this hypothesis by considering the binary BHs with stellar progenitors observed with gravitational waves by the LIGO-Virgo-KAGRA (LVK) detectors. If these BHs are cosmologically coupled, they should have undergone a fast mass growth correlated with the cosmological scale factor from the moment of their formation until merging, and should have formed therefore with a mass smaller than the merger one. Since according to the current understanding there is a mass limit below which stellar BHs cannot form, the presently observed masses could be in conflict with this mass threshold. By assuming a distribution of delay times, we estimated the probability that at least one BHs observed by the GW detectors was formed below threshold if the CCBH hypothesis is correct. We considered the low threshold of 2 M_⊙. The main result is that assuming standard astrophysical parameters for the delay time distribution and the mass threshold, we find a tension at a confidence level of 2.6σ or more, using only the primary masses (m_1) data. 
More specifically, we obtain 2.6σ for the direct method of Sec. <ref>, and 3.7σ for the method based on the power-law-plus-peak distribution of Sec. <ref>. Stronger bounds are obtained by employing the secondary masses, namely approximately 3.0 and 4.0σ, respectively. The result depends on a number of astrophysical parameters, so we explore various possibilities to ease the tension. Of particular relevance is to notice that, if the maximum time between the binary BH formation and their merger (t_ d) is not larger than 8.7 Gyr (in the direct method) or 6 Gyr (in the PLPP method), the tensions essentially disappear. Finally, for the CCBH variation studied in <cit.>, whose BH masses increase more slowly (k=0.5), we found no relevant tension with the minimum BH mass. This conclusion is also valid for the case k=1, which was recently studied in <cit.>. It is clear that the physics of CCBHs is yet to be understood in any detail, and therefore the assumptions about minimal mass and evolution might need to be reconsidered in the future. Moreover, if the LVK BHs are of primordial origin, then a completely different analysis would be needed (see e.g. <cit.>). The current O4 run of the LVK detectors is expected to more than double the number of BH mergers by the end of 2024. With this extended dataset, it will be possible to rule out a much larger fraction of the parameter space. We thank Kevin Croker and Valerio Faraoni for very useful comments and feedback. We also thank Riccardo Sturani for several discussions about this project. LA acknowledges support from DFG project 456622116. DCR acknowledges partial support from CNPq (Brazil) and FAPES (Brazil) (TO 1020/2022, 976/2022, 1081/2022). MQ is supported by the Brazilian research agencies FAPERJ, CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) and CAPES. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. We acknowledge support from the CAPES-DAAD bilateral project “Data Analysis and Model Testing in the Era of Precision Cosmology”. § CHANGING THE DELAY TIME DISTRIBUTION The delay time distribution is clearly a crucial assumption, so we briefly discuss here the impact of changing the 1/t_d slope. For this appendix we adopt the more conservative direct method of Sec. <ref>. Ref. <cit.> shows a tendency for low-mass BHs to be formed via the common envelope channel, and this channel would have a delay time distribution steeper than 1/t_d (i.e., shorter delay times on average). The full functional form of the relation between delay time and mass is unknown and it appears difficult to model exactly. Here we limit ourselves to a preliminary investigation. We modify the t_d distribution by assuming a t_d^-β profile with β slightly larger than unity (e.g. β=1.1-1.3), i.e., shorter delay times on average. Looking at the results of App. B of <cit.>, we see that such a steeper power-law distribution approximates the predicted behavior for masses below 30M_⊙. For simplicity, we assume that this power law extends to all masses: this has anyway very little impact since large masses are not the main drivers of our statistics. In Fig. <ref> we show the probability contour plot for w, β. For β≥ 1.2, the rejection level for w=-1 decreases below 2σ, bringing the DEBH model into the non-rejection region. Therefore, as far as current data are concerned, such steeper power laws might alleviate or solve the tension.
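For the modified distribution Ψ(t_d) ∝ t_d^-β, the cumulative probability entering p_i can be written in closed form. The following sketch assumes the same [t_min, t_x] support as in the main text and is only meant to show that a steeper slope increases the per-event probability of forming above threshold; the β values are illustrative.

```python
import numpy as np

def cdf_powerlaw_delay(t, beta=1.2, t_min=0.05, t_x=13.0):
    """CDF of a delay-time distribution Psi(t_d) ~ t_d^(-beta) on [t_min, t_x] (Gyr).
    For beta = 1 this reduces to the log-uniform case used in the main text."""
    t = np.clip(t, t_min, t_x)
    if np.isclose(beta, 1.0):
        return np.log(t / t_min) / np.log(t_x / t_min)
    return (t**(1 - beta) - t_min**(1 - beta)) / (t_x**(1 - beta) - t_min**(1 - beta))

# A steeper slope gives more weight to short delays, so a larger fraction of
# events has t_d < t_th and the per-event probability p_i increases:
for beta in (1.0, 1.1, 1.2, 1.3):
    print(beta, cdf_powerlaw_delay(11.0, beta=beta))
```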
[1] K. S. Croker and J. L. Weiner, Astrophys. J. 882, 19 (2019), arXiv:2107.06643 [gr-qc].
[2] K. S. Croker, J. Runburg, and D. Farrah, Astrophys. J. 900, 57 (2020), doi:10.3847/1538-4357/abad2f.
[3] D. Farrah et al., Astrophys. J. Lett. 944, L31 (2023), arXiv:2302.07878 [astro-ph.CO].
[4] V. Faraoni and A. Jacques, Phys. Rev. D 76, 063510 (2007), arXiv:0707.1350 [gr-qc].
[5] K. Croker, K. Nishimura, and D. Farrah, Astrophys. J. 889, 115 (2020), arXiv:1904.03781 [astro-ph.CO].
[6] D. Farrah, S. Petty, K. S. Croker, G. Tarlé, M. Zevin, E. Hatziminaoglou, F. Shankar, L. Wang, D. L. Clements, A. Efstathiou, M. Lacy, K. A. Nishimura, J. Afonso, C. Pearson, and L. K. Pitchford, Astrophys. J. 943, 133 (2023), arXiv:2212.06854 [astro-ph.GA].
[7] L. Lei et al., arXiv:2305.03408 [astro-ph.CO] (2023).
[8] F. Özel, D. Psaltis, R. Narayan, and J. E. McClintock, Astrophys. J. 725, 1918 (2010), arXiv:1006.2834 [astro-ph.GA].
[9] R. Abbott et al. (KAGRA, VIRGO, and LIGO Scientific Collaborations), Phys. Rev. X 13, 011048 (2023), arXiv:2111.03634 [astro-ph.HE].
[10] L. M. de Sá, A. Bernardo, R. R. A. Bachega, J. E. Horvath, L. S. Rocha, and P. H. R. S. Moraes, Astrophys. J. 941, 130 (2022), arXiv:2211.01447 [astro-ph.HE].
[11] R. Abbott et al. (LIGO Scientific, VIRGO, and KAGRA Collaborations), arXiv:2212.01477 [astro-ph.HE] (2022).
[12] C. Ye and M. Fishbach, Astrophys. J. 937, 73 (2022), arXiv:2202.05164 [astro-ph.HE].
[13] I. Legred, K. Chatziioannou, R. Essick, S. Han, and P. Landry, Phys. Rev. D 104, 063003 (2021), arXiv:2106.05313 [astro-ph.HE].
[14] R. W. Romani, A. V. Filippenko, J. M. Silverman, S. B. Cenko, J. Greiner, A. Rau, J. Elliott, and H. J. Pletsch, Astrophys. J. Lett. 760, L36 (2012), arXiv:1210.6884 [astro-ph.HE].
[15] L. S. Rocha, R. R. A. Bachega, J. E. Horvath, and P. H. R. S. Moraes, arXiv:2107.08822 [astro-ph.HE] (2021).
[16] K. S. Croker, M. J. Zevin, D. Farrah, K. A. Nishimura, and G. Tarle, Astrophys. J. Lett. 921, L22 (2021), arXiv:2109.08146 [gr-qc].
[17] J. Sadeghi, S. Noori Gashti, M. R. Alipour, and M. A. S. Afshar, arXiv:2305.12545 [gr-qc] (2023).
[18] P. P. Avelino, arXiv:2303.06630 [gr-qc] (2023).
[19] S. H. Maes, OSF preprint (2023), doi:10.31219/osf.io/369pd.
[20] T. Mistele, Res. Notes AAS 7, 101 (2023), arXiv:2304.09817 [gr-qc].
[21] C. L. Rodriguez, Astrophys. J. Lett. 947, L12 (2023), arXiv:2302.12386 [astro-ph.CO].
[22] R. Andrae and K. El-Badry, Astron. Astrophys. 673, L10 (2023), arXiv:2305.01307 [astro-ph.CO].
[23] S. Ghodla, R. Easther, M. M. Briel, and J. J. Eldridge, arXiv:2306.08199 [astro-ph.CO] (2023).
[24] R. Abbott et al. (KAGRA, Virgo, and LIGO Scientific Collaborations), Phys. Rev. D 104, 022004 (2021), arXiv:2101.12130 [gr-qc].
[25] K. Belczynski, D. E. Holz, T. Bulik, and R. O'Shaughnessy, Nature 534, 512 (2016), arXiv:1602.04531 [astro-ph.HE].
[26] M. Mapelli, Front. Astron. Space Sci. 7, 38 (2020), arXiv:2105.12455 [astro-ph.HE].
[27] L. A. C. van Son, S. E. de Mink, T. Callister, S. Justham, M. Renzo, T. Wagg, F. S. Broekgaarden, F. Kummer, R. Pakmor, and I. Mandel, Astrophys. J. 931, 17 (2022), arXiv:2110.01634 [astro-ph.HE].
[28] Z. Chen, Y. Lu, and Y. Zhao, Astrophys. J. 940, 17 (2022), arXiv:2210.09892 [astro-ph.HE].
[29] M. Fishbach and V. Kalogera, Astrophys. J. Lett. 914, L30 (2021), arXiv:2105.06491 [astro-ph.HE].
[30] R. Abbott et al. (LIGO Scientific, VIRGO, and KAGRA Collaborations), arXiv:2111.03606 [gr-qc] (2021).
[31] R. Abbott et al. (LIGO Scientific and Virgo Collaborations), Astrophys. J. Lett. 913, L7 (2021), arXiv:2010.14533 [astro-ph.HE].
[32] C. Talbot and E. Thrane, Astrophys. J. 856, 173 (2018), arXiv:1801.02699 [astro-ph.HE].
[33] LIGO Scientific, Virgo, and KAGRA Collaborations, Zenodo, v2 (2023), doi:10.5281/zenodo.7843926.
[34] M. Cadoni, A. P. Sanna, M. Pitzalis, B. Banerjee, R. Murgia, N. Hazra, and M. Branchesi, arXiv:2306.11588 [gr-qc] (2023).
http://arxiv.org/abs/2307.02663v1
20230705213836
Convergence of Communications, Control, and Machine Learning for Secure and Autonomous Vehicle Navigation
[ "Tengchan Zeng", "Aidin Ferdowsi", "Omid Semiari", "Walid Saad", "Choong Seon Hong" ]
cs.IT
[ "cs.IT", "cs.AI", "cs.CR", "math.IT" ]
Convergence of Communications, Control, and Machine Learning for Secure and Autonomous Vehicle Navigation Tengchan Zeng1, Aidin Ferdowsi1, Omid Semiari2, Walid Saad1, and Choong Seon Hong3 1Wireless@VT, Department of Electrical and Computer Engineering, Virginia Tech, Arlington, VA, USA 2Department of Electrical and Computer Engineering, University of Colorado, Colorado Springs, CO, USA 3Department of Computer Science and Engineering, Kyung Hee University, Yongin, South Korea E-mails: 1{tengchan, aidin, walids}@vt.edu, 2osemiari@uccs.edu, 3cshong@khu.ac.kr August 1, 2023 Connected and autonomous vehicles (CAVs) can reduce human errors in traffic accidents, increase road efficiency, and execute various tasks ranging from delivery to smart city surveillance. Reaping these benefits requires CAVs to autonomously navigate to target destinations. To this end, each CAV's navigation controller must leverage the information collected by sensors and wireless systems for decision-making on longitudinal and lateral movements. However, enabling autonomous navigation for CAVs requires a convergent integration of communication, control, and learning systems. The goal of this article is to explicitly expose the challenges related to this convergence and propose solutions to address them in two major use cases: uncoordinated and coordinated CAVs. In particular, challenges related to the navigation of uncoordinated CAVs include stable path tracking, robust control against cyber-physical attacks, and adaptive navigation controller design. Meanwhile, when multiple CAVs coordinate their movements during navigation, fundamental problems such as stable formation, fast collaborative learning, and distributed intrusion detection are analyzed. For both cases, solutions using the convergence of communication theory, control theory, and machine learning are proposed to enable effective and secure CAV navigation. Preliminary simulation results are provided to show the merits of the proposed solutions. § INTRODUCTION Connected and autonomous vehicles (CAVs) are promising solutions to reduce accidents, improve traffic efficiency, and provide various services, such as the delivery of medication using aerial CAVs <cit.>. To operate effectively, CAVs must perceive and sense their surrounding environment and autonomously navigate along a predesigned path to target destinations. As shown in Fig. <ref>, for a given CAV, environmental perception is accomplished by sensors and wireless connections with surrounding objects. Along with prior knowledge of the road network, such collected situational information will be processed by a motion planner to design the target path and dynamics for navigation. Subsequently, the controller will use the difference between the current dynamic parameters (e.g., location and heading angle) and the desired targets designed by the motion planner as a feedback signal to make appropriate adjustments to the actuator commands.
Such commands are later executed by the actuator that enables a CAV to track the target path and move to the destination. Depending on whether or not a CAV coordinates its movement with surrounding CAVs in the path-tracking process, autonomous navigation can be further studied in two use cases: Uncoordinated and coordinated CAVs. However, unique system design and security challenges must be addressed for autonomous CAV navigation. For example, the CAV's wireless system is tightly integrated with its control and autonomy mechanisms. In particular, as shown in Fig. <ref>, the controller may heavily rely on the information collected by the wireless system. Also, the communication network can transmit learning task data and learning models when CAVs use machine learning (ML) to complete navigation-related tasks like obstacle recognition. With such integration, the controller operation and learning process of the CAV will be affected by the performance of the wireless system. Hence, for effective CAV navigation, we must determine how the wireless system affects the navigation controller and navigation-related learning tasks while jointly designing the integrated systems. Moreover, due to their reliance on both wireless networks and sensors (see Fig. <ref>), CAVs must be robust against cyber-physical attacks. Example attacks include malicious information injections and sensor data manipulations that help adversaries take control of the CAV. Because of the difference of autonomous navigation between uncoordinated and coordinated CAVs, the aforementioned challenges vary from one use case to another. Therefore, there is a need to study the interconnection between communications, control, and ML and use such interconnection to develop a convergent integration among interconnected systems for a secure and autonomous vehicle navigation system among uncoordinated and coordinated CAVs. Although the wireless system is closely integrated with control and learning systems in CAV navigation, remarkably, most prior art studies each system separately. For example, the authors in <cit.> discuss future trends in the communications and processing technologies to enable CAVs, such as 5G and B5G requirements. In <cit.>, a comprehensive and thorough overview of the current state of vehicle control technology is presented with a focus on trajectory tracking control at the microscopic level and collaborative control at the macroscopic level. The authors in <cit.> surveyed different cloud-based and edge-based learning strategies that can assist the perception, mapping, and location for CAV navigation. Moreover, although many works looked at CAV path tracking security (e.g., see <cit.>, <cit.>, and references therein), these works often ignore the interdependence of cyber and physical systems in CAVs. Clearly, none of the prior works in <cit.> explicitly studied the close interconnection between systems for CAV navigation, as they often solely study one system and assume other systems to be blackboxes. The main contribution of this article is a comprehensive study on the convergence of communications, control, and ML for secure and autonomous navigation in uncoordinated and coordinated CAVs. In particular, we analyze fundamental problems such as stable path tracking, robust control against cyber-physical attacks, and adaptive controller design for the navigation of uncoordinated CAVs. 
Meanwhile, for coordinated CAVs such as platoons and drone swarms, we investigate the key challenges of stable formations, fast collaborative learning, and distributed intrusion detection. We also provide preliminary results to showcase the benefits of the proposed solutions. Finally, we present future research directions and open problems to further improve the joint system design. Note that although some recent surveys like <cit.> looked at CAVs, they neither explicitly studied navigation in both uncoordinated and coordinated CAVs nor considered joint system design and cyber-physical attacks. § CONVERGENCE OF COMMUNICATION, CONTROL, AND LEARNING FOR UNCOORDINATED CAVS We study uncoordinated scenarios where each CAV operates independently and tracks its own path without coordination with others. In particular, we investigate three key problems: stable path tracking, security, and adaptive navigation. §.§ Joint Communication and Control for Stable Path Tracking Although the environmental perception collected by wireless networks is critical for CAVs to decide the proper longitudinal and lateral movements for path tracking, such dynamic information will be inevitably subject to wireless delay and packet loss. Here, delayed or wrong information can impair the stability of CAVs which is defined as the ability of CAVs to stabilize their dynamic parameters around targets. To better leverage wireless links for autonomous navigation and path tracking, there is a need to determine the interconnections between the controller and the wireless network and, then, use them for joint communication and control design to ensure stable path tracking. As a first step of joint system design, a time-delay system can be built to identify the synergies between control system and wireless network. This is needed because a time-delay system can capture how dynamic errors between actual state and target state designed by motion planner change over time under the impact of wireless factors, like delay. The Lyapunov-Krasovskii and Lyapunov-Razumikhin theorems <cit.> can then be used to build a stability analysis framework for the time-delay system. These theorems can help analyze the stability of time-delay systems and determine the wireless network requirements, such as delay threshold, which can prevent the instability of the control system. Finally, the wireless network and control system can be jointly optimized for stable path tracking. For example, the controller can choose appropriate values for control parameters in its control law to ease wireless network requirements. Meanwhile, the wireless network can optimize its design at the physical, link, and network layers so as to meet the control system's wireless network requirements. The left block diagram in Fig. <ref> shows how to perform a joint design of the communication and control systems for a stable path tracking. To test the effectiveness of this joint communication and control for path tracking, we simulate a Manhattan mobility model and a CAV that uses a pure pursuit controller for path tracking <cit.>. By using the block diagram on the left side of Fig. <ref>, we derive the delay requirement imposed by the controller's stability. For the joint system design, we use a dual method to derive the optimal headway distance for the pure pursuit controller. 
For the wireless system design, we leverage conditional value-at-risk and a branch-and-bound algorithm to allocate transmit power so as to maximize the number of vehicular communication links that meet the delay requirements. As observed from the simulation results, the system with the joint design outperforms baselines that optimize the communication and control systems separately. Clearly, the convergent integration between communication and control systems can yield a more reliable wireless network to support stable navigation. §.§ ML for Robust CAV Control under Cyber-Physical Attacks The reliance on sensors and communication links exposes a CAV to cyber-physical attacks by adversaries that seek to take control of the CAV by remotely manipulating its data. Thus, the CAV's data processing units must be robust to such attacks. One key threat is the data injection attack, in which an adversary manipulates CAV sensor readings such that the CAV deviates from its designated path or from its designated spacing to surrounding objects. This threat model is important to study because it can show the direct impact of a cyber attack on the physical operation of the CAV. Furthermore, as a robustness mechanism in the state estimation process, a CAV's data fusion center can assign different weights to its sensors. The state can include the CAV's location, speed, acceleration, and wheel angle. Therefore, the CAV's estimated state depends on the data fusion center's weighting strategy and on the attacker's dynamic data injection strategy. To find a robust data fusion strategy, a natural way is to study the interaction between the attacker and the data fusion center using game theory. However, in a CAV scenario, due to the large and time-dependent CAV state space, game-theoretic analysis is challenging <cit.>. Instead, ML tools such as long short-term memory (LSTM) cells and reinforcement learning (RL) can be used to derive an effective CAV data fusion strategy <cit.>. An LSTM block is a deep recurrent neural network that can store information for long periods of time and, thus, can learn long-term dependencies within a sequence. Therefore, LSTMs are useful in learning temporal interdependencies between the CAV's state values from previous time steps and in summarizing the CAV's motion. Then, RL can be used by the fusion center to decide on the best action based on the LSTM summary. The RL component seeks to find a data fusion strategy that minimizes the long-term effect of the data injection attack. Training this combined LSTM and RL scheme allows designing a data fusion center for CAVs that can robustly control the CAV and estimate its state. The middle block diagram in Fig. <ref> shows an LSTM-based robust CAV controller from <cit.>. In the simulation, we study a data injection attack on a CAV within a scenario in which each CAV has four sensors and the attacker can inject data into sensors 1, 3, and 4 to cause a deviation in the spacing between CAVs on a street. We observe that the LSTM-based controller can detect the attack on the compromised sensors, thus setting the weight of sensor 2, w_2, close to 1 while the other weights are set to values close to 0. Therefore, by using LSTMs to extract temporal features from the CAV's state and RL to determine the data fusion strategy, one can identify the cyber-physical interdependence in the attacker's actions and goals and develop a convergent approach to ensure that the CAV's motion becomes robust against security attacks.
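The robust fusion design described above couples an LSTM state summary with an RL-chosen weighting strategy. As a much simpler illustration of the underlying idea, namely down-weighting sensors that disagree with a prediction of the vehicle's motion, the Python sketch below replaces the learned components with a fixed one-step motion model; the four sensors, the +5 m bias injected into sensors 1, 3, and 4 halfway through, and all other numbers are illustrative placeholders rather than the setup of the cited simulation.

import numpy as np

rng = np.random.default_rng(0)
n_steps = 200
true_spacing = 20.0 + 0.1 * np.arange(n_steps)          # ground-truth spacing (m)
readings = true_spacing + rng.normal(0.0, 0.2, size=(4, n_steps))
readings[[0, 2, 3], 100:] += 5.0                         # bias injected into sensors 1, 3, 4

def fuse(readings, drift=0.1):
    # Weight each sensor by its agreement with a one-step prediction of the state.
    n_sensors, n_steps = readings.shape
    fused = np.empty(n_steps)
    fused[0] = readings[:, 0].mean()
    for t in range(1, n_steps):
        pred = fused[t - 1] + drift                      # simple motion model in place of an LSTM
        resid = np.abs(readings[:, t] - pred)
        w = 1.0 / (1e-3 + resid**2)                      # large residual -> small weight
        w /= w.sum()
        fused[t] = w @ readings[:, t]
    return fused

est = fuse(readings)
print("mean spacing error after the attack: %.3f m" % np.abs(est[100:] - true_spacing[100:]).mean())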
§.§ FL for Adaptive Navigation Controller Design Another key CAV navigation challenge is designing controllers that can accurately execute real-time control decisions. Here, conventional feedback controllers can fail to adapt the CAVs to various road types, dynamic road traffic, and varying weather and payloads, since they are usually designed for a fixed vehicle model and road condition. Meanwhile, even if popular learning methods (e.g., neural networks) can be used to design adaptive navigation controllers, the local data can be insufficient and skewed for training the learning model. This is due to the limited on-chip memory available on board CAVs and the fact that an individual CAV can only store data pertaining to its most recent travels. Therefore, a distributed ML framework among multiple CAVs will be needed to properly design the CAVs' adaptive navigation controllers. To this end, one can leverage the wireless connectivity of CAVs and use federated learning (FL) to enable a group of CAVs to jointly train the learning models used by their controllers. In FL, these CAVs can train the controller models based on their local data and, then, a parameter server, e.g., a base station (BS), can aggregate the trained controller models from the CAVs. This process is repeated iteratively between the CAVs and the parameter server over consecutive rounds until all controllers converge to the optimal learning model <cit.>. As such, the learning model can be trained among multiple CAVs, and such a trained model can enable a particular CAV's controller to adapt to new traffic scenarios previously unknown to it but experienced by other CAVs. To guarantee good convergence, the interconnection between the wireless, learning, and control systems must be considered. Specifically, the wireless network must be designed to achieve a high participation of CAVs in FL even in the presence of mobility and channel uncertainty. Meanwhile, since data quality changes from one CAV to another, the FL algorithm design must account for unbalanced and non-independent and identically distributed (non-IID) local data across CAVs. In this way, the CAVs can effectively converge to the optimal navigation controller and quickly adapt to dynamic road traffic. The right block diagram in Fig. <ref> shows an autonomous controller design for CAVs. In the simulation, we assume that the CAV uses an adaptive proportional-integral-derivative (PID) controller to adjust its longitudinal movement, where the control parameters are tuned by an artificial neural network (ANN) auto-tuning unit. To train the ANN model parameters, we design a novel FL algorithm, namely the dynamic federated proximal (DFP) algorithm, that accounts for CAV mobility, fading, and the unbalanced and non-IID data across CAVs. Moreover, an incentive mechanism is designed to determine the transmit power allocation strategy and increase the CAVs' participation in FL. We use real vehicular data traces (i.e., Berkeley DeepDrive <cit.>) for our framework, and we observe that the convergence speed of the DFP algorithm with our incentive mechanism design can be improved by 40% compared with the baseline, justifying the merits of considering the convergent integration between the wireless network, FL, and control system for adaptive navigation controller design.
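As a schematic illustration of the federated training loop described above (and not of the DFP algorithm itself), the sketch below implements a FedProx-like local update with a proximal term and a server-side aggregation weighted by local sample counts; the linear model, the unbalanced data sizes, and the hyper-parameters are placeholders chosen only to make the example self-contained.

import numpy as np

def aggregate(local_models, sample_counts):
    # Server-side step: average local models, weighting each CAV by its sample count
    w = np.asarray(sample_counts, dtype=float)
    w /= w.sum()
    return sum(wi * m for wi, m in zip(w, local_models))

def local_update(global_model, x, y, mu=0.1, lr=0.1, epochs=10):
    # One CAV's local training of a linear model, with a FedProx-style proximal term
    # (mu/2)*||theta - theta_global||^2 that limits drift on unbalanced, non-IID data
    theta = global_model.copy()
    for _ in range(epochs):
        grad = x.T @ (x @ theta - y) / len(y) + mu * (theta - global_model)
        theta -= lr * grad
    return theta

rng = np.random.default_rng(1)
true_theta = np.array([0.8, -0.3])
cavs = []
for n in (50, 200, 20):                        # unbalanced local datasets across CAVs
    xi = rng.normal(size=(n, 2))
    cavs.append((xi, xi @ true_theta + rng.normal(0.0, 0.05, n)))

global_theta = np.zeros(2)
for _ in range(30):                            # communication rounds
    local_models = [local_update(global_theta, xi, yi) for xi, yi in cavs]
    global_theta = aggregate(local_models, [len(yi) for _, yi in cavs])
print("global model after training:", np.round(global_theta, 3))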
§ CONVERGENCE OF COMMUNICATION, CONTROL, AND LEARNING FOR COORDINATED CAVS Next, we discuss autonomous navigation for coordinated CAVs where a group of CAVs coordinate their movements to complete sophisticated missions in an uncertain environment. Example applications are CAV platoons and drone swarms. In a CAV platoon, to increase the road capacity, multiple CAVs operate together and maintain small spacing between each other. In a drone swarm, a group of aerial CAVs fly along predesigned paths while keeping a target formation to perform collaborative tasks, e.g., cooperative obstacle recognition. In such applications, several joint system design and security challenges must be addressed: stable formation, fast collaborative learning, and distributed intrusion detection. §.§ Joint Communication and Control for Stable Formation For operational safety, autonomous navigation usually requires coordinated CAVs to maintain a stable formation. For example, a stable formation for a platoon means that each CAV maintains a target speed and safe spacing to its preceding CAV. To achieve such formations, CAVs will use sensors and wireless systems to collect dynamics (e.g., speed and location) of nearby CAVs. As already discussed, the wireless delay and packet loss can jeopardize the CAVs' stable operation. Hence, there is a need to study how the wireless network affects the stable formation for coordinated CAVs. Similar to Section II-A, a time-delay system can be built for the group of CAVs so that the delayed dynamic errors (such as velocity and spacing errors) are explicitly considered. To analyze the impact of the wireless network on the stable formation, there are two types of stability to consider for coordinated CAVs: Plant and string stability. To secure plant stability where all CAVs maintain the same speed and a target distance from surrounding CAVs, the Lyapunov-Krasovskii and Lyapunov-Razumikhin theorems can be used. Meanwhile, in order to guarantee string stability where the disturbances (e.g., in velocity or distance) of nearby vehicles are not amplified in the CAV group, transfer function from control theory can be used. This is due to the fact that the transfer function can capture how the disturbances propagate among adjacent coordinated CAVs. In this case, a non-increasing disturbance can thereby equal to a monotonically decreasing transfer function in coordinated CAVs. Based on plant and string stability analysis, the delay threshold can be derived so that coordinated CAVs can operate with a stable controller. This characterized delay can enable the joint communication and control system design where the design of the control system is optimized to relax the delay constraints on the wireless network and the wireless network can be optimized to improve its reliability to support controller's stability. The left block diagram in Fig. <ref> shows the joint communication and control design which keeps the stable formation for a group of n+1 coordinated CAVs. In the simulation, we consider a highway scenario for a platoon where the acceleration of each platoon CAV is determined by a proportional controller and the distribution of vehicles generating interference to platoon CAVs is modeled by stochastic geometry <cit.>. In particular, we calculate the control delay requirement and the wireless reliability whose performance is shown in the simulation under different proportional controller parameters a, b ∈ [2,4] <cit.>. 
We can observe that the platoon with the optimized controller design can achieve a higher wireless reliability for wireless networks compared with platoons with other control parameters. Clearly, the controller design directly affects how reliable the wireless networks is to support a stable formation of coordinated CAVs. In other words, a stable formation of coordinated CAVs will require solutions using the convergence of communication theory and control theory. §.§ Joint System Design for Fast Collaborative Learning ML will play an important role to execute collaborative navigation-related tasks such as cooperative obstacle recognition for coordinated CAVs (note that CAVs must be coordinated to keep safe spacing and ensure operational safety while learning). One common approach is using cloud-based ML where the CAV data is sent to a cloud for data analysis and inference. However, continuously transferring the data of many CAVs to cloud servers will not be resource-efficient if even feasible, since it will impose an unmanageable traffic load over the access network. Also, as the shared data (like trajectory information) can be privacy-sensitive, malicious servers can intrude the CAVs' privacy, e.g., by tracking the CAV's location. To facilitate collaborative ML while maintaining the privacy of CAV navigation data, FL can be used. Essentially, FL divides coordinated CAVs into a global CAV and multiple local CAVs and uses an iterative update scheme whose details can be found in Section II-C. However, as FL models are transmitted via communication links, the learning performance hinges on underlying wireless network <cit.>. For example, due to the limited battery life and transmit power, not all CAVs can participate in the FL training and some important local model updates can be discarded, leading to a poor execution performance for these navigation-related tasks. Meanwhile, the network design must take into account the communication requirement imposed from the control system to maintain operation safety. Hence, the impact of wireless network on the convergence of FL and safe spacing between CAVs must be explicitly considered when using FL for collaborative navigation related tasks. The middle block diagram in Fig. <ref> shows the FL implementation over n+1 coordinated CAVs. Here, we implement FL for a swarm of wireless-connected drones with the leader as the global CAV and followers as local CAVs. Then, we perform a rigorous convergence analysis to derive the convergence time while considering wireless factors, e.g., random antenna angle deviations σ^2, and the minimum number of CAVs to achieve a target convergence. We observe from the simulation results that, when the variance of antenna angle deviations increases, FL needs more time to converge. This is because, when the variance increases, the antennas at the transmitting and receiving drones will be less aligned, reducing the antenna gain and increasing the transmission delay. These results show that, when CAVs use FL for collaborative navigation, the FL performance directly depends on the underlying wireless setup. This connection is the foundation for convergently integrated system design when optimizing FL performance for coordinated CAVs. §.§ Distributed Generative Adversarial Networks for Intrusion Detection in Multi-agent CAVs The operation of coordinated CAVs requires addressing security challenges with a minimum delay to maintain coordination. 
By manipulating a CAV's controller in coordinated CAVs, an adversary can control other CAVs as well, thus causing cascading failures. To detect intrusions in multi-agent CAVs such as CAV platoons or drone swarms, an intrusion detection system (IDS) is typically implemented at a central node such as a BS that receives CAV state information. However, such centralized IDSs cannot detect intrusions or anomalies on time due to the large-scale nature of multi-agent CAVs and communication delays. Therefore, in coordinated CAVs, IDSs must operate in a distributed fashion with minimum dependence on a central node. Generative adversarial networks (GANs) are ANN architectures that can be used as IDSs. A GAN architecture consists of 1) a generator ANN that tries to generate real-like anomalous data samples and 2) a discriminator ANN that discriminates between the anomalous data samples generated by the generator and the real normal data samples. The generator trains the discriminator against anomalies by generating numerous anomalous data samples, which helps the discriminator explore the anomaly space. This GAN architecture can be implemented in a distributed fashion in multi-agent CAVs where every CAV owns a discriminator and a generator. Next, each CAV can train its ANNs on its normal state information and share its ANN parameters with other CAVs using FL. Here, each CAV learns its own normal state and the other CAVs' normal states. Using this framework, each CAV will own a discriminator that monitors its own and neighboring CAVs' behavior in coordinated tasks without a central node. The right block diagram in Fig. <ref> shows a distributed GAN-based IDS for CAVs. Specifically, we consider that each CAV uses a GAN to learn its own and its neighbors' normal state and uses this GAN to detect anomalies and intrusions. We simulate an internal attack (attack on a given CAV state) and an external attack (attack on the state of the neighboring CAVs of a given CAV) <cit.>. We observe that a distributed GAN-based IDS has a higher precision in detecting internal and external attacks on CAVs compared to central and standalone IDSs. This is because a distributed IDS can identify the interconnection between cyber and physical systems among CAVs and allows each CAV to learn the normal state of not only itself but also its neighboring CAVs. However, standalone IDSs do not consider neighboring CAVs, while a centralized IDS considers all of the CAVs as one system and neglects the unique behavior of each CAV. Therefore, by using GANs, we can implement distributed IDSs for coordinated CAVs such that each CAV can detect intrusions on its neighboring CAVs without any dependence on a central unit, which reduces communication delays. § FUTURE DIRECTIONS AND OPEN PROBLEMS ON JOINT SYSTEM DESIGN AND SECURITY DESIGN §.§ Joint System Design for Advanced Controller Design Classical control laws are usually used to deal with nominal operating conditions, like flat roads for ground vehicles and large, open spaces for aerial vehicles. However, to deal with more extreme operating conditions, like urban areas, there is a need for more sophisticated models for the controller design, like rear-wheel-position-based feedback. When designing these advanced controllers, using machine learning frameworks to train on local data and optimize the controller model is gaining momentum.
In this case, all the challenges pertaining to the interconnection between communication, control, and learning will still persist, i.e., the impact of the wireless network on learning performance and controller stability, as well as the design of a proper learning mechanism for CAVs to facilitate fast convergence to the optimal controller model. Hence, the goal of this future work is to use the interconnection between the control, learning, and wireless frameworks for advanced controller design. §.§ Advanced FL for CAVs As an extension of the FL used in Sections II-C and III-B, advanced FL strategies can be further used to optimize the CAV system design. For example, hierarchical FL can be used to aggregate the knowledge learned by more CAVs. In particular, each BS will update a model, called the intermediate model, based on model updates received from its associated CAVs, and the BSs will also send the updated intermediate models to a central entity, hosted on an edge or a regional cloud server, which will generate a new global model and transmit it back to the BSs. Also, since conventional FL frameworks suffer from straggler effects and partial participation of CAVs in the learning process, there is increasing interest in asynchronous FL, where the BS aggregates the received parameters and generates a new global model in an asynchronous fashion. As with the conventional FL used in Sections II-C and III-B, the impact of the wireless network on the convergence performance must be explicitly studied for advanced FL frameworks. Also, when used for navigation controller design, these advanced FL mechanisms must be properly designed to consider the unbalanced and non-IID data, the CAVs' mobility, and the wireless fading channels. §.§ Adversarial ML in CAVs As shown in Section <ref>, CAVs can rely on ML to complete tasks like object detection, path planning, and trajectory optimization. Such reliance on ML makes CAVs vulnerable to adversarial ML that attempts to alter ML models through malicious inputs. For example, adversarial ML techniques can fool a CAV such that it will not detect an object on its path, which could cause an accident. This is because the ML models used in CAVs are usually trained in stationary and benign environments, whose behavior is assumed to remain unchanged after the ML models are trained. However, the presence of an intelligent adversary may disturb the normal behavior of the CAV environment such that the pretrained ML model would fail to fulfill its task. Adversarial training and defensive distillation are two commonly used defenses against adversarial ML. Adversarial training uses a large number of adversarial examples and explicitly trains the model not to be fooled by each of them. Moreover, in defensive distillation, the output of an ML model is probabilistic rather than a hard decision, which makes it difficult for the adversary to find adversarial inputs that lead to incorrect decisions by CAVs. Therefore, adversarial ML for CAVs is a crucial research area for CAV decision making and operation. § CONCLUSIONS In this article, we have highlighted the joint system design and security challenges in two use cases of coordinated and uncoordinated CAV navigation.
Using the convergence among communication theory, control theory, and ML, we have proposed solutions to address key challenges, such as stable path tracking, robust control against cyber-physical attacks, and navigation controller design for uncoordinated CAVs, as well as challenges pertaining to coordinated CAVs, including stable formation, fast collaborative learning, and distributed intrusion detection systems. Simulation results have been provided to verify the merits of the proposed solutions. Finally, we have outlined some of the key open problems in further optimizing the joint system design and addressing security challenges for CAV navigation. The convergence study in this article can be extended to any interconnected systems within CAVs and can be used to optimize the CAVs' operations beyond navigation.
[1] R. Hussain and S. Zeadally, "Autonomous cars: Research results, issues, and future challenges," IEEE Commun. Surveys Tuts., vol. 21, no. 2, pp. 1275-1313, Second Quarter 2019.
[2] I. W. Damaj et al., "Future trends in connected and autonomous vehicles: Enabling communications and processing technologies," IEEE Access, vol. 10, pp. 42334-42345, Apr. 2022.
[3] W. Liu et al., "A systematic survey of control techniques and applications: From autonomous vehicles to connected and automated vehicles," arXiv preprint arXiv:2303.05665, 2023.
[4] J. Zhang and K. B. Letaief, "Mobile edge intelligence and computing for the internet of vehicles," Proc. IEEE, vol. 108, no. 2, pp. 246-261, Feb. 2020.
[5] X. Sun et al., "A survey on cyber-security of connected and autonomous vehicles (CAVs)," IEEE Trans. Intell. Transp. Syst., vol. 23, no. 7, pp. 6240-6259, Jul. 2021.
[6] Z. Ju et al., "A survey on attack detection and resilience for connected and automated vehicles: From vehicle dynamics and control perspective," IEEE Trans. Intell. Veh., vol. 7, no. 4, pp. 815-837, Dec. 2022.
[7] J. Zhu et al., "Merging control strategies of connected and autonomous vehicles at freeway on-ramps: A comprehensive review," Journal of Intelligent and Connected Vehicles, vol. 5, no. 2, pp. 99-111, May 2022.
[8] K. Gu et al., Stability of Time-Delay Systems. Springer Science & Business Media, 2003.
[9] T. Zeng et al., "Joint communication and control system design for connected and autonomous vehicle navigation," in Proc. of IEEE ICC, Shanghai, China, May 2019.
[10] A. Ferdowsi et al., "Robust deep reinforcement learning for security and safety in autonomous vehicle systems," in Proc. of IEEE ITSC, Maui, HI, USA, Nov. 2018.
[11] J. Konečný et al., "Federated optimization: Distributed machine learning for on-device intelligence," arXiv preprint arXiv:1610.02527, 2016.
[12] F. Yu et al., "BDD100K: A diverse driving dataset for heterogeneous multitask learning," in Proc. of IEEE CVPR, Seattle, WA, USA, Jun. 2020.
[13] T. Zeng et al., "Joint communication and control for wireless autonomous vehicular platoon systems," IEEE Trans. Commun., vol. 67, no. 11, pp. 7907-7922, Nov. 2019.
[14] T. Li et al., "Federated optimization in heterogeneous networks," in Proc. of ACM MLSys, Austin, TX, USA, Mar. 2020.
[15] A. Ferdowsi and W. Saad, "Generative adversarial networks for distributed intrusion detection in the internet of things," in Proc. of IEEE GLOBECOM, Waikoloa, HI, USA, Dec. 2019.
Tengchan Zeng (S'18) received his Ph.D. degree from Virginia Tech in 2021. He is currently a Systems Engineer at Ford Motor Company. He was an exemplary reviewer for IEEE Transactions on Communications in 2021.
Aidin Ferdowsi (S'17) received his Ph.D. degree in electrical engineering from Virginia Tech. He is currently a Principal Machine Learning Engineer at Capital One. Dr. Ferdowsi was awarded the Outstanding Dissertation Award in all STEM majors from Virginia Tech. His research interests include machine learning, data science, and game theory.
Omid Semiari (S'14, M'18) received his Ph.D. from Virginia Tech in 2017. He is an assistant professor with the ECE department at the University of Colorado, Colorado Springs. His research interests include wireless communications (6G), machine learning for communications, distributed learning, and cross-layer network optimization.
Walid Saad (S'07, M'10, SM'15, F'19) received his Ph.D. degree from the University of Oslo in 2010. Currently, he is a Professor at the Department of Electrical and Computer Engineering at Virginia Tech, where he leads the Network sciEnce, Wireless, and Security (NEWS) laboratory. His research interests include wireless networks, machine learning, game theory, cybersecurity, unmanned aerial vehicles, cellular networks, and cyber-physical systems. Dr. Saad is the author/co-author of eleven conference best paper awards and of the 2015 IEEE ComSoc Fred W. Ellersick Prize. He is a Fellow of the IEEE.
Choong Seon Hong (S'95, M'97, SM'11) is working as a professor with the Department of Computer Science and Engineering, Kyung Hee University. His research interests include machine learning, mobile computing, federated learning, and satellite networking. He was an Associate Editor of the IEEE Transactions on Network and Service Management and the Journal of Communications and Networks, and an Associate Technical Editor of the IEEE Communications Magazine.
http://arxiv.org/abs/2307.01195v1
20230703175527
An imaged 15Mjup companion within a hierarchical quadruple system
[ "A. Chomez", "V. Squicciarini", "A. -M. Lagrange", "P. Delorme", "G. Viswanath", "M. Janson", "O. Flasseur", "G. Chauvin", "M. Langlois", "P. Rubini", "S. Bergeon", "D. Albert", "M. Bonnefoy", "S. Desidera", "N. Engler", "R. Gratton", "T. Henning", "E. E. Mamajek", "G. -D. Marleau", "M. R. Meyer", "S. Reffert", "S. C. Ringqvist", "M. Samland" ]
astro-ph.EP
[ "astro-ph.EP", "astro-ph.SR" ]
LESIA, Observatoire de Paris, Université PSL, CNRS, 5 Place Jules Janssen, 92190 Meudon, France antoine.chomez@obspm.fr Univ. Grenoble Alpes, CNRS-INSU, Institut de Planetologie et d'Astrophysique de Grenoble (IPAG) UMR 5274, Grenoble, F-38041, France; INAF – Osservatorio Astronomico di Padova, Vicolo dell'Osservatorio 5, I-35122 Padova, Italy Institutionen för astronomi, Stockholms universitet, AlbaNova universitetscentrum, SE-106 91 Stockholm, Sweden Univ. Lyon, Univ. Lyon1, ENS de Lyon, CNRS, Centre de Recherche Astrophysique de Lyon (CRAL) UMR5574, F-69230 Saint-Genis-Laval, France Université Côte d'Azur, OCA, CNRS, Lagrange, France Pixyl S.A., La Tronche, France ETH Zurich, Institute for Particle Physics and Astrophysics, Wolfgang-Pauli-Strasse 27, CH-8093 Zurich, Switzerland Max-Planck-Institut für Astronomie, Königstuhl 17, 69117 Heidelberg, Germany Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena CA 91109, USA Fakultät für Physik, Universität Duisburg-Essen, Lotharstraße 1, 47057 Duisburg, Germany Institut für Astronomie und Astrophysik, Universität Tübingen, Auf der Morgenstelle 10, 72076 Tübingen, Germany Physikalisches Institut, Universität Bern, Gesellschaftsstr. 6, 3012 Bern, Switzerland Department of Astronomy, University of Michigan, 1085 S. University Ave, Ann Arbor MI 48109, USA Landessternwarte, Zentrum für Astronomie der Universität Heidelberg, Königstuhl 12, 69117 Heidelberg, Germany Université Grenoble Alpes, CNRS, Observatoire des Sciences de l'Univers de Grenoble (OSUG), Grenoble, France Since 2019, the direct imaging B-star Exoplanet Abundance Study (BEAST) at SPHERE@VLT has been scanning the surroundings of young B-type stars in order to ascertain the ultimate frontiers of giant planet formation. Recently, the 17^+3_-4 Myr old HIP 81208 was found to host a close-in (∼ 50 au) brown dwarf and a wider (∼ 230 au) late M star around the central 2.6 M_⊙ primary. Alongside the continuation of the survey, we are undertaking a complete reanalysis of archival data aimed at improving detection performance so as to uncover additional low-mass companions. We present here a new reduction of the observations of HIP 81208 using PACO ASDI, a recent and powerful algorithm dedicated to processing high-contrast imaging datasets, as well as more classical algorithms and a dedicated PSF-subtraction approach. The combination of different techniques allowed for a reliable extraction of astrometric and photometric parameters. A previously undetected source was recovered at a short separation from the C component of the system. Proper motion analysis provided robust evidence for the gravitational bond of the object to HIP 81208 C. Orbiting C at a distance of ∼ 20 au, this 15 M_Jup brown dwarf becomes the fourth object of the hierarchical HIP 81208 system. Among the several BEAST stars which are being found to host substellar companions, HIP 81208 stands out as a particularly striking system. As the first stellar binary system with substellar companions around each component ever found by direct imaging, it yields exquisite opportunities for thorough formation and dynamical follow-up studies. An imaged 15 M_Jup companion within a hierarchical quadruple system. Based on data obtained with the ESO/VLT SPHERE instrument under programs 1101.C-0258(A/E). A. Chomez<ref>,<ref> V. Squicciarini<ref>,<ref> A.-M. Lagrange<ref>,<ref> P. Delorme<ref> G. Viswanath <ref> M. Janson <ref> O. Flasseur<ref> G. Chauvin<ref>
M. Langlois<ref> P. Rubini<ref> S. Bergeon<ref> D. Albert<ref> M. Bonnefoy <ref> S. Desidera <ref> N. Engler <ref> R. Gratton <ref> T. Henning <ref> E.E. Mamajek <ref> G.-D. Marleau <ref>,<ref>,<ref>,<ref> M.R. Meyer <ref> S. Reffert <ref> S.C. Ringqvist <ref> M. Samland <ref> August 1, 2023 § INTRODUCTION The formation of planets in binary systems, and chiefly the tightest (≲ 50 au) ones, is a vibrant subject in exoplanetology. Indeed, binary systems are complex environments from a dynamical point of view, severely affecting the size of protoplanetary disks and their capability to either form massive enough cores to undergo runaway gas accretion <cit.> or induce low enough Toomre Q values to trigger gravitational instability <cit.>. Whether substellar companions can form critically depends on the stars' properties, their physical separations, and the initial disk properties <cit.>. From an observational standpoint, ∼ 300 S-type substellar companions (companions orbiting one component of a binary system) within binary systems are known to date <cit.>, their frequency being anti-correlated with binary separation <cit.>, as well as triple and higher-order planet-hosting systems in strongly hierarchical configurations <cit.>. Indirect techniques have identified a few systems where both components host substellar companions <cit.>; notably, HD 41004 stands out due to its close A-B separation (∼ 23 au): with an m sin i = 18 M_Jup brown dwarf orbiting at 0.017 au from component B (0.4 M_⊙) and an m sin i = 2.5 M_Jup companion around component A (0.7 M_⊙) on an orbit with semi-major axis a=1.6 au and a large eccentricity e=0.4 <cit.>. In the course of a new analysis of archival data obtained through the SPHERE high-contrast imager <cit.>, we detected a new companion in the young (17^+3_-4 Myr) HIP 81208 system. HIP 81208 was observed as part of the BEAST survey dedicated to the search for exoplanets around 85 B-type members of the Scorpius Centaurus (Sco-Cen) association <cit.>. Located in the Upper Centaurus-Lupus (UCL) subgroup of Sco-Cen at a distance of 148.7^+1.5_-1.3 pc <cit.>, it has been recently identified as a triple system, where the A component is a 2.58 ± 0.06 M_⊙ B9V star, the B component is a 67^+6_-7 M_Jup brown dwarf orbiting HIP 81208 A at about 50 au, and the C component is a low-mass star of 0.135^+0.010_-0.013 M_⊙ orbiting HIP 81208 A at about 230 au <cit.>. The newly found companion <cit.> is orbiting the C component, making HIP 81208 the first binary system with substellar companions around both components ever discovered through direct imaging (DI). We present SPHERE data and data reduction in Sect. <ref>; after confirming the bound nature of the companion candidate, we describe its properties in Sect. <ref>. A discussion on the peculiar properties of this quadruple system follows in Sect. <ref>. § DATA ANALYSIS §.§ SPHERE data HIP 81208 was observed twice by SPHERE <cit.>, on August 6, 2019 and on April 5, 2022.
Both observations were conducted using the telescope in pupil-stabilized mode. This allows the use of angular and spectral differential imaging <cit.> post-processing techniques. In each case, an unsaturated, non-coronagraphic image (point spread function; PSF) of the primary was obtained for flux calibration purposes, as well as a coronagraphic exposure with a waffle pattern applied to the mirror <cit.> for centering purposes, before and after the main coronagraphic exposures. The N-ALC-YJH-S coronagraph was used, allowing the infrared dual-band imager and spectrograph <cit.> to observe in the K1 and K2 bands while the integral field spectrograph <cit.> observed in the YJH bands. Because the source of interest for this letter is outside the field of view of the IFS, only IRDIS data will be considered hereafter. Table <ref> summarizes the observing conditions as well as the setup for the two observations, the same as that already used in <cit.>. §.§ Data reduction Data reduction was performed on the COBREX Data Center, a modified and improved server based on the High Contrast Data Center (HC-DC, formerly SPHERE Data Center) <cit.>, aimed at improving detection capabilities with existing SPHERE images by means of the PACO algorithm. More specifically, we used PACO ASDI <cit.> as well as the No-ADI routine embedded in the standard SPHERE reduction software <cit.> as post-processing algorithms. The pre-reduction pipeline (i.e. going from raw data to calibrated 4D datacubes) is identical to the one implemented in the HC-DC, performing dark, flat, distortion, and bad pixel corrections. PACO models the noise using a multi-Gaussian model at a local scale on small patches, allowing a better estimation of the temporal and spectral correlation of the noise. The full details on the improvements of the pre-reduction pipeline, the optimization regarding the ASDI mode of PACO, and the obtained performances are described in <cit.>. PACO provides a contrast gain of between 1 and 2 magnitudes at all separations, as well as reliable and statistically grounded signal-to-noise (S/N) detection and contrast maps in an unsupervised fashion, compared to more classical algorithms like the TLOCI-ADI used in <cit.>. Prompted by the results emerging from our PACO and No-ADI reductions (Sect. <ref>), we additionally developed a custom PSF subtraction routine, building a local PSF model to remove C and enhance detection capabilities in its immediate surroundings (see the detailed description in Appendix <ref>). As regards the true North, the pixel scale, and the pupil offset, we adopted the long-term IRDIS calibrations by <cit.>: a pixel scale of 12.258 ± 0.004 mas/pixel (K1 band) and 12.253 ± 0.003 mas/pixel (K2 band), a true North orientation of (-1.77 ± 0.04)^∘, and a pupil offset of (136 ± 0.03)^∘. §.§ Detected sources Figure <ref> shows the PACO S/N maps and the residual No-ADI maps for both epochs: a new source (hereafter Cb) was detected in close proximity (∼ 120 mas) to C. PACO detects Cb with a high S/N (26.2 in 2019, 16.9 in 2022), placing a high confidence level on the detection. Although Cb is visible by eye in the No-ADI reduction, no reliable measurement could be extracted for the source because of strong contamination by C, as the No-ADI reduction does not provide tools to deconvolve sources. Conversely, PACO includes a cleaning option designed for this case: it removes the contribution of C while characterizing Cb, enabling the extraction of reliable photometry <cit.>.
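As a side illustration of the astrometric conversion implied by the calibration values quoted above, the Python sketch below turns a companion offset measured on a derotated, North-up/East-left IRDIS frame into an on-sky separation and position angle. It is a minimal geometric sketch, not the astrometric pipeline of this work; it assumes that the parallactic-angle and pupil-offset derotation has already been applied, and the example offsets are arbitrary.

import numpy as np

def detector_to_sky(dx_pix, dy_pix, pixscale_mas=12.258, true_north_deg=-1.77):
    # Convert an offset (pixels) measured on a derotated, North-up/East-left frame
    # into a separation (mas) and a position angle (degrees East of North),
    # applying the K1 pixel scale and the true-north correction quoted above.
    sep = np.hypot(dx_pix, dy_pix) * pixscale_mas
    pa = (np.degrees(np.arctan2(-dx_pix, dy_pix)) + true_north_deg) % 360.0
    return sep, pa

sep, pa = detector_to_sky(-4.0, 8.5)   # arbitrary example offset
print(f"separation = {sep:.1f} mas, PA = {pa:.1f} deg")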
We attribute the previous non-detection of Cb to self-subtraction artefacts introduced near C by TLOCI, the baseline algorithm used to process IRDIS observations in BEAST's standard reduction pipeline (any ADI-based algorithm with a subtraction step, i.e., one that does not fit the planet and the systematics simultaneously, suffers from this self-subtraction effect). Our PSF subtraction routine, specifically designed to investigate the surroundings of C, enabled us to solidly reveal Cb and its first Airy ring over almost 360° (see Fig. <ref>) at both epochs. Figure <ref> shows the residuals after removing both C and Cb. The highest residuals in both epochs are barely above the local background noise in each of the 48 individual frames, allowing us to robustly determine the position, the contrast, and the associated uncertainties by deriving the mean and standard deviation of the resulting 48 independent measurements. Notably, the source is not only visible with both algorithms at both epochs but also in each raw frame, before any post-processing algorithm is applied (Fig. <ref>); unlike the nearby first Airy ring of C, its separation from C does not vary with wavelength, while its rotation around C during the ADI sequence is consistent with the expected motion of a physical source (see Appendix <ref>), thus ensuring it is not an artefact. As an additional check, we also characterized B and C, finding results compatible within 1σ with those presented by <cit.> (we attribute the larger astrometric uncertainties emerging in our analysis to the fact that the previous analysis did not include primary centering uncertainties, which dominate the positional uncertainty here). Furthermore, besides redetecting all known background sources with astrometric and photometric values similar to the initial analysis, we imaged an additional faint source (CC14) owing to PACO's deeper sensitivity. Astrometric and photometric results for B, C, Cb, and CC14 are provided in Appendix <ref>. § A LOW-MASS SUBSTELLAR COMPANION AROUND HIP 81208 C §.§ Companion confirmation Figure <ref> shows the proper motion of B, C, and Cb as opposed to the already known background sources. As for B and C, the motion of Cb is inconsistent (at 6σ) with the observed motion of field interlopers. We anticipate that the non-null relative motion between C and Cb is consistent with orbital motion, as will be quantified in Sect. <ref>. Having confirmed the common proper motion of Cb with the already known HIP 81208 A, B, and C, we carefully investigated an alternative possibility: namely, that A-B and C-Cb constitute two independent binary members of UCL projected at a short separation (Appendix <ref>). We therefore assessed the probability that a UCL member, unseen by Gaia <cit.>, could end up as an interloper to any BEAST star. A final false alarm probability of 1.3 × 10^-3, referring to at least one detection across the entire current survey (47 stars), was obtained (meaning that we expect, on average, 1 false positive out of ∼40000 observed B stars), placing a high confidence level on the membership of A (+B) and C (+Cb) in a single quadruple system. §.§ Physical properties As in <cit.>, an estimate of the photometric mass of the newly discovered object was obtained using the madys tool <cit.>: we combined the (K1, K2) contrasts, the 2MASS K_s magnitude of the primary <cit.>, the system's color excess (0.011 ± 0.021 mag), and its age (17^+3_-4 Myr).
For the purpose of accounting for theoretical uncertainties on the final estimates, the computation was performed by comparison with two different models suited for the age and mass range of interest: the Ames-Dusty models <cit.> and the BT-Settl models <cit.>. The resulting values were averaged to yield a final mass estimate: M_Cb = 14.8 ± 0.4 Additional details on the derivation of this estimate and its associated uncertainty are provided in Appendix <ref>. Based on models, this best-fit mass would correspond to expected 2MASS H=10.28±0.07 mag and K_s=9.74 ± 0.04 mag. In the same fashion, the average effective temperature, surface gravity and bolometric luminosity returned by this comparison are = 2050_-20^+35 K, logg=4.087^+0.011_-0.022, and logL/=-3.31 ± 0.03. However, we acknowledge that we currently have only 2 photometric points probing similar wavelengths and that, even in the case of young substellar objects with much more comprehensive data available, systematic errors intrinsic to atmosphere and evolution models might be up to an order of magnitude larger than formal uncertainties <cit.>. These differences can arise for instance from uncertainties on the initial entropy after accretion, possible age difference between planet and host star as well theoretical difficulties in handling atmospheric dust <cit.>. Figure <ref> displays the position of B, C and Cb on a color-magnitude diagram, all matching the M sequence, while Table <ref> reports the main outputs of the astrophotometric characterization of the object. §.§ Orbital properties We ran the code <cit.> to derive information on the orbital parameters of the companion starting from the astrometry and their best-fit masses of Cb and C. We used their relative astrometry, as measured by PSF subtraction, because it is affected by much smaller uncertainties than absolute astrometry (Appendix <ref>). The sampling tool is based on the 3.0 library, using a mix of custom move functions to alleviate potential multi-modality problems and the cyclicity of angular variables. 100 walkers, between 100000 and 400000 iterations, and 10 temperatures were used. The priors include a uniform log prior for sma (a ∈ [0,80] au). The upper value in sma corresponds roughly to 0.3 times the projected separation between A and C, which, following <cit.> should allow for the stability of Cb in the binary system, given the masses of A and C, and assuming a null eccentricity. We nonetheless considered eccentricities e ∈ [0,0.4] range for the priors[Note that assuming a larger range (up to 0.9) does not significantly change the results.]. Given the limited information available, the orbital parameters are poorly constrained (see Fig. <ref> and in Appendix <ref>). The a distribution peaks at 17 au (T ∼ 190 yr), with a tail extending to more than 40 au (T ∼ 600 yr). The eccentricity is not constrained. The inclination of its orbital plane is i=73± 20^∘. Figure <ref> shows the 1000 best draws from our posterior distributions for Cb determined based on the loglikelihood. We also ran the MCMC on B and C companions, finding a, e and i compatible with those found by <cit.>, albeit with larger uncertainties, due to the larger error bars found in the present astrometric measurements. § DISCUSSION AND CONCLUDING REMARKS Before BEAST began, no ≲ 30 companion was known around stars more massive than 3 – with only sporadic detections by RV in the mass range 2.5-3 <cit.>, questioning their very existence <cit.>. 
The discovery of a circumstellar ∼ 11 planet around the 6-10 binary b Centauri <cit.> and one (possibly two) brown dwarfs close to the deuterium-burning limit around the ∼ 9 μ^2 Scorpii <cit.> first provided evidence for such a population, opening up a plethora of questions about its genesis. The architecture of the HIP 81208 system turns out to be unique in many respects (Table <ref>). Not only is the B-type primary surrounded by a brown dwarf and a M-type stellar companion; the additional discovery, presented in this Letter, of a ∼ 15 companion to the C component makes it the first binary system with substellar companions to both components ever discovered by imaging. Even if considered in isolation, a ≃ 15 companion at ∼ 20 au from a late M-type star such as HIP 81208 C would be deemed remarkable. Figure <ref> shows the mass ratio of confirmed giant planets and brown dwarfs (M ∈ [1, 80]) around late M-type stars (M_* ∈ [0.08,0.3]): among five such DI companions, only two – WISE J0720-0846 <cit.> and LHS 2397a B <cit.> – have orbital separations <50 au; both of them, however, are characterized by a much larger mass ratio (q ≈ 0.7) than q_Cb, indicative of a binary-like formation <cit.>. Including indirect techniques (with a ∈ [3,50] au), only two a<10 au objects, discovered via microlensing – OGLE-2016-BLG-0263L b <cit.> and OGLE-2013-BLG-0911L b <cit.>, both with a small q ≈ 0.03 – are added[Data from the Exoplanet Encyclopaedia (http://exoplanet.eu/http://exoplanet.eu/).]. While a full formation analysis of HIP 81208 Cb is beyond the scope of this work, it is worth mentioning possible formation pathways for the object and the whole quadruple system. Pivotal to a full understanding of the observed architecture is the formation of HIP 81028 C: the M-star could be an outcome of either turbulent fragmentation of a star-forming core <cit.> – possibly followed by inward migration <cit.> – or gravitational instability (GI) within the disk of HIP 81208 A <cit.>, the rough cutoff between these mechanisms (∼ 500 au) likely depending on environment and stellar mass <cit.>. If the former is true, the circumstellar disks of A and C would be truncated due to mutual gravitational actions <cit.>. Provided no significant alteration of initial orbital parameters, a tentative estimate of the truncation radii R_T for the two stars could be derived as in <cit.> by drawing 10^6 (a,e) values for the A-C pair from the corresponding posterior distributions: R_T,C = 0.88 · R_R = 0.88 · r_p ·0.49 · q^2/3/0.6 · q^2/3 + ln(1+q^1/3) = 26_-9^+16 au, and R_T,A = 0.88 · (r_p-R_R) = 130_-40^+70 au, where r_p indicates the periastron of the orbit, and the Roche lobe R_R is derived from the empirical formula by <cit.>. The current position of Cb would be only marginally within the locations of its alleged parent disk, whence it would have sprouted either via CA <cit.> or GI. According to CA models, the formation of a high-q 15 object around a late M-type star is not expected <cit.>, also due to formation timescales (∼ 10^6-7 Myr) exceeding typical disk lifetimes by one or two order of magnitude at separations ≳ 10 au <cit.>. Conversely, GI could represent a formation channel for Cb <cit.>, provided a unusually large disk-to-star ratio of ∼ 30% (≈ 40) compared to the expected 1-10% <cit.>. Interestingly enough, the C-Cb separation would be within the typical range of M-type stellar binaries <cit.>. 
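The truncation radii quoted above can be reproduced with a short Monte Carlo over the A–C orbital posterior, as in the sketch below. The Eggleton Roche-lobe expression and the 0.88 scaling are those written in the text; the posterior samples and the mass ratio q are placeholders, since the actual values come from the orbital fit and the adopted stellar masses.

```python
import numpy as np

def eggleton_roche(q, r_p):
    """Roche-lobe radius of the secondary at separation r_p (Eggleton's
    empirical formula), with q = M_C / M_A."""
    q23 = q ** (2.0 / 3.0)
    return r_p * 0.49 * q23 / (0.6 * q23 + np.log(1.0 + q ** (1.0 / 3.0)))

def truncation_radii(a, e, q):
    """R_T,C = 0.88 R_R and R_T,A = 0.88 (r_p - R_R), evaluated at periastron."""
    r_p = a * (1.0 - e)
    r_roche = eggleton_roche(q, r_p)
    return 0.88 * r_roche, 0.88 * (r_p - r_roche)

rng = np.random.default_rng(1)
n_draw = 1_000_000                            # 10^6 (a, e) draws, as in the text
a_AC = rng.normal(250.0, 40.0, n_draw)        # au -- placeholder for the A-C posterior
e_AC = rng.uniform(0.0, 0.4, n_draw)
q_CA = 0.13 / 2.6                             # placeholder M_C / M_A
rt_C, rt_A = truncation_radii(a_AC, e_AC, q_CA)
for name, r in [("R_T,C", rt_C), ("R_T,A", rt_A)]:
    lo, med, hi = np.percentile(r, [16, 50, 84])
    print(f"{name} = {med:.0f} -{med - lo:.0f} +{hi - med:.0f} au")
```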
According to the alternative scenario, C itself would have formed via GI within the disk of A: simulations are able to produce objects with masses as high as 0.12 already around <1.2 stars <cit.>; despite the lack of GI models suited to B-type hosts, evidence for a more-than-linear dependence of disk mass on stellar mass <cit.>, coupled to observed circumstellar disks to late B stars spanning hundreds of au <cit.>, tentatively hints towards such possibility. As a companion to a disk-borne object, HIP 81208 Cb would intriguingly retain – whether formed in-situ or via dynamical capture <cit.> – the hierarchical level of a satellite <cit.>. A detailed characterization of the orbital parameters of B, C and Cb, and in particular their mutual inclinations, will discriminate between the two scenarios, helping shed light on this unique multiple system. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (COBREX; grant agreement n° 885593). SPHERE is an instrument designed and built by a consortium consisting of IPAG (Grenoble, France), MPIA (Heidelberg, Germany), LAM (Marseille, France), LESIA (Paris, France), Laboratoire Lagrange (Nice, France), INAF - Osservatorio di Padova (Italy), Observatoire de Genève (Switzerland), ETH Zürich (Switzerland), NOVA (Netherlands), ONERA (France) and ASTRON (Netherlands) in collaboration with ESO. SPHERE was funded by ESO, with additional contributions from CNRS (France), MPIA (Germany), INAF (Italy), FINES (Switzerland) and NOVA (Netherlands). SPHERE also received funding from the European Commission Sixth and Seventh Framework Programmes as part of the Optical Infrared Coordination Network for Astronomy (OPTICON) under grant number RII3-Ct-2004-001566 for FP6 (2004-2008), grant number 226604 for FP7 (2009-2012) and grant number 312430 for FP7 (2013-2016). This work has made use of the SPHERE Data Centre, jointly operated by OSUG/IPAG (Grenoble), PYTHEAS/LAM/CeSAM (Marseille), OCA/Lagrange (Nice), Observatoire de Paris/LESIA (Paris), and Observatoire de Lyon (OSUL/CRAL). T.H. acknowledges support from the European Research Council under the Horizon 2020 Framework Program via the ERC Advanced Grant Origins 83 24 28. G-DM acknowledges the support of the DFG priority program SPP 1992 Exploring the Diversity of Extrasolar Planets (MA 9185/1) and from the Swiss National Science Foundation under grant 200021_204847 PlanetsInTime. Parts of this work have been carried out within the framework of the NCCR PlanetS supported by the Swiss National Science Foundation. R.G. and S.D. acknowledge the support of PRIN-INAF 2019 Planetary Systems At Early Ages (PLATEA). This research has made use of the SIMBAD database and VizieR catalogue access tool, operated at CDS, Strasbourg, France. This work is supported by the French National Research Agency in the framework of the Investissements d’Avenir program (ANR-15-IDEX-02), through the funding of the "Origin of Life" project of the Univ. Grenoble-Alpes. This work was supported by the Action Spécifique Haute Résolution Angulaire (ASHRA) of CNRS/INSU co-funded by CNES. This research has made use of data obtained from or tools provided by the portal exoplanet.eu of The Extrasolar Planets Encyclopaedia. aa § DETAILS ON THE CUSTOM PSF SUBTRACTION ROUTINE The custom PSF subtraction routine, sketched in Sect <ref>, proceeded in two steps based on small stamps of 47×47 pixels roughly centered on C. 
In the first step we used as a model a 47x47 pixel stamp of the mean of the two off-axis PSF, acquired just before and just after the observations. We recursively removed C and then Cb from each of the 48 individual frames of the coronagraphic sequence by minimizing the residuals between the data stamp and our empirical PSF model stamp. This minimisation was performed by injecting negative versions of this empirical PSF model in a local grid centered on C oversampled by a factor 100 in each spatial dimension. We used three free parameters each for C and Cb, namely the oversampled pixels positions in x and y and the source to model contrast, and selected the models that minimise the absolute value of the residuals on small optimisation zones (1000 by 1000 oversampled pixels). After this first step, we noticed that the residuals after subtraction of this first PSF model were characterized by 1) a relatively high intensity, of the order of 1-2% of the local flux of C and 2) a systematic shape independent of time but with an alignment following the parallactic angle rotation. We interpreted these features as hints that the local PSF at C's position slightly differed from the calibration off-axis PSF. We therefore added C again (using its fitted parameters) on each of the 48 stamps with both C and Cb removed, effectively building a local PSF with the same pupil orientation on each frame. Afterwards, we applied a similar approach to standard ADI, which median-combined the 48 resulting sub-frames obtained at different parallactic angles without derotating them, producing a local pupil-stabilised PSF. As in ADI, the weak residuals from the subtraction of Cb, already close to the background noise and rotating around C with the parallactic angle, were further removed from this local PSF model by means of the non-derotated median. As a second step, we repeated the same minimisation approach, starting from the 48 small stamps containing both C and Cb, but using this local pupil-stabilised PSF as model instead of the off-axis PSF. Minimization directly provides the best-fit parameters for position and contrast for C and Cb on each frame, albeit in a pupil-tracking rotating frame of reference. These measurements were then derotated to sky coordinates and averaged to obtain the results shown in Table <ref>. The intensity of the residuals was reduced by a factor ∼5 after this second step compared to the first step, the absolute value of the highest residuals in any individual frames at both epochs being barely above the local background noise. We determined the uncertainties associated with our estimates of positions and flux by deriving the standard deviation of the resulting 48 independent measurements, which naturally and robustly include all sources of systematic errors that cause frame to frame variations, such as tip-tilt jitter or atmospheric transmission variability during the observing sequence. The main remaining systematic – namely the uncertainty on the position of the central star, constant over the sequence – was quadratically added to the measured uncertainties and dominates the error budget. In the peculiar case of the relative position of Cb around C (reported in Table <ref>), this systematic is naturally canceled out and the dynamical fits performed in subsection <ref> consequently employ the much smaller errors bars obtained when this systematic contribution is taken out. § RAW CORONAGRAPHIC FRAMES DISPLAYING THE ADI ROTATION AROUND C OF CB As mentioned in Sect. 
<ref>, the presence of the source Cb is visually evident even in raw coronagraphic frames despite its close vicinity to HIP 82108 C. A subset of the 48 individual frames is provided in Fig. <ref>. § ASTROPHOTOMETRIC RESULTS We provide in Table <ref> a summary of the astrometric and photometric results obtained for HIP 81208 Cb (Sec. <ref>) by means of the PACO and PSF subtraction reductions described in Sec. <ref>. Values for the newly found background source CC14 – only detected by – are provided in Table <ref>. § ON THE BOUND NATURE OF A-B AND C-CB Whilst the proper motion analysis of Cb firmly allowed us to conclude on its non-background nature and its common motion to A, B and C, it does not excluded, in principle, an alternative hypothesis: The A-B system is totally independent of the C-Cb system, both being Sco-Cen binaries projected by chance at a short separation from one another. In order to quantify the probability of this alternative scenario, we adapted the argument already produced for HIP 81208 B <cit.> and μ^2 Scorpii b <cit.>. After defining indicative coordinate limits for UCL as (l, b) = [313^∘,343^∘] × [2^∘,28^∘], we recovered N_0=3835 bona-fide members to this subgroup from the Gaia DR2-based list of Sco-Cen members assembled by <cit.>. At the distance and age of Sco-Cen, the census of the stellar population of the association is reasonably complete[According to BHAC15 isochrones <cit.> at solar metallicity, a 0.08  star aged 15 Myr has absolute G=11.45, corresponding to an apparent G ∼ 17 mag at the mean separation of UCL (∼ 140 pc); the survey is virtually complete for G ∈ [12,17] mag <cit.>.]. However, a source can be overlooked by Gaia if it happens to be located too close to a brighter star, that is, if the Δ G between the former and the latter is larger than the maximum contrast achievable by the satellite at the corresponding angular separation s. Let us therefore define as shaded area the circular region, centered on a star, within which the average detection efficiency δ̅(s,Δ G) of Gaia equals 50% for a given apparent G magnitude, and effective separation s_eff the corresponding radius. Our goal here is to quantify the number of these phantom UCL stars, so as to enable an estimation of the probability of spotting at least one of them within the entire BEAST survey. Intuitively, the computation hinges upon 1) the total shaded area A_s, obtained as the sum of individual shaded areas for all Gaia sources within the boundaries of UCL, and 2) the number – corrected for completeness – of UCL members, N_UCL. As regards the former, we queried Gaia DR3 <cit.> finding approximately 8 · 10^7 stars within the coordinate limits of UCL. We then recovered from <cit.> the detection efficiency of Gaia DR2 as a function of Δ G and s, δ(s,Δ G). In this way, we were able to compute, for every Gaia source i, the effective separation s_eff as a function of the apparent G magnitude of a hypothetical phantom star, G: s_eff,i = s | δ̅(s,Δ G_i) = 0.5, where δ̅(s,Δ G_i) = 1/s^2∫_0^sδ(s̃,Δ G_i) s̃^2 ds̃ and Δ G_i(G) = G-G_i. Summation over Gaia stars yields the total shaded area as a function of G: A_s(G) = ∑_iπ s_eff,i^2. The probability density function (PDF) of phantom stars can be now expressed as: n_P(G) = A_s(G)/A_UCL-A_s(G)· N_UCL· 0.5 ·ζ_Gaia(G), where ζ_Gaia(G) is the apparent G magnitude PDF of UCL stars, and the factor 0.5 accounts for the expectation that 50% of these object were already detected by Gaia. 
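A sketch of how the shaded areas entering n_P(G) can be evaluated in practice is given below. The Gaia detection-efficiency curve is a toy placeholder, and the area-weighted running average is our reading of the definition of δ̄(s, ΔG) above; only the 50% threshold is taken from the text.

```python
import numpy as np

def s_eff(delta_of_s, s_grid, threshold=0.5):
    """Effective shading radius: the separation s at which the area-weighted
    mean detection efficiency within s reaches `threshold`. `delta_of_s` is
    the Gaia detection efficiency delta(s, dG) sampled on `s_grid` (arcsec)
    for a fixed contrast dG; here it is a user-supplied placeholder curve."""
    # area-weighted running mean: (2 / s^2) * int_0^s delta(u) u du
    ds = s_grid[1] - s_grid[0]
    running = 2.0 * np.cumsum(delta_of_s * s_grid) * ds / np.maximum(s_grid ** 2, 1e-12)
    above = np.where(running >= threshold)[0]
    return s_grid[above[0]] if above.size else s_grid[-1]

# Placeholder efficiency curve for one Gaia source and one phantom magnitude:
s_grid = np.linspace(1e-3, 12.0, 2000)                     # arcsec
delta = 1.0 / (1.0 + np.exp(-(s_grid - 2.0) / 0.3))        # toy logistic ramp
area_shaded_one_source = np.pi * s_eff(delta, s_grid) ** 2

# Summing pi * s_eff^2 over all Gaia sources within the UCL boundaries, as a
# function of G, gives A_s(G); the phantom-star PDF n_P(G) then follows the
# expression above.
```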
To cope with the incompleteness of the initial mass function (IMF) of the UCL sample at its faint end (i.e., for unseen substellar objects), we recovered the sample of Upper Scorpius[Upper Scorpius is, together with UCL and Lower Centaurus-Crux, one of the three subgroups in which Sco-Cen is classically divided <cit.>. We verified through a Kolmogorov-Smirnov test with α=0.05 that the absolute G magnitude distribution for US stars – selected from the same sample adopting coordinate boundaries as in <cit.> – is, for G>4, consistent with its UCL analog.] (US) members by <cit.>, pushing completeness down to ∼ 10. 2MASS J magnitudes were converted into Gaia G magnitudes based again on BHAC15 isochrones at 15 Myr, yielding the ζ_US(G) PDF of the sample; a new normalized ζ(G) could then be built by combining the Gaia-based UCL list and the US sample, setting a sharp transition between the former and the latter at the dimmest magnitude where they intersect (Ĝ=16.8 mag): above this value, the two distributions start to differ significantly due to Gaia incompleteness. The number of unseen sources n_U recovered in this way amounts to ≈ 700 (∼ 18%), and their PDF is given by:
n_U(G) = 0 for G < Ĝ , and n_U(G) = N_UCL· [ζ_US(G) - ζ_Gaia(G)] for G ≥Ĝ .
In order to compute the probability that a phantom or an unseen star is hidden by a BEAST star, we consider as a typical BEAST star an object as bright in the apparent G band as the mean of the sample (G=5.29); for a contrast Δ G ≈ 8 mag, the effective shading separation s_eff,B of this star starts being larger than the half-edge (5.5") of the IRDIS FOV (A_IRD = 11" × 11"); we therefore impose s_eff,B(Δ G) = min(s_eff,B(Δ G), √(A_IRD/π)). The differential probability associated with the event as a function of G is given by:
f(G) = f_P(G) + f_U(G) = n_P(G) ·π s_eff,B^2(G)/A_s(G) + n_U(G) · A_IRD/A_UCL ,
where the second term takes into account that unseen sources should be spread over the entire UCL. Integration of f(G) yields p=∫_G=5.29^G=25 f(G) dG = 2.8 · 10^-5. The false alarm probability associated with finding at least one such object across the whole survey, having completed until now the observations of 47 stars, is equal to:
FAP = 1-(1-p)^47 = 1.3 × 10^-3 .
In order to evaluate the impact of the assumption of a constant age for US – which is instead known to have experienced a long-lasting star formation history ranging between 15 and 5 Myr ago <cit.> – the conversion of J magnitudes into G magnitudes was repeated by supposing a constant age of 5 Myr. The resulting FAP ≈ 1.4 · 10^-3 shows that the result is robust against this model assumption, firmly allowing us to exclude the alternative scenario in favor of the one positing a single quadruple system.
§ CHARACTERIZATION OF CB
The derivation of a photometric mass estimate for HIP 81208 Cb is mediated by madys <cit.>, as in previous BEAST publications <cit.>. After averaging the K1 and K2 contrasts derived by PACO over the two epochs, the conversion of those contrasts into calibrated apparent magnitudes was operated by means of the 2MASS K_s magnitude of the primary. Since HIP 81208 A is classified as a B9V star, the impact of the approximation K_s,A≈ K1_A≈ K2_A is well within the photometric error budget <cit.>. Interstellar reddening towards HIP 81208 is known to be rather small <cit.>, translating, in the K_s band, into a negligible A_K_s = 0.003 ± 0.006 mag adopting A_K_s/E(B-V) = 0.306 <cit.>. Likewise, the adopted parallax and age estimates reflect those used in the <cit.> paper.
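The contrast-to-magnitude step just described amounts to the following short conversion. All numerical inputs in the example call are placeholders; the adopted primary magnitude, parallax and measured contrasts are those referenced above.

```python
import numpy as np

def companion_absolute_mag(contrast_mag, ks_primary, plx_mas, a_ks=0.003):
    """Contrast (mag) -> calibrated absolute magnitude, using the 2MASS K_s
    magnitude of the primary as photometric reference (the approximation
    K_s,A ~ K1_A ~ K2_A discussed above) and correcting for the small
    extinction A_Ks."""
    apparent = ks_primary + contrast_mag
    dist_modulus = 5.0 * np.log10(100.0 / plx_mas)   # 5 log10(d_pc) - 5
    return apparent - dist_modulus - a_ks

# Placeholder inputs, for illustration only:
print(companion_absolute_mag(contrast_mag=9.3, ks_primary=6.2, plx_mas=7.1))
```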
Having obtained absolute magnitudes for the object: K1 = 9.90 ± 0.04 mag, K2 = 9.48 ± 0.04 mag, we built a set ℳ of substellar evolutionary models that are adequate for the age and mass range of interest, while providing at once synthetic SPHERE magnitudes: such a set, to which observed magnitudes were compared, comprehends the Ames-Dusty models <cit.> and the BT-Settl models <cit.>. Details on the fitting algorithm, encompassing all sources of uncertainty within a Monte Carlo framework, can be found in <cit.>. The output of each model i ∈ℳ corresponds to a triplet (M_min,i,M_opt,i,M_max,i) equal to the (16^th,50^th,84^th) percentiles of the posterior mass distribution; the two outputs were averaged in the following manner: M_min = inf_i ∈ℳ ({M_min,i}) M_opt = i ∈ℳmean ({M_opt,i}) M_max = sup_i ∈ℳ ({M_+,i}) with the goal of embedding theoretical uncertainties onto the final estimate. The posterior mass distribution returned by each model can be easily translated into the posterior distribution of any astrophysical parameter of interest provided by the isochrone grids. We were therefore able to derive in a self-consistent way (using similar equations to Eq. <ref>-<ref>)) the best-fit estimates for effective temperature, surface gravity and bolometric luminosity. Likewise, we also computed synthetic 2MASS H and K magnitudes as a helpful first-guess estimate for follow-up studies: H = 10.28±0.07 mag, K_s = 9.74 ± 0.04 mag. We highlight that independent mass determinations were derived a posteriori, and used as a control sample, starting from the best-fit logL/ through the recent ATMO2020 <cit.> and Sonora Bobcat <cit.> grids[These grids were not included in ℳ because they are currently not equipped with SPHERE filters.]; these best-fit masses of 14.51^+0.16_0.15 and 14.44^+0.15_0.14, respectively, are consistent with our best-fit mass estimates. Nonetheless, as already mentioned in Sect. <ref>, we are not able to exclude the possibility of unaccounted systematic effects that are common to all the adopted models. § ORBITAL FIT OF CB: CORNER PLOT Given the small separation between C and Cb, most sources of systematic error are either canceled out (centering error) or significantly decreased (platescale and True North error), enabling an accurate determination of their relative separation at both epochs (Appendix <ref>). Starting from the relative C-Cb astrometry as measured by PSF subtraction and their best-fit masses (cp. Table <ref>), we ran an MCMC code based on the code <cit.> in order to derive the orbital parameters of Cb's orbit around C. The input parameters for the MCMC are a logarithmically uniform prior for semimajor axis (a ∈ [0, 80] au) and an eccentricity e ∈ [0, 0.4]. The posterior distribution for the orbital parameters of Cb derived in this work is provided in Fig. <ref>.
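As a final illustration, the priors adopted for the C–Cb fit translate into a log-prior of the following form. The strictly positive lower edge of the semi-major-axis range is an assumption, and the remaining orbital elements and the exact parameterization of the fitting code are not reproduced here.

```python
import numpy as np

A_MIN, A_MAX = 0.1, 80.0     # au; a strictly positive lower edge is assumed
E_MIN, E_MAX = 0.0, 0.4

def log_prior(a_au, ecc):
    """Log-uniform prior on the semi-major axis and uniform prior on the
    eccentricity, as adopted for the Cb orbital fit; other elements
    (inclination, angles, epoch) would carry their own standard priors."""
    if not (A_MIN < a_au < A_MAX and E_MIN <= ecc <= E_MAX):
        return -np.inf
    return -np.log(a_au)     # p(a) proportional to 1/a
```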
http://arxiv.org/abs/2307.01177v1
20230703174058
Neural Hilbert Ladders: Multi-Layer Neural Networks in Function Space
[ "Zhengdao Chen" ]
cs.LG
[ "cs.LG", "math.FA", "math.OC", "math.PR", "stat.ML" ]
Neural Hilbert Ladders: Multi-Layer Neural Networks in Function Space
Zhengdao Chen
========================================================================================================
The characterization of the function spaces explored by neural networks (NNs) is an important aspect of deep learning theory. In this work, we view a multi-layer NN with arbitrary width as defining a particular hierarchy of reproducing kernel Hilbert spaces (RKHSs), named a Neural Hilbert Ladder (NHL). This allows us to define a function space and a complexity measure that generalize prior results for shallow NNs, and we then examine their theoretical properties and implications in several aspects. First, we prove a correspondence between functions expressed by L-layer NNs and those belonging to L-level NHLs. Second, we prove generalization guarantees for learning an NHL with the complexity measure controlled. Third, corresponding to the training of multi-layer NNs in the infinite-width mean-field limit, we derive an evolution of the NHL characterized as the dynamics of multiple random fields. Fourth, we show examples of depth separation in NHLs under ReLU and quadratic activation functions. Finally, we complement the theory with numerical results to illustrate the learning of RKHS in NN training.
§ INTRODUCTION
There has been a long-standing interest in understanding how neural networks (NNs) work from aspects such as approximation, optimization and generalization. In the supervised learning setting, for example, NNs can be seen as parameterizing a particular family of functions on the input domain, within which one can search for a good fit through training. Then, to explain what is special about NNs, it is crucial to investigate the space of functions (a.k.a. the hypothesis class) they represent. As modern NNs often involve a huge number of parameters, it is worthwhile to understand the space of functions that can be represented by NNs with unlimited width. As a foundational result, the universal approximation theorem (e.g., ) shows that, given enough width, NNs are capable of approximating virtually all reasonable functions, which suggests the vastness of this space. A more interesting question, though, is to also find a complexity measure of functions that quantifies their representation cost in terms of the error rate of approximation, which would yield insights into what kinds of functions are more naturally represented by NNs. This question has been studied fruitfully in the literature for shallow (a.k.a. two-layer) NNs <cit.>, but remains mostly open for multi-layer NNs. Meanwhile, to study the sample complexity of learning NNs, prior works have proved generalization guarantees that are based not on the number of parameters but on certain norms of the parameters (e.g., ), which could serve as a complexity measure of NNs from a generalization point of view. Then, an important question is whether there is a complexity measure associated with width-unlimited multi-layer NNs that unifies the two perspectives of approximation and generalization. Another critical aspect of deep learning is the training of NNs, which involves a non-convex optimization problem but can often be solved sufficiently well by variants of gradient descent (GD). Although remarkable progress has been made in proving optimization guarantees for various settings, it remains intriguing what kind of exploration in function space is induced by the training of NNs.
The Neural Tangent Kernel (NTK) analysis provides a candidate theory via a linearized approximation of NN training <cit.>, which implies that NNs represent functions in a particular reproducing kernel Hilbert space (RKHS) determined by their initialization. However, the NTK theory is unable to model the feature learning that occurs in the training of actual NNs <cit.>, which is crucial to the success of deep learning. Hence, the present work is motivated by the following questions that are central yet largely open: * How to characterize the function space explored by the training of multi-layer NNs with arbitrary width? * Can we associate with it a complexity measure that governs both approximation and generalization? To answer these questions, by viewing an L-layer NN as a ladder of RKHSs with L-levels, we propose a function space L and a complexity measure L that satisfy the following theoretical properties: * Width unlimited: L contains all functions that can be represented by an L-layer NN with arbitrarily-wide hidden layers; * Approximation guarantee: Any function f in L can be approximated by an L-layer NN at a cost that depends on L(f); * Generalization guarantee: Generalization errors can be upper-bounded for learning in L with L under control; * Feature learning: Gradient descent training of L-layer NNs in a feature-learning regime corresponds to a learning dynamics in L; * Depth separation: There exist choices of L and the activation function under which L is strictly smaller than L+1. To our knowledge, this is the first proposal satisfying all of the properties above – or in fact, just (<ref>) and (<ref>) together – thus opening up a new perspective in understanding deep NNs. The rest of the paper is organized as follows. In Section <ref>, we introduce the Neural Hilbert Ladder (NHL) model and the NHL spaces and NHL complexities that it gives rise to. In Section <ref>, we prove static correspondences between multi-layer NNs and NHLs, verifying (<ref>) and (<ref>). In Section <ref>, we prove generalization bounds for learning NHLs, verifying (<ref>). In Section <ref>, we show that the training of multi-layer NNs in the mean-field limit translates to a particular learning dynamics of the NHL, verifying (<ref>). In Section <ref>, we show examples of depth separation in the NHL spaces under ReLU and quadratic activations, verifying (<ref>). In Section <ref>, we present numerical results on synthetic tasks that complement the theory and illustrate the learning of kernels during training. Prior literature is discussed in Section <ref>. § BACKGROUND §.§ Basic Notations and Definitions We use bold lower-case letters (e.g. and ) to denote vectors and bold upper-case letters (e.g. and ) to denote random variables or random fields. ∀ m ∈ℕ_+, we write [m] {1, ..., m}. When the indices i, j, t and s and variables and ' appear without being specified, by default, they are considered as under the universal quantifiers “∀ i, j ∈ [m]”, “∀ t, s ≥ 0” and “∀, ' ∈”. Suppose 𝒰 is a measurable space, and we let (𝒰) denote the the space of probability measures on 𝒰. ∀μ∈(𝒰), we let L^2(𝒰, μ) denote the space of square-integrable functions on 𝒰 with respect to μ, and ∀ξ∈ L^2(𝒰, μ), we write ξ_L^2(𝒰, μ) (∫ |ξ(u)|^2 μ(du))^1/2. If is a 𝒰-valued random variable, we let () ∈(𝒰) denote its law and let [ϕ()] = ∫ϕ(u) [ ()](du) denote the expectation of any measurable function ϕ: 𝒰→ applied to . Suppose additionally that 𝒰 is equipped with a norm (or quasi-norm) ·_𝒰. 
We define (𝒰; M) { u ∈𝒰: u _𝒰≤ M } for M > 0, write (𝒰) (𝒰; 1) for the unit ball in 𝒰, and let 𝒰̂{ u ∈𝒰: u _𝒰 =1 } denote the unit sphere in 𝒰. In addition, ∀μ∈(𝒰), we define μ_𝒰, p (∫ h _𝒰^p μ(dh))^1/p for p ≥ 0 and μ_𝒰, ∞ as the essential supremum of the function ·_𝒰 on 𝒰 with respect to μ. For N ∈ℕ_+, we let Lip(^N) denote the space of functions on ^N with Lipschitz constant at most 1. For a function σ: →, we call it non-expansive if ∀ u ∈, |σ(u)| ≤ |u|; we call it (non-negative) homogeneous if ∀ u ∈, a ≥ 0, σ(au) = a σ(u). Let denote the input domain, which we assume throughout the paper to be a compact subset of ^d. We let 𝒞 denote the space of continuous functions on equipped with the sup-norm f _∞sup_∈ |f()| and the Borel sigma-algebra. For ν∈() and f ∈𝒞, we write ∼νf()∫_ f() ν(d). §.§ Multi-Layer Neural Network (NN) We consider an L-layer (fully-connected) NN with width m as expressing a function on of the following form: f_m() 1/mim a_i h_i^(L-1)() , where we define h_i^(1)() _i^⊺· = jd z_i,j x_j, and ∀ l ∈ [L-2] , h_i^(l+1)() 1/mjm W_i, j^(l)h_j^(l)() . Here, σ: → is the activation function, and each z_i,j, W^(l)_i, j and a_i is a weight parameter of the input layer, the lth middle layer and the output layer, respectively. For simplicity, we will omit the bias term in the main text but discuss it in Appendix <ref>. We refer to h_i^(l) as the pre-activation function represented by the ith neuron in the lth hidden layer. The 1/m factor in (<ref>) is often called the mean-field scaling, which allows large m limits to be considered while the parameters stay scale-free. Unlike the NTK scaling, the mean-field scaling allows feature learning to occur, including in the infinite-width limit <cit.>. The comparison with NTK and other scaling choices are further discussed in Section <ref>. §.§ Distributional View of Shallow NN When L=2, the model defined by (<ref>) reduces to the following expression of a shallow NN: f() = 1/m∑_i=1^m a_i _i^⊺· , which can be equivalently represented as f() = ∫ a ^⊺·μ_m(da, d), if we define μ_m(da, d) 1/m∑_i=1^m δ_a_i(da) δ__i(d) as a probability measure on ×^d. More generally, we may consider functions that can be represented in a distributional form as f() = ∫ a ^⊺·μ(da, d) , for some μ∈(×^d). In particular, the Barron norm <cit.> of a function f can be defined as f _inf_μ∫ |a| μ(da, d) , with the infimum taken over all μ∈(×^d) such that (<ref>) holds. One can further define the Barron space to contain all functions with a bounded Barron norm, which is equivalent to the ℱ_1 space considered by <cit.> if σ is homogeneous. Favorable theoretical properties of the Barron space have been derived in prior literature, including approximation guarantees and Rademacher complexity bounds <cit.>. Meanwhile, importantly, the training dynamics of a two-layer NN can be understood through the Wasserstein gradient flow of the underlying probability measure μ <cit.>. Hence, through the distributional perspective, prior works have developed a promising picture of shallow NNs by viewing it as learning in the Barron space. A central motivation of this work is to extend this theory to NNs with more than two layers. §.§ Reproducing Kernel Hilbert Space (RKHS) A Hilbert space is a vector space equipped with an inner product, ⟨·, ·⟩, such that the norm defined through ·⟨·, ·⟩ makes it a complete metric space. Of particular interest to learning theory is a type of Hilbert spaces consisting of functions on the input domain, which we define below. 
Let κ: ×→ be a symmetric and positive semi-definite function, which we call a kernel function. It is associated with a particular Hilbert space on , whose definition, existence and uniqueness are given by the following foundational result, often called the Moore-Aronszajn theorem <cit.>: There exists a unique Hilbert space, , consisting of functions on and equipped with the inner product ⟨· , ·⟩_, which satisfies the following properties: * ∀∈, κ(, ·) ∈; * ∀ f ∈, ∀∈, ⟨ f, κ(, ·) ⟩_ = f(); * the span of the set {κ(, ·) }_∈ is dense in . The Hilbert space that satisfies these properties is called the Reproducing Kernel Hilbert Space (RKHS) associated with (a.k.a. reproducing) the kernel function κ. The RKHS plays an important role in classical learning theory as well as mathematics and physics, and we refer the readers to <cit.> for further background. In Section <ref>, we will discuss efforts from prior literature in understanding NNs through kernels and RKHSs. § BASIC THEORY §.§ Neural Hilbert Ladder We begin by introducing a way to create an RKHS from a distribution of functions on . If σ is the activation function of interest and μ∈(), we define κ_μ: ×→ by κ_μ(, ') ∫h()h(')μ(dh) , which is symmetric and positive semi-definite. Hence, there is an RKHS on associated with the kernel function _μ, which we denote by _μ. By applying this recipe iteratively, we are able to construct a hierarchy of RKHSs. At the ground level, we define ^(1){↦^⊺· : ∈^d } to be the space of linear functions on ^d. Through the canonical isomorphism with ^d, ^(1) inherits an inner product from the Euclidean inner product on ^d, which makes ^(1) the RKHS associated with the kernel function ^(0)(, ') := ^⊺·'. Then, for L ≥ 2, we define an L-level Neural Hilbert Ladder (NHL) as follows: Suppose each of ^(2), ..., ^(L) is an RKHS on , and ∀ l ∈ [L-1], there exists μ^(l)∈(^(l)) such that ^(l+1) = _μ^(l), which is the RKHS associated with κ^(l)κ_μ^(l). Then, we say that (^(l))_l ∈ [L] is an L-level NHL induced by the sequence of probability measures, (μ^(l))_l ∈ [L-1]. In addition, we say that a function f on belongs to the NHL if f ∈^(L). Put differently, to define an NHL, at each level l we choose a probability measure supported on ^(l) – which is equivalent to the law of a random field on – to generate κ^(l) via (<ref>), which then determines ^(l+1). Thus, an NHL is a ladder of RKHSs constructed by interleaving them with random fields and kernel functions, as illustrated in Figure <ref>. §.§ Complexity Measures and Function Spaces Let p ∈ [2, ∞]. If , ' are two RKHSs on , we define 𝒟_p (→' ) inf_μ∈(), _μ = 'μ_, p . Then, given an RKHS on , we define Lp() inf_^(2),  ... , ^(L-1)∏_l=1^L-1𝒟_p (^(l)→^(l+1) ) , with the infimum taken over all choices of ^(2), ..., ^(L-1) as RKHSs. Heuristically speaking, it quantifies a certain price of arriving at as the Lth-level of an NHL. Then, we define the (L, p)-NHL complexity of a function f as: Lp(f) inf_ ( f _·Lp ( ) ) , with the infimum taken over all RKHS . Finally, we define the (L, p)-NHL space, L_p, to contain all functions with a finite (L, p)-NHL complexity: L_p { f: Lp(f) < ∞} = ⋃_Lp() < ∞ . Note that unlike in the kernel theories of NNs (see Section <ref>), the function space L_p is not a single RKHS but an infinite union of them. Some properties of L_p and Lp are in order: Let L ≥ 2 and p ∈ [2, ∞]. It holds that: * ^(L)_p is a vector space and Lp is non-decreasing in p. * If ⊆(^d) and σ is non-expansive, then ∀ f ∈L_p, f _∞≤Lp(f) and f is Lp(f)-Lipschitz on . 
Hence, L_p ⊆. * If σ is homogeneous, then Lp is equal for all p and L_p is a quasi-Banach space with Lp as the quasi-norm. These results are proved in Appendix <ref>. Specially, when σ is homogeneous (e.g. ReLU), Theorem <ref>(<ref>) allows us to define LLp and LL_p, which we will call the L-level NHL complexity and the L-level NHL space, respectively. §.§.§ Example: L=2 When σ is homogeneous and L=2, we see from (<ref>) and Lemma <ref> that 2(f) =  inf_μ^(1)∈(^(1))  f _^(2) , where the infimum is taken over all probability measures μ^(1) supported within the unit sphere of ^(1). Meanwhile, it is known <cit.> that the Barron norm of a function f on can be rewritten as f _ = inf_ξ, ρ  ( ∫ |ξ()|^2 ρ(d ) )^1/2 , where the infimum is taken over all ρ∈(𝕊^d-1) and all measurable functions ξ: 𝕊^d-1→ such that f() = ∫ξ() ^⊺·ρ(d ) . Note that the isomorphism between ^(1) and ^d means an equivalence in the roles played by μ^(1) and ρ. Thus, by Lemma <ref>, we see that: If σ is homogeneous, then f _ = 2(f). Therefore, 2 is identical to the Barron space. In fact, when L=2, (<ref>) reduces to 2 = ⋃_ρ∈(^d)_ρ, where we define _ρ as the RKHS associated with the kernel function κ_ρ(, ') ∫σ(^⊺·) σ(^⊺·') ρ(d ). This agrees with the decomposition of the Barron space as a union of RKHSs <cit.>. Thus, L can be seen as generalizing the Barron space to cases where L > 2. §.§ Alternative Form via Coupled Random Fields For each l ∈ [L-1], μ^(l) can be interpreted as the law of a random field on , ^(l), whose sample paths belong to ^(l). In fact, the random fields can be defined on a common probability space to yield an alternative formulation of the NHL that will become relevant later: In Definition <ref>, there exist random fields, (^(l))_l ∈ [L], that are defined on a common probability space and satisfy the following properties: * ^(1), ..., ^(L-1) are mutually independent, and ∀ l ∈ [L-1], μ^(l) = Law(^(l)); * There exist scalar random variables ^(1), ..., ^(L-2) such that ∀ l ∈ [L-2], ^(l+1)() = ^(l)^(l)() | ^(l+1) , where · | · denotes the conditional expectation, and ^(l+1)_^(l+1)^2 = (^(l))^2 | ^(l+1). In particular, we can choose each ^(l) to be measurable with respect to ^(l) and ^(l+1); * There exists a scalar random variable measurable with respect to ^(L-1) such that f() = ^(L-1)() , and f _^(L)^2 = ^2. The proof is given in Appendix <ref> and builds on the next observation: Let σ be non-expansive and μ∈() with ∫ h _∞^2 μ(dh) < ∞. Then, a function f on belongs to _μ if and only if ∃ξ∈ L_2(, μ) such that f() = ∫ξ(h) h()μ(dh) , ∀∈ . Moreover, f _μ = inf_ξξ_L^2(, μ), with the infimum over all ξ∈ L_2(, μ) satisfying (<ref>). This lemma can be viewed as generalizing prior insights on the duality between RKHS and random basis expansions <cit.> – while these results are applicable when the basis functions are indexed by a compact set, here, the basis functions {σ(h(·)) }_h ∈ are indexed by an infinite-dimensional function space. In Appendix <ref>, we prove this lemma based on a general result of <cit.>. 
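The L = 2 picture above lends itself to a direct Monte-Carlo illustration: sampling weights from ρ yields both an empirical estimate of κ_ρ and, through a coefficient function ξ, a random-feature representation of a function in the associated RKHS whose L²(ρ) norm bounds its RKHS norm, in the spirit of the lemma above. The sketch below is purely illustrative (ReLU activation, toy data, all variable names ours).

```python
import numpy as np

relu = lambda u: np.maximum(u, 0.0)
rng = np.random.default_rng(0)

m, d = 8192, 4
omega = rng.standard_normal((m, d))
omega /= np.linalg.norm(omega, axis=1, keepdims=True)   # rho supported on the unit sphere

X = rng.standard_normal((64, d))                         # a batch of inputs
Phi = relu(omega @ X.T)                                  # sigma(w_i . x_k), shape (m, 64)

# Empirical estimate of kappa_rho(x, x') = E_{w ~ rho}[sigma(w.x) sigma(w.x')]:
K_rho = Phi.T @ Phi / m

# A function in H_rho, written as f(x) = E_{w ~ rho}[xi(w) sigma(w.x)]:
xi = 3.0 * omega[:, 0] - omega[:, 1]                     # an arbitrary square-integrable xi
f_vals = Phi.T @ xi / m                                  # f evaluated on the batch
rkhs_norm_bound = np.sqrt(np.mean(xi ** 2))              # ||xi||_{L2(rho)} >= ||f||_{H_rho}
```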
§ REALIZATION AND APPROXIMATION BY NEURAL NETWORKS For p ∈ [2, ∞], we shall define quantities M^(1)_m, p, ..., M^(L)_m, p≥ 0 associated with the NN defined in Section <ref> by M^(1)_m, p ( 1/mim_i _2^p )^1/p ,  M^(L)_m, p ( 1/mim a_i^2 )^1/2 , M^(l+1)_m, p ( 1/mim ( 1/mjm |W^(l)_i, j|^2 )^p/2 )^1/p , ∀ l ∈ [L-2] , Modulo the 1 / m scaling factor, they coincide with the per-layer Frobenius norms of the weight parameters when p = 2, and more generally, they appear in the group norm of finite-width NNs defined in <cit.>. Meanwhile, with the 1 / m scaling factor, we note that these quantities admit width-independent upper bounds if each parameter is sampled i.i.d. §.§ NN as NHL First, we show that L-layer NNs indeed represent functions in ^(L)_p, verifying property (<ref>). ∀ p ∈ [2, ∞], f_m ∈L_p with Lp(f_m) ≤∏_l=1^L M^(l)_m, p. In particular, f_m belongs to the NHL of (^(l)_m)_l ∈ [L], where we define ^(1)_m ^(1), and ∀ l ∈ [L-1], ^(l+1)_m _μ^(l)_m with μ_m^(l)1/mimδ_h_i^(l) being the empirical measure (on functional space) of the pre-activation functions of the neurons in the lth hidden layer. The proof is given in Appendix <ref> and leverages Lemma <ref>. Thus, a multi-layer NN can indeed be viewed as an NHL, whose NHL-complexity is controlled by the layer-wise norms of its parameters. In particular, we see that the series of random fields can be constructed based on the pre-activation functions in the respective hidden layers. §.§ NHL can be Approximated by NNs Conversely, if is bounded, we can show that any function in L_∞ can be approximated efficiently by an L-layer NN: Suppose σ is non-expansive and is bounded within 𝕊^d-1. Given any f ∈L_∞ and ν∈(), there exists a function f_m represented by an L-layer NN with width m such that ∼ν| f_m() - f() |^2≤L-1/m ( L∞(f) )^2. This result is proved in Appendix <ref>, where we use an inductive-in-L argument to show that a randomized approximation strategy based on sampling each μ^(l) independently can achieve low approximation error in expectation. Theorem <ref> implies that a function in L_∞ can be approximated with an L_2 error less that ϵ > 0 by an L-layer NN with O(L^5 / ϵ^4) number of parameters in total. In comparison, the approximation bound for functions in the neural tree space defined in <cit.> requires O(1/ϵ^4L+6), which grows exponentially in the depth. The contrast highlights a property of multi-layer NNs – that all neurons in one hidden layer share the same pre-synaptic neurons of the preceding layer – which is captured by the NHL by not by the neural tree space representation, whose branching structure incurs the exponential dependence on the depth. Thus, when σ is homogeneous, Theorems <ref> and <ref> establish a two-way correspondence between L-layer NNs and functions in L with the approximation cost governed by L, thus verifying both (<ref>) and (<ref>). § GENERALIZATION GUARANTEES §.§ Empirical Risk Minimization We consider a general task of fitting a target function f^*: →𝒴⊆. Concretely, we look for a function f that minimizes the population risk defined as (f) ∼νl(f(), f^*()), where ν∈() is an underlying data distribution on and l is a differentiable loss function on ×𝒴 (e.g., in L_2 regression, l(ŷ, y) = 1/2(ŷ - y)^2). In supervised learning, instead of having access to ν directly, we are typically given a training set of size n, S = {1, ..., n}⊆, sampled i.i.d. from ν, and we write ν_n = 1/nknδ_k∈(). 
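For concreteness, the sketch below instantiates the mean-field parameterization defined earlier at finite width and evaluates the per-layer quantities M^(l)_{m,p}, whose product bounds the (L,p)-NHL complexity of f_m. It is a minimal numpy illustration with tanh standing in for a generic non-expansive activation; the width, depth and random inputs are arbitrary.

```python
import numpy as np

def init_mean_field_nn(d, m, L, rng):
    """Parameters of an L-layer mean-field NN: input weights Z (m x d),
    middle weights W^(l) (m x m) for l = 1..L-2, output weights a (m,)."""
    return (rng.standard_normal((m, d)),
            [rng.standard_normal((m, m)) for _ in range(L - 2)],
            rng.standard_normal(m))

def forward(x, Z, Ws, a, sigma=np.tanh):
    """f_m(x) with the 1/m factor on every hidden and output layer."""
    h = Z @ x                                    # h^(1)(x)
    for W in Ws:
        h = (W @ sigma(h)) / W.shape[1]          # h^(l+1)(x) = (1/m) W^(l) sigma(h^(l)(x))
    return (a @ sigma(h)) / a.shape[0]

def layer_norms(Z, Ws, a, p=2):
    """The quantities M^(l)_{m,p} for l = 1..L; their product bounds the NHL complexity."""
    M = [np.mean(np.linalg.norm(Z, axis=1) ** p) ** (1.0 / p)]       # M^(1)
    for W in Ws:
        row_rms = np.sqrt(np.mean(W ** 2, axis=1))                   # ((1/m) sum_j W_ij^2)^(1/2)
        M.append(np.mean(row_rms ** p) ** (1.0 / p))                 # M^(l+1)
    M.append(np.sqrt(np.mean(a ** 2)))                               # M^(L)
    return M, float(np.prod(M))

rng = np.random.default_rng(0)
Z, Ws, a = init_mean_field_nn(d=5, m=1024, L=3, rng=rng)
print(forward(rng.standard_normal(5), Z, Ws, a), layer_norms(Z, Ws, a)[1])
```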
A typical strategy is to find a function within a chosen function space that achieves a low empirical risk defined as _n(f) l(f(), f^*()), where we write ℰ_ for ℰ_∼ν_n for simplicity. Then, the question of generalization is whether the discrepancy between and _n decays sufficiently fast as n increases. In our case, assuming that σ is homogeneous, we consider learning in the function space L. Classical learning theory suggests us to prove uniform upper bounds on the discrepancy through the Rademacher complexity of L. While the Rademacher complexity of RKHS is known <cit.>, ^(L) is not one RKHS but a union of infinitely many of them, and hence a new approach is needed. §.§ Rademacher Complexity Recall that the empirical Rademacher complexity of a function space ℱ with respect to the set S` is defined by _S(ℱ) _ [ 1/nsup_f ∈ℱknτ_k f(_k) ], where = [τ_1, ..., τ_n] is a vector of i.i.d. Rademacher random variables. Our main result in this section is the following: If ⊆(^d) and σ is homogeneous, then ∀ M > 0, _S((^(L), M)) ≤ M (√(2 log(2) L) + 1) / √(n). The proof is given in Appendix <ref>, where we carry out an inductive-in-L argument inspired by that in <cit.> for bounding the Rademacher complexity of multi-layer NNs with finite group norms, combined with a strategy in <cit.> for reducing its dependency on L from exponential to O(√(L)). Combining this result with Theorem <ref>(<ref>) and classical generalization bounds based on Rademacher complexity (e.g. ), we derive the following generalization guarantee for learning in the space of NHLs, thus verifying (<ref>): Suppose that ⊆(^d), 𝒴⊆ [-1, 1], σ is homogeneous, and ∀ y ∈𝒴, the function ŷ↦ l(ŷ, y) is ω-Lipschitz on [-1, 1]. Then, ∀δ > 0, with probability at least 1-δ over the i.i.d. sampling of a sample S of size n in , it holds for all functions f with L(f) ≤ 1 that (f) ≤_S(f) + 2 ω (√(2 log(2) L) + 1) / √(n) + 3 √(log(2 / δ)) / √(2n). § TRAINING DYNAMICS §.§ Gradient Flow (GF) Given the correspondence between NNs and NHLs shown in Section <ref>, we may regard the training of NNs as a instantiating a particular learning rule for solving empirical risk minimization within L, which we further elucidate in this section. Typically in practice, we first initialize an NN by sampling its parameters randomly and then train it by performing variants of GD on the parameters with respect to the empirical risk. We assume below that σ is differentiable and its derivative σ' is Lipschitz and bounded. At t=0, each W_i, j, 0, a_i, 0 and _i, 0 is sampled independently from , ∈() and ∈(^d), respectively. Moreover, and have zero mean, has a finite fourth-moment, has a finite covariance, and is bounded. Note that Assumption <ref> is standard in prior works on the mean-field theory of shallow NNs (e.g. ), and it is satisfied if σ is e.g. tanh or sigmoid, though not ReLU. For simplicity, we consider GD dynamics in the continuous-time limit – also called the gradient flow (GF) – where the parameters evolve over time t (added as a subscript) according to a system of ordinary differential equations: it =   - t() it1 () it1 (), a_i, t =   - t() iiL-1 () , and ∀ l ∈ [L-2], ijtl+1 = - t() itl+1() itl+1 ()jtl() , where t() ∂_ŷ l(ŷ, f^*()) |_ŷ = f_m, t(), itL-1() a_i itL-1(), and ∀ l ∈ [L-2], jtl() 1/m ( imijtlitl+1() itl+1() ) . 
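A discrete-time sketch of this gradient flow for L = 3 and squared loss is given below. The per-parameter velocities follow our reading of the equations above (each parameter moves at an O(1) rate despite the 1/m forward scaling, which is what permits feature learning); signs and scalings should be checked against those equations before the sketch is relied upon, and the toy data at the end are arbitrary.

```python
import numpy as np

def train_mean_field_3layer(X, y, m=512, steps=2000, dt=0.05, seed=0):
    """Discrete-time version of the L = 3 gradient flow with squared loss
    (so zeta_t(x) = f_t(x) - f*(x)), sigma = tanh, biases omitted."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Z = rng.standard_normal((m, d))
    W = rng.standard_normal((m, m))
    a = rng.standard_normal(m)
    for _ in range(steps):
        H1 = Z @ X.T                      # h_i^(1)(x_k), shape (m, n)
        S1 = np.tanh(H1)
        H2 = (W @ S1) / m                 # h_i^(2)(x_k)
        S2 = np.tanh(H2)
        f = (a @ S2) / m                  # f_{m,t}(x_k)
        zeta = f - y
        G2 = a[:, None] * (1.0 - S2 ** 2)          # g_i^(2) sigma'(h_i^(2)), with g_i^(2) = a_i
        G1 = (W.T @ G2) / m * (1.0 - S1 ** 2)      # g_j^(1) sigma'(h_j^(1))
        a -= dt * (S2 @ zeta) / n                  # - E_x[zeta sigma(h_i^(2))]
        W -= dt * ((G2 * zeta) @ S1.T) / n         # - E_x[zeta g_i^(2) sigma'(h_i^(2)) sigma(h_j^(1))]
        Z -= dt * ((G1 * zeta) @ X) / n            # - E_x[zeta g_i^(1) sigma'(h_i^(1)) x]
    return Z, W, a

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 2.0 * np.pi, size=(32, 1))
Z, W, a = train_mean_field_3layer(X, np.sin(2.0 * X[:, 0]))
```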
Note that GF causes not only the output function f_m, t but also the pre-activation functions in the hidden layers – which can be summarized by μ_m, t^(l)1/mimδ_h_i, t^(l) – to evolve, thus leading to a movement of the NHL represented by the model. The dynamics of (μ_m, t^(l))_l ∈ [L-1] is unfortunately not closed but depends intricately on the weight matrices. Nonetheless, we will show below that the dependencies the weight matrices can be subsumed by a mean-field description once we consider the infinite-width limit. §.§ Mean-Field NHL Dynamics Several prior works have considered the infinite-width limits of multi-layer NNs in the mean-field scaling and derived equations that govern their GF dynamics <cit.>. Following their insights, our goal is to uncover the learning dynamics of the NHL in the mean-field limit. To do so, we first write down the mean-field limit in the form of coupled random fields introduced in Section <ref>, as follows. Let _0 and _0 be a d-dimensional random vector with law and a scalar random variable with law , respectively, distributed independently from each other. Then, we introduce time-dependent random fields t1, ..., tL-1 and t1, ..., tL-1 on . At initial time, they are defined by 01() = _0^⊺· , 0L-1() = _0 , and 0l+1() = 0l() = 0, ∀ l ∈ [L-2]. Their evolution in time is given by, ∀ l ∈ [L-1], tl() =  0l() - ∫_0^t 'ζ_s(') ^(l-1)_t, s(, ') sl(') sl(') ds , tl() =  0l() - ∫_0^t 'ζ_s(') γ^(l+1)_t, s(, ') sl(') ds , where we define, ∀ l ∈ [L-1], _t, s^(l)(, ')  tl()sl(') , tsl'  tl() sl(') tl()sl(') , and _t, s^(0)(, ') = γ^(L)_t, s(, ') = 1. Note that by the definition above, tL-1() is independent of , and hence, with an abuse of the notation, we can also view tL-1 as a random variable. Lastly, we set f_t() = tL-1_t^(L-1)() and ζ_t() = ∂_ŷ l(ŷ, f^*()) |_ŷ = f_t(). We can show that, by the law of large numbers, the dynamics of f_t defined via the above is the infinite-width limit of the GF training of NNs considered in Section <ref>. Suppose Assumptions <ref> and <ref> hold. Then ∀ t ≥ 0, as m →∞, * f_m, t() f_t(); * ∀ l ∈ [L-1], the probability measure μ_m, t^(l) converges weakly to μ^(l)_t (tl) in all finite distributions, that is, ∀ N ∈_+, ∀1, ..., N∈, sup_g ∈Lip(^N)   | ∫ g(h(1), ..., h(N)) μ^(l)_m, t(dh) - ∫ g(h(1), ..., h(N)) μ^(l)_t(dh) | 0 If L ≥ 4, then for 2 ≤ l ≤ L-2, the random field tl defined through the above is actually deterministic, indicating a type of degeneracy in deep NNs under the mean-field scaling <cit.>. Randomness can be restored if we add a bias term to each layer that is randomly initialized (see Appendix <ref>). The proof of Theorem <ref> is given in Appendix <ref>. It applies to the more general scenario where bias terms are added and consists of two steps. First, we rewrite the limiting dynamics in a differential form with auxiliary random variables t1, ..., tL-2. Then, we prove the convergence of the dynamics as m →∞ using on a propagation-of-chaos-type argument <cit.>, which also appears in several prior works on the mean-field limit of multi-layer NNs <cit.>. Note that our result here is meant not to improve upon these prior works in techniques, but rather to demonstrate that the training dynamics in the mean-field limit can be interpreted as a learning dynamics of the NHL. Specifically, as a corollary of Lemma <ref>, we see that the limiting dynamics indeed defines an evolving NHL, with each ^(l+1)_t being the RKHS associated with the kernel function κ^(l)_tκ^(l)_t, t: Suppose Assumptions <ref> and <ref> hold. 
∀ t ≥ 0, f_t belongs to the NHL of (^(l)_t)_l ∈ [L], where we define ^(1)_t ^(1), and ∀ l ∈ [L-1], ^(l+1)_t _μ^(l)_t as the RKHS associated with κ_t^(l). Moreover, ∀ l ∈ [L-1], as m →∞, _m, t^(l) converges to _t^(l) almost surely. Hence, we name it the mean-field NHL dynamics. We note that unlike in the mean-field limit of shallow NNs, the dynamics of each ^(l)_t is no longer Markovian when L > 2. From the mean-field NHL dynamics defined above, we derive that the output function satisfies f_t() = 'ζ_t(') θ_t(, ') , where we define θ_t(, ') lLκ^(l-1)_t(, ') γ^(l)_t(, ') , and we set κ^(0)_t(, ') ^⊺·', γ^(L)_t(, ') 1 and γ^(l)_tγ^(l)_t, t, ∀ l ∈ [L-1]. Hence, one can view f_t as evolving according to a functional gradient flow – also called the residual dynamics in the mean-field theory of shallow NNs <cit.> – under a time-varying and data-dependent kernel function θ_t. It is analogous to the Neural Tangent Kernel (NTK) that governs the training dynamics of infinite-width NNs under a difference choice of scaling (<cit.>; see the discussions in Section <ref>). However, a crucial difference is that the NTK remains fixed during training – and hence the name “lazy training” <cit.> for the NTK model – whereas the model here exhibits feature learning as θ_t evolves during training, as is required by (<ref>). §.§.§ Example 1: L=2 When L=2, the mean-field NHL dynamics is Markovian and reduces to the following ODEs of the random variables _t and _t: _t =  ζ_t() _t^⊺· , _t =  _t ζ_t() _t^⊺· . In this case, one can fully characterize the NHL dynamics by the evolution of the joint law of _t and _t, which follows a Wasserstein gradient flow (WGF) in (^d+1), without an underlying probability space for defining the random variables. This allows us to recover the mean-field theory of shallow NNs, which we review in Section <ref>. In particular, under similar assumptions, the global convergence guarantees of the WGF <cit.> also apply to the mean-field NHL dynamics at L=2. Note that when L > 2, the probability-space-free representation becomes less convenient. On one hand, the random fields tl can no longer be reduced to finite-dimensional random variables when l > 1. On the other hand, the mean-field NHL dynamics (<ref>) becomes non-Markovian. For these reasons, similar to the view taken by <cit.>, we choose to formulate the general mean-field dynamics through the random field representation. §.§.§ Example 2: Deep Linear NN When σ is the identity function, the model becomes a linear NN, whose output function can be written as f_t() = _t^⊺·. Moreover, for all l ∈{2, ..., L}, ^(l)_t always contains the same set of functions, namely, the linear functions on ^d, except that their norms in ^(l)_t, which are governed by the kernel function κ^(l-1)_t, can differ with l and evolve over time. Below, we show the mean-field NHL dynamics reduces in this case to a finite-dimensional system. We consider the setting of fitting a linear target function f^*() = (^*)^⊺ with L_2 regression, and we define Σ1/nkn_k ·_k^⊺ and _t Σ· (_t - ^*). We assume for simplicity that has zero mean and unit variance while has the zero vector as its mean and the identity matrix as its covariance. Thanks to the linearity, each κ^(l)_t, s(, ') is bilinear in and ' while lts' does not depend on or '. In other words, ∃ K^(l)_t, s∈^d × d and c^(l)_t, s∈ℝ such that κ^(l)_t, s(, ') = ^⊺· K^(l)_t, s·' and tsl' = c^(l)_t, s. 
Then, (<ref>) reduces to _t = ( lL c^(l)_t K^(l-1)_t ) ·Σ· (_t - ^*) , where we set K^(0)_t I_d, c^(L)_t 1, and each K^(l-1)_t K^(l-1)_t, t, c^(l)_t c^(l)_t, t. Moreover, the mean-field NHL dynamics reduces to closed equations in K^(l)_t, s and c^(l)_t, s. For example, ∀ l ∈ [L-2], it holds that K^(l+1)_t, s =  ∫_0^t ∫_0^s c^(l+1)_r, p K^(l)_t, r·_r ·_p^⊺· K^(l)_p, s  dp  dr , c^(l)_t, s =  ∫_0^t ∫_0^s c^(l+1)_t, r c^(l+1)_s, p_r^⊺· K^(l)_r, p·ζ_p  dp  dr , The full system of equations is derived in Appendix <ref>. We see that although f_t is always a linear function, its training dynamics is nonlinear and non-Markovian, which is in contrast with, for example, the GF dynamics of plain linear regression: _t = Σ· (_t - ^*) . § ROLE OF DEPTH In light of the success of deep NNs in practice, a rich body of work has studied the depth separation question of finite-width NNs, that is, whether certain functions can be approximated much more efficiently by deeper NNs than shallow ones, where the efficiency is usually measured by the width required. Meanwhile, it remains intriguing how the depth separation question can be properly characterized for width-unlimited NNs. Under the NTK theory, for example, the NTK spaces corresponding to NNs with different depths are actually equivalent <cit.>. By contrast, below we will demonstrate some basic depth separation phenomena in width-unlimited NNs for two choices of the activation function by comparing the NHL spaces at different depths. §.§ Example 1: ReLU Activation We first study the case where σ is the ReLU function. On one hand, since a hidden layer is capable of expressing the identity function using a pair of ReLU neurons with opposite weights, it is straightforward to show that L does not shrink as L increases: If σ is ReLU, then ∀ L ∈_+, L⊆L+1 and L+1(f) ≤ 2 L(f), ∀ f ∈L. The proof is given in Section <ref>. On the other hand, <cit.> proved that the function f() = max{ 1 - _1, 0 } on ^d cannot be represented by any infinite-width shallow ReLU NN with a finite Barron norm, and hence it does not belong to 2. Meanwhile, it can be represented by a finite-width 3-layer NN, and thus, as a consequence of Theorem <ref>, it belongs to 3. Therefore, we can establish a strict separation between 2 and 3: When σ is ReLU, 2⊊3. §.§ Example 2: Quadratic Activation When σ(u) = u^2 is the quadratic function and the bias terms are included, it is easy to see that an L-layer NN can express polynomials whose maximum degrees do not exceed 2^L-1. Meanwhile, on scalar inputs, we can prove that L_p for any p ∈ [2, ∞] is exactly the set of all polynomials with degree at most 2^L-1. Suppose ⊆, σ is the quadratic function, and the bias terms are included as in Appendix <ref>. Then LL_p is the set of all polynomials with degree 2^L-1, ∀ p ∈ [2, ∞]. To prove this proposition, we can construct explicitly a maximal NHL by leveraging the finite-dimensionality of the polynomial space. Concretely, we may define μ^(l) to be supported on a finite spanning set of the space of all polynomials with degree at most 2^L-1. Then, we are left to show that such a construction forms the desired NHL, and the full proof is given in Appendix <ref>. Lemma <ref> thus shows that when σ is the quadratic function, depth separation occurs as a growth of the polynomial degree. What is more interesting is a comparison between L and an alternative way of generalizing the Barron space to deeper models through direct composition. 
When =, for L ≥ 2, we may define the compositional Barron space at level L by ^(L){ f = f^(L-1)∘⋯∘ f^(1): f^(l)∈, ∀ l ∈ [L-1] }, which is proposed in prior works such as <cit.> to characterize the functions expressed by deep NNs that interleave wide layers with layers with a fixed width (“bottleneck layers”). To contrast it with the NHL model, we compare L to ^(L) in the case where = and σ is the quadratic function, in which case = 2 is the space of all quadratic functions. Thus, ^(L) contains all functions that can written as a composition of L-1 quadratic functions, and hence ^(L) is a subset of all polynomials with degree no more than 2^L-1. However, the total degree of freedom in ^(L) grows only linearly in L (at most 3L-3), whereas in L it grows linearly in the degree of the polynomial, thus exponentially in L (as 2^L-1+1). This simple argument leads us to conclude on a separation between L and ^(L): Suppose ⊆ and σ is the quadratic function. If L > 2, then ^(L)⊊L. Moreover, if for some L' it holds that ^(L')⊇L, then we need L' = Ω(2^L). § NUMERICAL ILLUSTRATIONS §.§ Experiment 1: Linear NN To validate the NHL dynamics derived above for linear NNs, we compare its numerical solution with the GD training of an actual 3-layer linear NN on an L_2 regression task of learning a linear target ^*, as described in Section <ref>. We choose d=10, n = 50 and ν = 𝒩(0, I_d). In Figure <ref>, we plot the learning trajectories in the linear model space projected into the first two dimensions, i.e., v_t, 1 and v_t, 2. We see that the NHL dynamics solved by numerical integration closely predicts the actual GD dynamics when the width is large. Moreover, the NHL dynamics presents a nonlinear learning trajectory in the space of linear models, which is in contrast with, for example, the linear learning trajectory of performing linear regression under the population loss. §.§ Experiment 2: ReLU NN To gain insights into feature learning and the evolution of the NHL through training, we perform GD on 3-layer NNs with the ReLU activation on an L_2 regression task. We choose d=1, n = 20, m=512, the target function being f^*() = sin(2 ), and ν being the uniform distribution on [0, 2 π]. All parameters in the model, including untrained bias terms, are sampled i.i.d. from 𝒩(0, 1) at initialization. We see from Figure <ref>(b) that the pre-activation values across all neurons in the second hidden layer – which correspond to ^(2)_m, t and approximate ^(2)_t – move substantially through training, demonstrating the occurrence of feature learning. Furthermore, as shown in Figure <ref>(c), the movement results in a learned kernel function κ^(2)_t that bears the same periodicity as the target function, showing that the kernel function is adaptive through training. In particular, as measured by the Centered Kernel Alignment (CKA) score <cit.>, κ^(2)_t becomes more aligned with the target function during training – an important notion in the literature of learning kernels <cit.> – and more so than κ^(1)_t. It suggests that the space ^(L)_t can move closer to the target function via training, though a theoretical explanation for the alignment phenomenon is still lacking. § RELATED WORKS Below, we further discuss the novelty and significance of our work relative to existing literature. 
Function space of width-unlimited NNs As discussed in Section <ref>, prior works have proposed the function space of shallow NNs based on a total-variation-type norm, which is proved to control both the generalization error <cit.> and the dynamical approximation error <cit.>. Other works have also established the regularity properties <cit.> and representer theorems <cit.> of this space. Hence, for shallow NNs, a relatively complete picture has been established that covers approximation, generalization and optimization. For multi-layer NNs, however, a satisfactory theory for the function space and the complexity measure is missing for the lack of a suitable model. While the neural tree space <cit.> is an interesting attempt, it does not correspond directly with the training of NNs, and importantly, the neurons in this model do not have shared preceding layers, which leads to approximation error bounds that grow exponentially in the depth, as discussed in Section <ref>. Another series of studies including <cit.> focused on NNs with “bottleneck” layers that have fixed width, which can be viewed as expressing certain compositions or accumulations of functions in the Barron space. While these models are interesting in their own right, the bottleneck assumption is not often obeyed in practice. In contrast, our work allows all layers of the NN to have unlimited width, resulting in a different way of generalizing the Barron space – not by merely composing functions from the Barron space, but rather requiring a hierarchical generation of the RKHS. We compared the two models in the case of quadratic activation in Section <ref>, and a more comprehensive comparison between them is left for future work. Mean-field theory of NNs In the mean-field scaling, shallow NNs under training are analogous to an interacting particles system <cit.>. Hence, as described in Section <ref>, its infinite-width limit can be modeled as an integral over a probability measure (i.e., an expectation with respect to a random variable) on the parameter space, which evolves according to a Wasserstein GF during training <cit.>. Notably, under suitable conditions, the Wasserstein GF can be proved to converge to global minimizers of the loss <cit.>. Several works have then proposed mean-field-type models for multi-layer NNs via probability measures defined in different ways <cit.>. In particular, <cit.> proved law-of-large-numbers results similar to Theorem <ref> for the convergence of finite-width NNs to the mean-field limit. However, these works do not address the function space explored by these models, which is the main contribution of our work. The work of <cit.> proposed a distributional view on functional space to study the training dynamics of a type of partially-trained three-layer NNs in the infinite-width limit and define a complexity measure based on optimal transport distances of distributions on functional space. Their results require that only the last two layers of the model are trained, but their point of view sets the path for the current work. NTK theory If we replace the 1/m factor by 1/√(m) in (<ref>), we arrive at what is commonly called the NTK scaling of NNs. As shown by <cit.>, if we initialize the NN randomly and take m →∞ under this scaling, then the pre-activation functions in the hidden layers barely move throughout training, and thus, the GF dynamics can be well-approximated by its linearization around the initialization, which is described by a kernel GF with a fixed kernel (that is the NTK). 
In other words, the evolution of the output function can also be written as (<ref>) except that the kernel function θ_t is now independent of t. Thanks to this simplification, gradient descent is proved to converge to global minimum at a linear rate for over-parameterized NNs in the NTK regime <cit.>. Furthermore, generalization guarantees can be proved for such models through the learning theory of RKHS <cit.>. However, the fact that the hidden layer neurons and hence the kernel function remain fixed to their initialization indicates a lack of feature learning. For this reason, the NTK limit is described as a regime of “lazy training” <cit.>, and the NTK theory does not satisfy desideratum (<ref>). Several works have studied the limitations of the NTK regime compared to feature-learning regimes, both theoretically <cit.> and empirically <cit.>. NNs as kernels Besides the NTK theory, a number of prior works have also explored the connection between neural networks and kernels, by either proposing new kernels methods inspired by NNs <cit.> or by modeling NNs as kernels <cit.>, or both. As an exemplar of the latter, <cit.> proposed the conjugate kernel model of multi-layer NNs, together with a random feature scheme for approximating the kernel <cit.> and a theoretical guarantee that stochastic gradient descent (SGD) can learn a good solution in the conjugate kernel space in polynomial time <cit.>. However, like other efforts to model NNs a certain kernel, it does not satisfy desiderata (<ref>) or (<ref>) from Section <ref> as a theory for the function space of multi-layer NNs. Under our framework, the conjugate kernel space can be seen as a particular fixed NHL determined by the random initialization of the weights, which is quite restrictive. In contrast, the function space L is not one RKHS, but an infinite collection of them, in which learning can occur through the NHL dynamics. In the NTK scaling, a randomly-initialized NN in the infinite-width limit can also be viewed representing a function sampled from a Gaussian Process whose covariance function is connected to the NTK, thus leading to a Bayesian interpretation <cit.>. In particular, <cit.> showed that SGD training corresponds to a linear dynamics of the Gaussian Process and mimics Bayesian inference. However, like the NTK theory (and in contrast with ours), this analysis relies on a linear approximation of the training dynamics close to initialization and therefore does not model feature learning in the training of actual NNs. While the NHL framework also involves random fields, a fundamental difference that, here, while the hidden layers are modeled as random fields, the output function is always deterministic. Complexity measures of NNs With large numbers of parameters, NNs in practice often have enough capacity to fit data with even random labels <cit.>. Hence, to derive meaningful generalization bounds, researchers have looked for complexity measures of NNs that do no depend on the network size. For example, several complexity measures based on certain norms of their parameters have been proposed, both for shallow NNs <cit.> and for multi-layer ones <cit.>, which gave rise to generalization bounds that are independent of the number of parameters. In particular, the group norm in <cit.> is closely related to the NHL norm proposed in the current work, as the NHL norm of the function represented by an NN can be bounded by the group norm of the NN. 
Thus, the NHL norm can be regarded as a generalization of the group norm to the continuous, width-unlimited setup under the NHL model. Empirically, there is evidence that regularizing the parameter norms through weight decays improves the model performance <cit.>. Beyond lazy training Several efforts extend the NTK analysis beyond the lazy training regime by considering higher-order Taylor expansions of the GD dynamics or corrections to the NTK due to finite width or large depth <cit.>, but the function space implication of these proposals is not clear. Meanwhile, there have been efforts to understand the effect of different scaling choices on the behavior of the infinite-width limit <cit.>. In particular, <cit.> propose a third scaling choice different from both mean-field and NTK, called the maximum-update scaling, which exhibits feature learning while avoiding the degeneracy of the mean-field scaling mentioned in Remark <ref>. With nontrivial mathematical techniques, several works have studied the training dynamics in the infinite-width limit under this scaling <cit.>, but the function space associated with this model is unaddressed except when only the penultimate layer is trained <cit.>. Training dynamics of deep linear NNs Many prior studies have examined the GD or GF dynamics of deep linear NNs <cit.>, including deriving their global convergence guarantees <cit.> and implicit bias <cit.>. The infinite-width limit of deep linear NNs under the maximum-update scaling have been studied in <cit.>. In particular, the latter work derived the limiting dynamics and implicit bias rigorously, and our Figure <ref> is inspired by figures therein. Meanwhile, we are not aware of prior studies on deep linear NNs in the mean-field limit, nor any discussions pertaining to the function space. Depth separation A number of works have studied the benefit of depth in NN approximation, with examples including <cit.>. While most of these studies quantifies the price of approximation by the width required in the NN, a few have also considered NNs in the infinite-width limit. For example, <cit.> constructs a function that does not belong to the Barron space under ReLU activation but can be represented by a finite-width three-layer NN, which has directly helped us establish a depth separation of the NHL spaces (see Proposition <ref>). In addition, <cit.> shows a function that cannot be approximated by a shallow NN but can be learned by a three-layer NN in the infinite-width limit, albeit of the type with a bottleneck layer. § CONCLUSIONS In this work, we propose to model multi-layer NNs as NHLs, thereby deriving the function space of multi-layer NNs as an infinite union of hierarchically-generated RKHS. The associated complexity measure governs both approximation and generalization errors. The training of multi-layer NNs in a feature-learning regime translates to a particular learning dynamics of the NHL. Depth separation is also shown under two examples of activation functions. Hence, our proposal emerges as a reasonable candidate for the function space of multi-layer NNs. Limitations of our work include the assumption in Section <ref> that the activation function is differentiable (thus excluding ReLU) and the GD step size is infinitesimal. Meanwhile, our work opens up interesting directions for future research, including further properties of the NHL space and the long-time behavior of the mean-field NHL dynamics at L>2. 
It also sets the stage for a concrete and rigorous probe into the popular idea that deep NNs perform hierarchical learning <cit.>. Acknowledgements The author is indebted to Joan Bruna and Eric Vanden-Eijnden for earlier discussions and thankful to Pengning Chao for feedbacks on the manuscript. apalike § SUPPLEMENTARY MATERIALS FOR SECTION <REF> §.§ Proof of Theorem <ref>(<ref>) First, for all function f, it is obvious by the property of L_p norms that that Lp(f) ≥Lp'(f) if p ≥ p'. Let p ∈ [2, ∞]. Suppose f is a function in L_p. By definition, ∃μ^(1), ..., μ^(L-1) such that f _^(L) < ∞ and ∀ l ∈ [L-1], μ^(l)_^(l), p < ∞ and ^(l+1)_μ^(l). Given any c > 0, the function cf ∈L with cf _L = |c| f _^(L), which implies that c f belongs to the same NHL as f and Lp(c f)≤ |c| Lp(f) < ∞. This shows that L_p is closed under scalar multiplication. Meanwhile, Lp(f) = Lp(c^-1 (cf)) ≤ |c|^-1Lp(c f). As a result, Lp(c f) = |c| Lp(f). This proves the absolute homogeneity of Lp. Let f' be another function in L_p. Similarly, by definition, ∃μ^(1)', ..., μ^(L-1)' such that f _^(L)' < ∞, ^(1)' = ^(1) and ∀ l ∈ [L-1], μ^(l)_^(l), p < ∞ and ^(l+1)'_μ^(l)'. Then, to show that L_p is a vector space, we need an upper bound on Lp(f + f'). For l ∈ [L-1], we define μ̃^(l)1/2μ^(l) + 1/2μ^(l)'. Thus, μ̃^(l) is supported within l∪^(l)'. We define ^(1)1 and ∀ l ∈ [L-1], ^(l+1)_μ̃^(l), and we will show that f + f' belongs to the NHL formed by (^(l))_l ∈ [L]. To do so, we need the following lemma: Let μ_1, μ_2 ∈(). Suppose μ_1 is absolutely continuous with respect to μ_2 and the Radon-Nikodym derivative d μ_1/d μ_2 is upper-bounded by M > 0 on the support of μ_1. Then it holds that _μ_1⊆_μ_2 with ·__μ_2≤√(M)·__μ_1. This lemma is proved in Appendix <ref>. For each l ∈ [L-1], noticing that μ^(l) (and μ^(l)') are absolutely continuous with respect to μ̃^(l) with d μ^(l)/d μ̃^(l)≤ 2 on ^(l) (and d μ^(l)'/d μ̃^(l)≤ 2 on ^(l)' ), we can apply Lemma <ref> to obtain that, ∀ p ≥ 2, μ̃^(l)_^(l), p^p =  ∫ h _^(l)^p μ̃^(l) (dh) =  1/2∫ h _^(l)^p μ^(l) (dh) + 1/2∫ h _^(l)^p μ^(l)' (dh) ≤   2^p/2-1∫ h _^(l)^p μ^(l) (dh) + 2^p/2-1∫ h _^(l)^p μ^(l)' (dh) , and in addition, μ̃^(l)_^(l), ∞ =  max{μ^(l)_^(l), ∞, μ^(l)'_^(l), ∞} ≤  √(2)max ( μ^(l)_^(l), ∞ + μ^(l)'_^(l), ∞ ) . Moreover, f + f' _^(L)≤ f _^(L) + f' _^(L)≤√(2) ( f _^(L) + f' _^(L)'). Therefore, ∀ p ≥ 2, ( Lp(f+f') )^p ≤   ( ∏_l=1^L-1μ̃^(l)_^(l), p^p ) f + f' _^(L)^p ≤   2^pL/2 - L + 1 ( ∏_l=1^L-1 ( μ̃^(l)_^(l), p^p + μ^(l)'_^(l), p^p ) ) ( f _^(L) + f' _^(L)' )^p < ∞ , and in addition, L∞(f+f') ≤   ( ∏_l=1^L-1μ̃^(l)_^(l), ∞ ) f + f' _^(L) ≤   2^L/2 ( ∏_l=1^L-1μ^(l)_^(l), ∞ + μ^(l)'_^(l), ∞ ) f _^(L) + f' _^(L) <  ∞ . Hence, f + f' ∈L_p, and this concludes the proof that L_p is a vector space, ∀ p ∈ [2, ∞]. §.§.§ Proof of Lemma <ref> Suppose f ∈_μ_1. By Lemma <ref>, ∃ξ∈ L^2(, μ_1) such that f = ∫ξ(h) σ(h(·)) μ_1(dh) and f __μ_1 = ξ_L^2(, μ_1). By the definition of Radon-Nikodym derivative, it then holds for all ∈ that f() =  ∫ξ(h) h()d μ_1/d μ_2(h) μ_2(dh) =  ∫ ( ξ(h) d μ_1/d μ_2(h) ) h()μ_2(dh) . Hence, there is f __μ_2^2 ≤  ∫ | ξ(h) d μ_1/d μ_2(h) |^2 μ_2(dh) =  ∫ | ξ(h) |^2 d μ_1/d μ_2(h) μ_1(dh) ≤   M ξ_L^2(, μ_1)^2 =   M f __μ_1^2 . §.§ Proof of Theorem <ref>(<ref>) In light of Theorem <ref>(<ref>), it suffices to consider the case p = 2. The basic regularity properties of functions in RKHS given below is simple to show using the reproducing property and the Cauchy-Schwartz inequality: Let be an RKHS on associated with the kernel function κ. 
Then ∀ f ∈, it holds that |f() | ≤   f _ (κ(, ))^1/2 , ∀∈ , |f() - f(') | ≤   f _ d_κ(, ') , ∀, ' ∈ , where we define d_κ(, ') (κ(, ) + κ(' , ') - 2 κ(, '))^1/2. Thus, our strategy is to prove the following statement with an inductive argument, which is given in Appendix <ref>: If ⊆(^d) and σ is non-expansive, then ∀ L ∈, ( κ^(L)(, ) )^1/2≤  ∏_l=1^Lμ^(l)_^(l), 2 , ∀∈ , d_κ^(L)(, ') ≤   - ' ∏_l=1^Lμ^(l)_^(l), 2 , ∀, ' ∈ , Suppose f ∈L, and let (^(l))_l ∈ [L] be an NHL to which it belongs. Then, Lemmas <ref> and <ref> allow us to derive that |f()| ≤   f _^(L)∏_l=1^L-1μ^(l)_^(l), 2 , ∀∈ , |f() - f(')| ≤   - ' f _^(L)∏_l=1^L-1μ^(l)_^(l), 2 , ∀, ' ∈ . Hence, if we minimize the right-hand side of (<ref>) over all NHLs, it follows from (<ref>) that f ≤L2(f); if we minimize right-hand side of (<ref>) over all NHLs, it follows from (<ref>) that f satisfies Lipschitz continuity on with Lipschitz constant L2(f). §.§.§ Proof of Lemma <ref> We can prove Lemma <ref> inductively in L. As we assume that is a subset of the unit ball of ^d, there is sup_∈κ^(0)(, ) = sup_∈^2 ≤ 1. Moreover, by the definition of κ^(0), ∀, ' ∈, d_κ^(0)(, ') = ( ^2 + ' ^2 - 2 ^⊺·' )^1/2 = - '  . Next, suppose that the statements of Lemma <ref> hold for a certain L ∈. Then, for L+1, Lemma <ref> implies that, ∀∈, κ^(L+1)(, ) ≤  ∫ | h() |^2 μ^(L+1)(dh) ≤  ∫ | h() |^2 μ^(L+1)(dh) ≤   ( ∫ h _^(L)^2 μ^(L+1)(dh) ) ∏_l=1^L μ^(l)_^(l), 2^2 =  ∏_l=1^L+1μ^(l)_^(l), 2^2 , and moreover, ∀, ' ∈, d_κ^(L+1)(, ')^2 ≤  ∫ | h() - h(') |^2 μ^(L+1)(dh) ≤  ∫ | h() - h(') |^2 μ^(L+1)(dh) ≤   d_κ^(L)(, ')^2 ∫ h _^(L+1)^2 μ^(L+1)(dh) ≤   - ' ^2 ∏_l=1^L+1μ^(l)_^(l), 2^2 . which proves the statements for L+1. §.§ Proof of Theorem <ref>(<ref>) First, we show the equivalence for different p ∈ [2, ∞]. For any functions f, it is obvious that Lp(f) ≥Lp'(f) if p ≥ p'. Then, to prove Theorem <ref>(<ref>), it suffices to show that for all f such that L2(f) < ∞, we can find an NHL (^(l))_l ∈ [L] induced by (μ^(l))_l ∈ [L-1] such that f _^(L) = L2(f) and ∀ l ∈ [L-1], μ^(l) is supported on the unit-norm sphere of ^(l) (in which case, μ^(l)_^(l), p = 1, ∀ p ∈ [2, ∞]). To this end, we introduce the following lemma, which is proved in Appendix <ref>: Suppose that σ is homogeneous. Let be a Hilbert space of functions on , and let denote the unit-norm sphere of . Given any μ∈() such that μ_, 2 = 1, there exists μ∈() such that ·__μ≤·__μ. Suppose L2(f) < ∞. By definition, we can find μ^(1), ..., μ^(L-1) such that L2(f) = f _^(L)∏_l=1^L-1μ^(l)_^(l), 2 and ∀ l ∈ [L-1], ^(l+1) = _μ^(l). Thanks to the homogeneity of σ, we may assume without loss of generality that f _^(L) = L2(f) while ∀ l ∈ [L-1], μ^(l)_^(l), 2 = 1. Then, assume for contradiction that at some l ∈ [L-1], μ^(l) is not supported entirely within on the unit-norm sphere of ^(l). This lemma implies that there exists a probability measure μ̃^(l) supported within the unit-norm sphere of ^(l) such that if we define ^(l+1)_μ^(l), then ∀ f' ∈^(l+1), f' _^(l+1)≤ f' _^(l+1). In particular, this implies that μ^(l+1)_^(l+1), 2≤μ^(l+1)_^(l+1), 2. Hence, by replacing ^(l+1) with ^(l+1), we obtain another NHL, (^(1), ..., ^(l), ^(l+1), ^(l+2), ..., ^(L)), which contains f and realizes the minimization problem in L2(f). Applying this argument to each l, we see that μ^(1), ..., μ^(L-1) can be chosen such that ∀ l ∈ [L-1], μ^(l) is supported within the unit-norm sphere of ^(l), and hence μ^(l)_^(l), p = μ^(l)_^(l), 2 = 1, ∀ p ≥ [2, ∞]. This concludes the proof of the fact that ∀ p ∈ [2, ∞], Lp = L2. 
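As an aside (not part of the proof), the normalization used above has a familiar finite-width counterpart in the simplest case of a shallow ReLU network: each neuron's incoming weight vector can be rescaled to unit norm and the scale absorbed into its outgoing weight without changing the represented function, which mirrors the reduction of each μ^(l) to the unit-norm sphere of the corresponding RKHS. The sketch below is our own illustrative Python code; the width, input dimension, and Gaussian weights are assumptions, not taken from the source.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 64
W = rng.normal(size=(m, d))   # hidden-layer weights, one row per neuron
a = rng.normal(size=m)        # output weights

def f(x, W, a):
    # mean-field-scaled shallow ReLU network: (1/m) * sum_i a_i * relu(w_i . x)
    return a @ np.maximum(W @ x, 0.0) / m

# Rescale every neuron to unit weight norm and absorb the scale into the output
# weight; positive homogeneity of ReLU guarantees the function is unchanged.
norms = np.linalg.norm(W, axis=1)
W_hat = W / norms[:, None]
a_hat = a * norms

x = rng.normal(size=d)
print(np.allclose(f(x, W, a), f(x, W_hat, a_hat)))  # True
```

The same bookkeeping — moving the magnitude of a hidden unit into the next layer — is what allows the infinite-width argument above to assume unit-norm support without loss of generality.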
Having shown the equivalence of Lp for all p ∈ [2, ∞] when σ is homogeneous, below we only need to consider the case p = 2. To show that L2 is a quasi-norm, we follow the construction in Appendix <ref>. Given f and f' ∈L_2, since σ is homogeneous, we may assume without loss of generality that f _^(L) = L2(f), f _^(L)' = L2(f'), and ∀ l ∈ [L-1], μ^(l)_^(l), 2 = μ^(l)'_^(l)', 2 = 1. Then, (<ref>) can be tightened to yield L2(f+f') ≤ 2^L/2 ( f _^(L) + f' _^(L)' ) ≤ 2^L/2 ( L2(f) + L2(f') ) , which means that L2 is a quasi-norm on L_2. Hence, to prove that L_2 is a quasi-Banach space, it remains to show that it is complete under L2 as the quasi-norm. Let (f_k)_k=1^∞ be a Cauchy sequence in L_2. By definition, ∀ϵ > 0, ∃ N(ϵ) ∈_+ such that ∀ n_1, n_2 > N(ϵ), L2(f_n_1 - f_n_2) < ϵ. For each k ∈_+, we define ϵ_k 2^-(k+1)L and n_k []N(ϵ_k). We then define g_0 f_n_1 and g_k f_n_k+1 - f_n_k for each k ∈_+. Thus, by construction, it holds that L2(g_k) < ϵ_k, ∀ k ∈_+. By the argument above, ∀ k ∈, there exists an NHL (^(l)_k)_l ∈ [L] induced by (μ^(l)_k)_l ∈ [L-1] such that g_k _^(L)_k = L2(g_k) and μ^(l)_k is supported on ^(l)_k, ∀ l ∈ [L-1]. By Proposition <ref>, (f_k)_k=1^∞ is also a Cauchy sequence in L^∞(). Since L^∞() is complete, the sequence admit a limit, f_∞, in L^∞(), and we would like to show that f_∞∈L_2. ∀ l ∈ [L-1], we define μ^(l) (2^l - 1) ∑_k=0^∞ 2^-(k+1)lμ^(l)_k . It is straightforward to show that the series converge in total variation as measures on . In addition, as ∑_k=0^∞ 2^-(k+1)l = 1/2^l - 1, it further holds that μ^(l)() = (2^l - 1) ∑_k=0^∞ 2^-(k+1)lμ^(l)_k(^(l)_k) = 1. This implies that μ^(l)∈() and allows us to define ^(l+1)_μ^(l), ∀ l ∈ [L-1]. Thus, to prove that f_∞∈L_2, it suffices to show that f_∞_^(L) < ∞ and ∀ l ∈ [L-1], μ^(l)_^(l), 2 < ∞. Notice that each μ^(l)_k is absolutely continuous with respect to μ^(l) with the Radon-Nikodym derivative upper-bounded by 2^(k+1)l / (2^l-1) on ^(l)_k. Thus, Lemma <ref> implies that ·_^(l+1)≤ 2^(k+1)l·_^(l+1)_k / (2^l-1). Hence, it holds for each l ∈ [L-2] that μ^(l+1)_^(l+1), 2^2 =  ∫ h _^(l+1)^2 μ^(l+1)(dh) =   (2^l+1 - 1) ∑_k=0^∞ 2^-(k+1)(l+1)∫ h _^(l+1)^2 μ^(l+1)_k(dh) ≤  2^l+1 - 1/2^l - 1∑_k=0^∞ 2^-(k+1)μ^(l+1)_k _^(l+1)_k^2 ≤   3 . Meanwhile, μ^(1)_^(1), 2^2 =  ∫ h _^(1)^2 μ^(1)(dh) =  ∑_k=0^∞ 2^-(k+1)∫ h _^(1)^2 μ^(1)_k(dh) =   1 . Therefore, ∀ l ∈ [L-1], it holds that μ^(l)_^(l), 2 < ∞. Meanwhile, ∀ k ∈, Lemma <ref> implies that ·_^(L)≤ 2^(k+1)(L-1)·_^(L)_k / (2^L-1-1), and hence f_∞_^(L) =  ∑_k=0^∞ g_k _^(L) ≤  ∑_k=0^∞ g_k _^(L) ≤   ( 2^L-1L2(f_n_1) + 2^(k+1)(L-1)∑_k=1^∞L2( g_k ) ) / (2^L-1-1) ≤   ( 2^L-1L2(f_n_1) + ∑_k=1^∞ 2^-(k+1) ) / (2^L-1-1) <  ∞ . This finishes the proof that f_∞∈L_2. §.§.§ Proof of Lemma <ref> Without loss of generality, we assume that μ({0}) = 0 (since otherwise we can replace μ with an μ' ∈() such that μ'({0}) = 0, μ' _, 2≤ 1 and ·__μ'≤·__μ). Let be any -valued random variable with law μ. Note that we can define a bijection between _+ × and ∖{0} via the map (c, ĥ) ↦ c ĥ, and we let (, ) denote the image of under the inverse of this map, which is a pair of random variables supported on _+ ×. We first see that ^2 = _^2 = 1. We choose a -valued random variable, , whose law has a Radon-Nikodym derivative of ^2 | with respect to the law of , i.e., ∀ĥ∈, [()](d ĥ) = ^2 | = ĥ [()](d ĥ). We can verify that () defined as such is indeed a probability measure on since ^2 | = ĥ≥ 0, and moreover, [()]() = ∫_^2 | = ĥ [()](d ĥ) = ^2 | = ^2 = 1 . Consider any function f ∈_μ. 
By Lemma <ref> and the bijection between _+ × and ∖{0}, there exists a function ξ : _+ ×→ such that f() =  ξ(, ) () = ξ(, ) () , and | ξ(, ) |^2 = f __μ^2. Then, defining a function ξ: → by ∀ĥ∈, ξ(ĥ) ξ(, ) | = ĥ/^2 | = ĥ , we see that f() =  ξ(, ) () | =  ξ(, ) | /^2 | ()^2 | =  ∫_ξ(ĥ) ĥ()^2 | = ĥ [()](d ĥ) =  ∫_ξ(ĥ) ĥ() [()](d ĥ) =  ξ() () . Thus, there is f __()^2 ≤   |ξ()|^2 =  ∫_ | ξ(, ) | = ĥ/^2 | = ĥ |^2 [()](d ĥ) =  ∫_ | ξ(, ) | = ĥ/^2 | = ĥ |^2 ^2 | = ĥ [()](d ĥ) =  ∫_ | ξ(, ) | = ĥ |^2/^2 | = ĥ [()](d ĥ) =   | ξ(, ) | |^2/^2 | By the Cauchy-Schwartz inequality, | ξ(, ) | |^2 ≤^2 | |ξ(, )|^2 |. Hence, f __()^2 ≤ |ξ(, )|^2 | = |ξ(, )|^2 = f __μ^2 . §.§ Proof of Proposition <ref> For each l ∈ [L-1], let ^(l) be a ^(l)-valued random variable with law μ^(l), and let ^(1), ..., ^(L-1) be distributed independently on a common probability space. First, we define as follows. By Lemma <ref>, there exists ξ∈ L^2(^(L-1), μ^(L-1)) such that f() = ∫ξ(h) h()μ^(L-1)(dh) and f _^(L) = ξ_L^2(^(L-1), μ^(L-1)). Then, we define ξ(^(L-1)), which is measurable with respect to ^(L-1). Hence, (<ref>) as well as the equality f _^(L)^2 = ^2 are implied. Next, ∀ l ∈ [L-2], we define ^(l) as follows. By Lemma <ref>, ∀ h ∈^(l+1), ∃ξ_h ∈ L^2(^(l), μ^(l)) such that h() = ∫ξ_h(h') h'()μ^(l)(dh') , and h _^(l+1) = ξ_h _L^2(^(l), μ^(l)) . We denote the map h ↦ξ_h by Ξ^(l), i.e., [Ξ^(l)(h)](h') ξ_h(h') for h ∈^(l+1) and h' ∈^(l), and finally define ^(l) [Ξ^(l)(^(l+1))](^(l)), which is, by definition, measurable with respect to ^(l) and ^(l+1). Then, (<ref>) and the relation ^(l+1)_^(l+1)^2 = (^(l))^2 | ^(l+1) are implied by (<ref>) and (<ref>). §.§ Proof of Lemma <ref> We will take advantage of the following general result on integral transforms and RKHS <cit.>: Given a Hilbert space _0 and a map φ: →_0, we define the kernel function κ: ×→ by κ(, ') = ⟨φ(), φ(') ⟩__0, and let denote the RKHS associated with κ. Then, a function f on belongs to if and only if ∃ξ∈_0 such that f() = ⟨ξ, φ() ⟩__0 ,  ∀∈ , and moreover, f _ = inf{ξ__0: ξ∈_0 such that (<ref>) holds} , with the infimum achieved at a unique ξ^* ∈_0. For our purpose, we would like to apply Lemma <ref> with _0 = L^2(, μ) and φ: → L^2(, μ) defined such that φ() is the function on defined by h ↦σ(h()). Indeed, L^2(, μ) is equipped with the inner product ⟨ξ_1, ξ_2 ⟩_L^2(, μ)∫ξ_1(h) ξ_2(h) μ(dh) and complete <cit.>, and hence it is a Hilbert space. Moreover, ∀∈, as σ is non-expansive, there is | σ(h()) | ≤ |σ(0)| + |h()| ≤ |σ(0)| + h _∞, and hence φ() ∈ L^2(, μ) by the assumption on μ (note that φ() is a measurable function on since the evaluation functionals are measurable with respect to the Borel sigma-algebra on ). Thus, we are able to prove Lemma <ref> by applying Lemma <ref> with _0 and φ defined as above. § SUPPLEMENTARY MATERIALS FOR SECTION <REF> §.§ Including the Bias Terms §.§.§ Multi-layer NN By including the bias term, we mean replacing (<ref>) in the definition of the multi-layer NN by h_i^(l+1)() b^(l+1)_i + 1/mjm W_ij^(l)h_j^(l)() . In the GF dynamics, the bias terms evolve according to the following ODE: b^(l)_i, t = - βζ_m, t q^(l)_i, t() , where β denotes the learning rate of the bias parameters relative to the weight parameters. If β = 0, for example, it corresponds to having untrained bias terms. §.§.§ NHL To incorporate the bias term, we replace Definition <ref> of the NHL in the following way. We consider the map ×→   (h, b) ↦   h(·) + b , If μ∈(×), we let μ_+ ∈() denote its push-forward under this map. 
Suppose each of ^(2), ..., ^(L) is an RKHS on , and ∀ l ∈ [L-1], there exists μ^(l)∈(^(l)×) such that ^(l+1) is the RKHS associated with the kernel function κ^(l)(, ') ∫h() + bh(') + bμ(dh, db) . In other words, we can write ^(l+1) = _μ^(l)_+. Then, we say that (^(l))_l ∈ [L] is an L-level NHL (with the bias terms included) induced by the sequence of probability measures, (μ^(l))_l ∈ [L-1]; in addition, a function f on belongs to the NHL if f ∈^(L). If is a Hilbert space and μ∈(×), we define μ_, p, + ( ∫ h _^p + |b|^p μ (dh, db))^1/p for p ∈ [2, ∞), and analogously for p = ∞. Given an RKHS , ∀ p ∈ [2, ∞], we can define 𝒟_p (→' ) inf_μ∈(×), _μ = 'μ_, p, + . Then, we can define the (L, p)-NHL complexity of a function in the same way through (<ref>) and (<ref>). In addition, in place of Proposition <ref>, the representation of an NHL through coupled random fields can be defined in the following way. In Definition <ref>, there exist random fields, (^(l))_l ∈ [L-1], and random variables, (^(l))_l ∈ [L-1], that are defined on a common probability space and satisfies the following properties: * The pairs (^(1), ^(1)), ..., (^(L-1), ^(L-1)) are mutually independent, and ∀ l ∈ [L-1], μ^(l) = (^(l), ^(l)). * There exist scalar random variables ^(1), ..., ^(L-2) such that ∀ l ∈ [L-2], ^(l+1)() = ^(l)^(l)() + ^(l) | ^(l+1) , where · | · denotes the conditional expectation, and ^(l+1)_^(l+1)^2 = (^(l))^2 | ^(l+1). In particular, we can choose each ^(l) to be measurable with respect to ^(l) and ^(l+1); * There exists a scalar random variable measurable with respect to ^(L-1) such that f() = ^(L-1)() + ^(L-1) , and f _^(L)^2 = ^2. In the mean-field dynamics, the evolution of the bias term is governed by d/dt^(l)_t = - βζ_t tl() tl() + tl §.§ Proof of Theorem <ref> First, by the definition of ^(1), there is μ^(1)_m _1, p = M^(1)_m, p. For l ∈ [L-2], Lemma <ref> implies h_i^(l+1)_l+1_m^2 ≤1/mjm |W^(l)_i, j|^2, and so μ^(l+1)_m _l+1, p^p = 1/mim h_i^(l+1)_l+1^p ≤ (M^(l+1)_m, p)^p. Finally, Lemma <ref> also implies f _L, m≤ M^(L)_m, p. Together, they prove the theorem. §.§ Proof of Theorem <ref> Let f be a function on with L∞(f) < ∞. Then, as introduced in Appendix <ref>, there exist probability measures μ^(1), ..., μ^(L-1) and deterministic functions Ξ^(1), ..., Ξ^(L-1) satisfying the conditions therein. Our strategy will be to consider a random approximation of f using a width-m NN that achieves low a approximation error in expectation. For each l ∈ [L-1], we let {^(l)_i }_i ∈ [m] be m independent samples in ^(l) from μ^(l). We define ^(1)_i ^(1)_i. Then, for l ∈ [L-2], writing ^(l)_i,jΞ^(l)(^(l+1)_i, ^(l)_j), we iteratively define ^(l+1)_i() 1/mjm^(l)_i,j^(l)_j() , and finally, writing _i Ξ^(L-1)(^(L-1)_i), we define _m() 1/mim^(L-1)_i() . ∀ l ∈ [L-1], ∀ i ∈ [m], ∀∈, almost surely, (^(l)_i() - ^(l)_i() )^2 | ^(l)_i ≤l-1/m∏_l'=1^lμ^(l')_^(l'), ∞^2 . The lemma is proved in Appendix <ref>. 
Thus, ∀∈, we have | _m() - f() |^2≤ (I) + (II) , where (I)   ( _m() - 1/mim_i ^(L-1)_i() )^2 =   ( 1/mim_i ( ^(L-1)_i() - ^(L-1)_i() ) )^2 ≤   ( 1/mim (_i)^2 ) ( 1/mim ( ^(L-1)_i() - ^(L-1)_i() )^2 ) ≤   ( 1/mim (_i)^2 ) ( 1/mim ( ^(L-1)_i() - ^(L-1)_i() )^2 | ^(L-1)_i) ≤ L-2/m ( ∏_l=1^L-1μ^(l)_^(l), ∞^2 ) 1/mim ( Ξ^(L-1) (^(L-1)_i ) )^2 ≤ L-2/m f _^(L)^2 ∏_l=1^L-1μ^(l)_^(l), ∞^2 , where on the third line we use the Cauchy-Schwartz inequality, on the fourth line we use that _i is measurable with respect to ^(L-1)_i, on the fifth line we use Lemma <ref>; and on the other hand, (II)   ( 1/mjm_i^(L-1)_i() - f() )^2 =  1/mim ( Ξ^(L-1)(^(L-1)_i) ^(L-1)_i() - Ξ^(L-1)(_i^(L-1)) ^(L-1)_i() )^2 ≤  1/m ( Ξ^(L-1)(^(L-1)) )^2 ( ^(L-1)() )^2  , where on the second and third lines, we use the independence among ^(L-1)_1, ..., ^(L-1)_m and their equivalence in law. By Theorem <ref>(<ref>), there is sup_∈ |^(L-1)_i()| ≤L-12(^(l)_i) = ^(L-1)_i _^(L-1)∏_l'=1^L-2μ^(l')_^(l'), 2. Therefore, it holds that ( ^(L-1)_i() )^2 ≤ ( ^(L-1)_i() )^2 ≤^(L-1)_i _^(L-1)^2 ∏_l'=1^L-2μ^(l')_^(l'), 2^2, which is almost surely bounded by ∏_l'=1^L-1μ^(l')_^(l'), ∞^2. Thus, (II) ≤  1/m ( ∏_l'=1^L-1μ^(l')_^(l'), ∞^2 ) ( Ξ^(L-1)(^(L-1)) )^2 ≤1/m f _^(L)^2 ∏_l'=1^L-1μ^(l')_^(l'), ∞^2 . Together, (<ref>) and (<ref>) imply that, ∀∈, | _m() - f() |^2≤L-1/m f _^(L)^2 ∏_l'=1^L-1μ^(l')_^(l'), ∞^2 = L-1/mL∞(f)^2 . Hence, ∀ν∈(), ∼ν| _m() - f() |^2 = ν| _m() - f() |^2≤L-1/mL∞(f)^2 . Thus, as a consequence of Markov's inequality, there exists a realization of (^(l)_i)_l ∈ [L-1], i ∈ [m] under which ∼ν| _m() - f() |^2≤L-1/mL∞(f)^2 . §.§.§ Proof of Lemma <ref> For l = 1, there is ^(1)_i = ^(1)_i, and hence ^(1)_i() - ^(1)_i() = 0, ∀∈. Suppose that the statement holds for some l ∈ [L-2]. Then, for level l+1, we can write ( ^(l+1)_i() - ^(l+1)_i() )^2 | ^(l+1)_i ≤   (I) + (II) , where (I)   ( ^(l+1)_i() - 1/mjm^(l)_i,j^(l)_j() )^2 | ^(l+1)_i =   ( 1/mjm^(l)_i,j ( ^(l)_j() - ^(l)_j() ) )^2 | ^(l+1)_i ≤   ( 1/mjm ( ^(l)_i,j )^2 ) ( 1/mjm ( ^(l)_j() - ^(l)_j() )^2 ) | ^(l+1)_i ≤   ( 1/mjm ( ^(l)_i,j )^2 ) ( 1/mjm ( ^(l)_j() - ^(l)_j() )^2 ) | {^(l)_j }_j ∈ [m] | ^(l+1)_i ≤  l-1/m ( ∏_l'=1^l^(l')_^(l'), ∞^2 ) ( 1/mjm ( ^(l)_i,j )^2 ) | ^(l+1)_i ≤  l-1/m ( ∏_l'=1^l^(l')_^(l'), ∞^2 ) ^(l+1)_i _^(l+1)^2 ≤  l-1/m ( ∏_l'=1^l+1^(l')_^(l'), ∞^2 ) , where on the fourth line, we use that 1) ^(l)_i,j is measurable with respect to ^(l+1)_i and ^(l)_j, as well as 2) ^(l)_j and ^(l)_j are independent from ^(l+1)_i; and on the fifth line we use the inductive hypothesis; and on the other hand, (II)   ( 1/mjm^(l)_i,j^(l)_j() - ^(l+1)_i() )^2 | ^(l+1)_i =   ( 1/mjm ( Ξ^(l)(^(l+1)_i, ^(l)_j) ^(l)_j() - Ξ^(l)(^(l+1)_i, ^(l)) ^(l)() | ^(l+1)_i ) )^2 | ^(l+1)_i ≤  1/m ( Ξ^(l)(^(l+1)_i, ^(l)) )^2 ( ^(l)() )^2 | ^(l+1)_i  . By Theorem <ref>(<ref>), there is sup_∈ |^(l)_j()| ≤L2(^(l)_j) = ^(l)_j _^(l)∏_l'=1^l-1^(l')_^(l'), 2. Therefore, there is ( ^(l)_j() )^2 ≤ ( ^(l)_j() )^2 ≤^(l)_j _^(l)^2 ∏_l'=1^l-1^(l')_^(l'), 2^2, which is almost surely bounded by ∏_l'=1^l^(l')_^(l'), ∞^2. Thus, (II) ≤  1/m ( ∏_l'=1^l^(l')_^(l'), ∞^2 ) ( Ξ^(l)(^(l+1)_i, ^(l)) )^2 | ^(l+1)_i ≤1/m^(l+1)_i _^(l+1)^2 ∏_l'=1^l^(l')_^(l'), ∞^2 , where the right-hand side is bounded almost surely by 1/m∏_l'=1^l+1^(l')_^(l'), ∞^2. Therefore, combining the bounds for (I) and (II), we get ( ^(l+1)_i() - ^(l+1)_i() )^2 | ^(l+1)_i ≤ (l-1/m + 1/m) ∏_l'=1^l+1^(l')_^(l'), ∞^2 ≤l/m∏_l'=1^l+1^(l')_^(l'), ∞^2 , which proves the inductive hypothesis at level l+1. 
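As an informal numerical companion to the approximation bound just established (not part of the proof): in the shallow L = 2 case, the width-m construction above is a plain Monte Carlo average of ξ(w)σ(w·x) over i.i.d. draws of w, so its mean squared error at a fixed input should shrink like 1/m. The sketch below is our own Python code; the particular ξ, the Gaussian sampling measure, and the widths are illustrative assumptions rather than anything specified in the source.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
x = rng.normal(size=d)
x /= np.linalg.norm(x)

def xi(W):                                   # a bounded coefficient function of w
    return np.tanh(W[:, 0])

def phi(W):                                  # relu(w . x) for each sampled w
    return np.maximum(W @ x, 0.0)

# Reference value of f(x) = E_w[xi(w) * relu(w . x)] from one very large sample.
W_big = rng.normal(size=(200_000, d))
f_ref = np.mean(xi(W_big) * phi(W_big))

def mse(m, trials=2000):
    errs = np.empty(trials)
    for t in range(trials):
        W = rng.normal(size=(m, d))          # one width-m random approximant
        errs[t] = (np.mean(xi(W) * phi(W)) - f_ref) ** 2
    return errs.mean()

for m in (10, 100, 1000):
    print(m, mse(m))                         # shrinks roughly tenfold per step
```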
§ SUPPLEMENTARY MATERIALS FOR SECTION <REF> §.§ Proof of Theorem <ref> When σ is homogeneous, we see that L can be alternatively expressed as L(f) =  inf_μ^(1), ..., μ^(L-1) f _^(L) s.t. μ^(l)_l, 2 = 1 , ∀ l ∈ [L-1] In the following, for brevity, we will write sup_μ^(l) and sup_ξ for sup_μ^(l)∈(l) μ^(l)_^(l), 2≤ 1  and sup_ξ∈ L^2(L-1, μ^(L-1)) ξ_L^2(L-1, μ^(L-1))≤ 1 , respectively. Recall that the empirical Rademacher complexity is defined as _S((L)) = _ [1/nsup_f_L≤ 1knτ_k f(_k) ] For any λ > 0, we consider the function g_λ: → defined by g_λ(u) = exp(λ u), which is positive, monotonically increasing and convex. Thus, using Jensen's inequality, we can write n _S((L)) ≤  1/λlog ( g_λ ( _ [ sup_f_L≤ 1knτ_k f(_k) ] ) ) ≤  1/λlog ( _ [ g_λ ( sup_f_L≤ 1knτ_k f(_k) ) ] ) ≤  1/λlogℳ^(L)_λ , where we define, ∀ l ∈ [L], ℳ^(l)_λ_ [ g_λ ( sup_f_l≤ 1 | knτ_k f(_k) | ) ]. ℳ^(L)_λ≤ 2^L-1_ [ g_λ ( knτ_k _k ) ] . This lemma is proved in Appendix <ref>. Then, if we choose λ = √(2 (L-1) log (2))/√(kn_k _2^2), it is shown in <cit.> that 1/λlog ( 2^L-1_ [ g_λ ( knτ_k _k ) ] ) ≤ (√(2 L log (2)) + 1 ) √(kn_k _2^2) , which yields the desired result. §.§.§ Proof of Lemma <ref> We see that sup_f_L≤ 1 | knτ_k f(_k) | ≤  sup_μ^(1), ..., μ^(L-1), ξ | knτ_k ∫ξ(h) h(_k)μ^(L-1)(dh) | ≤  sup_μ^(1), ..., μ^(L-1), ξ | ∫knτ_k ξ(h/|ξ(h)|h(_k)/h_L-1 |ξ(h)| h_L-1μ^(L-1)(dh) | By the Cauchy-Schwartz inequality and the homogeneity of σ, there is | ∫knτ_k ξ(h/|ξ(h)|h(_k)/h_L-1 |ξ(h)| h_L-1μ^(L-1)(dh) | ≤   ( sup_h ∈L-1 | knτ_k ξ(h)/|ξ(h)|h(_k)/h_L-1 | ) ∫ |ξ(h)| h_L-1μ^(L-1)(dh) ≤   ( sup_ĥ_L-1≤ 1 | knτ_k ĥ(_k) | ) (∫ |ξ(h)|^2 μ^(L-1)(dh) )^1/2 (∫h_L-1^2 μ^(L-1)(dh) )^1/2 , and hence sup_f_L≤ 1 | knτ_k f(_k) | ≤sup_ĥ_L-1≤ 1 μ^(1), ..., μ^(L-2) | knτ_k ĥ(_k) | = sup_ĥ_L-1 | knτ_k ĥ(_k) | , if L ≥ 3 , sup_ĥ_1 | knτ_k ĥ(_k) | , if L = 2 . Notice that, since g is positive, there is g(|u|) ≤ g_λ(u) + g_λ(-u). Therefore, when L ≥ 3, ℳ^(L)_λ≤  _ [ g_λ ( sup_f_L≤ 1 | knτ_k f(_k) | ) ] ≤  _ [ sup_ĥ_L-1≤ 1 g_λ ( | knτ_k ĥ(_k) | ) ] ≤  _ [ sup_ĥ_L-1≤ 1 g_λ (knτ_k ĥ(_k) ) ] + _ [ sup_ĥ_L-1≤ 1 g_λ (- knτ_k ĥ(_k) ) ] ≤  _ [ g_λ (sup_ĥ_L-1≤ 1knτ_k ĥ(_k) ) ] + _ [ g_λ (sup_ĥ_L-1≤ 1kn (-τ_k) ĥ(_k) ) ] ≤   2 _ [ g_λ (sup_ĥ_L-1≤ 1knτ_k ĥ(_k) ) ] ≤   2 _ [ g_λ (sup_ĥ_L-1≤ 1knτ_k ĥ(_k) ) ] where for the fifth line we use the symmetry of the Rademacher distribution, and for the sixth line we the following version of the Contraction Lemma given by Equation 4.20 in <cit.>, leveraging the monotonicity and convexity of g. Hence, we derive that ℳ^(L)_λ≤ 2 ℳ^(L-1)_λ . Thus, by induction, it holds that ℳ^(L)_λ≤   2^L-1ℳ^(1)_λ =   2^L-1_ [ g_λ ( sup_f_1≤ 1 | knτ_k f(_k) | ) ] =   2^L-1_ [ g_λ ( sup__2 ≤ 1 | knτ_k ^⊺·_k | ) ] ≤   2^L-1_ [ g_λ ( knτ_k _k ) ]  , which proves the lemma. § SUPPLEMENTARY MATERIALS FOR SECTION <REF> §.§ Proof of Theorem <ref> Notations If (u_m)_m ∈ℕ_+ and (u_m')_m ∈ℕ_+ are two sequences of non-negative random variables, we write u_m = o_ℙ(u_m') if it holds almost surely that ∀ϵ < 0, ∃ M > 0 such that ∀ m > M, u_m ≤ϵ u_m'. Preliminaries We will start by defining the limiting dynamics in an alternative way in terms of flow maps, which is also made more general by taking into account the bias terms introduced in Appendix <ref>. For t ≥ 0, let _t, ^(1)_t, ..., ^(L-2)_t, ^(1)_t, ..., and ^(L-1)_t be random variables, let _t be a d-dimensional random vector, and let t1, ..., tL-1 and t1, ..., tL-1 be random fields on , which all depend on time and are defined as follows. 
At initial time, we let _0, _0, ^(1)_0, ..., ^(L-1)_0 be distributed independently with laws , , ρ_b^(1), ..., ρ_b^(L-1), respectively. Then, we define tl, tl, tl, tl and _t alternatively as: tl() =  t1(, _0, 01) , l = 1 tl(, 0l) , l ∈{2, ..., L-2} tL-1(, , 0L-1) , l = L-1 tl() =  t1(, _0, 01) , l = 1 tl(, 0l) , l ∈{2, ..., L-2} tL-1(, , 0L-1) , l = L-1 tl =  t1(_0, 01, 02) , l = 1 tl(0l, 0l+1) , l ∈{2, ..., L-2} tL-2(, 0L-2, 0L-1) , l = L-2 , tl =  t1(_0, 01) , l = 1 tl(0l) , l ∈{2, ..., L-2} tL-1(, 0L-1) , l = L-1 _t =   Z_t(_0, 01) , _t =   A_t(_0, 0L-1) . by introducing the following (deterministic) functions: tl, tl :  ×^d ×→ , l = 1, ×→ , l ∈{2, ..., L-2} , ××→ , l = L-1 , tl :  ^d ××→ , l = 1 , ×→ , l ∈{2, ..., L-3 } , ××→ , l = L-2 , tl :  ^d ×→ , l = 1 , → , l ∈{2, ..., L-2 } , ×→ , l = L-1 , Z_t :  ^d ×→^d , A_t :  ×→ , Then can be interpreted as flow maps and are defined as follows: ∀ t ≥ 0, * For l ∈ [L-1], tl is defined by, ∀∈, ∀∈^d, ∀ a, b ∈, t1(, , b) =   Z_t(, b)^⊺· , t2(, b) =  t1(_0, 01, b) t1(, _0, 01) + t1(_0, 01) , tl+1(, b) =  tl(0l, b) tl(, 0l) + tl(0l) , ∀ l ∈{2, ..., L-3} , tL-1(, a, b) =  tL-2(a, 0L-2, b) tL-2(, 0L-2) + tL-2(0L-2) . * For l ∈ [L-1], tl is defined by, ∀∈, ∀∈^d, ∀ a, b ∈, tL-1(, a, b) =   A_t(a, b) , tL-2(, b) =  tL-2(, b, 0L-1) tL-1(, 0L-1) tL-1(, ,0L-1)  + tL-1(,0L-1) , tl-1(, b) =  tl-1(b, 0l) tl(, 0l) tl(, 0l) + tl(0l), ∀ l ∈{1, ..., L-2} , t1(, , b) =  t1(, b, 02) t2(, 02) t2(, 02) + t2(02) . * For l ∈ [L-2], tl is defined by, ∀∈^d, ∀ a, b ∈ t1(, b, b') =   - ζ_t() t2(, b') t2(, b') + t2(b')t1(, , b) + t1(, b) , tl(b, b') =   - ζ_t() tl+1(, b') tl+1(, b') + tl+1(b')  tl(, b) + tl(b) , ∀ l ∈{2, ..., L-2} , tL-2(a', b, b') =   - ζ_t() tL-1(, a', b') tL-1(, a', b') + tL-1(a', b')  tl(, b) + tl(b) , together with the initial conditions 01(, b, b') =   0 , 0l(b, b') =   0  , ∀ l ∈{2, ..., L-2} , 0L-2(a', b, b') =   0 . * For l ∈ [L-1], tl is defined by, ∀∈^d, ∀ a, b ∈, t1(, b) =   -βζ_t() t1(, , b) t1(, , b) + t1(, b) , tl(b) =   -βζ_t() tl(, b) tl(, b) + tl(b) , ∀ l ∈{2, ..., L-2 } , tL-1(a, b) =   -βζ_t() tL-1(, a, b) tL-1(, a, b) + tL-1(a, b) , together with the initial conditions 01(, b) =   b , 0l(b) =   b , ∀ l ∈{2, ..., L-2 } , 0L-1(a, b) =   b . * Z_t is defined by, ∀∈^d, ∀ b ∈, Z_t(, b) = - ζ_t() t1(, , b) t1(, , b) + t1(, b) , together with the initial condition Z_0(, b) =  . * A_t is defined by, ∀ a, b ∈, A_t(a, b) = - ζ_t() tl(, a, b) + tl(a, b) , together with the initial condition A_0(a, b) = a . Lastly, f_t and ζ_t are defined in the same way as in Section <ref>. Defined in this way, one can verify that ^(l)_0 0, ∀ l ∈ [L-2], and furthermore, for t ≥ 0, the dynamics of _t, _t, ^(1)_t, ..., ^(L-2)_t, ^(1)_t, ..., ^(L-1)_t satisfy the following equations: _t =   - ζ_t() tL-1() + tL-1 , _t =   - ζ_t() t1() t1() + t1 , ∀ l ∈ [L-2] , tl =   - ℰ_{ζ_t() tl+1() tl+1()tl() + tl} , ∀ l ∈ [L-1] , ^(l)_t =   - βζ_t tl() tl() + tl , and moreover, the random fields satisfy t1() = _t^⊺·, tL-1() = _t, and ∀ l ∈ [L-2], ^(l+1)_t() =  ^(l)_t tl() + tl | ^(l+1)_t . tl() =  ^(l)_t tl+1() | tl . From (<ref>), we derive that, ∀ l ∈ [L-2], tl = - ∫_0^t ζ_s() sl+1() sl+1() + sl+1sl() + sl ds . When ρ_b^(1) = ... = ρ_b^(L-1) = δ_0 and β = 0, there is tl = 0, ∀ l ∈ [L-1]. Then, substituting (<ref>) into (<ref>) and (<ref>), we obtain (<ref>) and (<ref>). Hence, the definitions given by (<ref>) are consistent with the mean-field dynamics described in Section <ref>. 
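Before setting up the coupling used in the propagation-of-chaos argument below, the flavor of the convergence being formalized can already be seen at initialization, where the empirical layer-wise kernels are averages of i.i.d. terms. The following sketch is our own illustrative Python code (the tanh activation, Gaussian initialization, and the widths are assumptions); it is an informal aside and plays no role in the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
x, xp = rng.normal(size=d), rng.normal(size=d)
sigma = np.tanh

def empirical_kernel(m):
    # Empirical second-layer kernel at initialization for a width-m first layer:
    # (1/m) * sum_j sigma(w_j . x) * sigma(w_j . x').
    W = rng.normal(size=(m, d)) / np.sqrt(d)
    return np.mean(sigma(W @ x) * sigma(W @ xp))

ref = empirical_kernel(1_000_000)            # proxy for the infinite-width value
for m in (100, 1000, 10000):
    samples = np.array([empirical_kernel(m) for _ in range(200)])
    print(m, np.mean((samples - ref) ** 2))  # decays roughly like 1/m
```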
To facilitate the propagation-of-chaos argument in the rest of the proof, we additionally define it =   Z_t(i0, i01) it =   A_t(a_i, 0, i0L-1) itl =  t1(i0, i01) , l = 1 tl(i0l) , l ∈{2, ..., L-2 } tL-1(a_i, i0L-1) , l = L-1 itl() =  t1(, i0, i01) , l = 1 tl(, i0l) , l ∈{2, ..., L-2} tL-1(, a_i, i0L-1) , l = L-1 itl() =  t1(, i0, i01) , l = 1 tl(, i0l) , l ∈{2, ..., L-2} tL-1(, a_i, i0L-1) , l = L-1 ijtl =  t1(j0, j01, i02) , l = 1 tl(j0l, i0l+1) , l ∈{2, ..., L-3 } tL-2(a_i, j0L-2, i0L-1) , l = L-2 . Main proof Given a function g on ^N, by the definition of each m,tl and the triangle inequality, there is | ∫ g(h(1), ..., h(N)) μ^(l)_m, t(dh) - g(tl(1), ..., tl(N)) | =   | 1/mim g(itl(1), ..., itl(N)) - g(tl(1), ..., tl(N)) | ≤  (I) + (II) , where (I)   | 1/mim g(itl(1), ..., itl(N)) - g(tl(1), ..., tl(N)) | (II)   | 1/mim g(itl(1), ..., itl(N)) - 1/mim g(itl(1), ..., itl(N)) | For the first term, there is (I) =   | 1/mim g( t1(1, _i, 0, i01), ..., t1(N, _i, 0, i01)) - g( t1(1, _0, 01), ..., t1(N, _0, 01) | , l = 1 | 1/mim g( tl(1, i0l), ..., t1(N, i0l)) - g( t1(1, 0l), ..., t1(N, 0l) | , l ∈{2, ..., L-2 } | 1/mim g( tL-1(1, a_i, i0L-1), ..., t1(N, a_i, i0L-1)) - g( tL-1(1, , 01), ..., t1(N, , 0L-1) | , l = L-1 Since each i0l, _0 and a_i are independent realizations of 0l, _0 and , and moreover, each tl is a Lipschitz function at any finite t ≥ 0 (due to the smooth dependence of solutions of ODEs to its initial condition), we know from the law of large numbers that (II) = o_(1). For the second term, if g ∈Lip(^N), then (II) = | 1/mim g(itl(1), ..., itl(N)) - 1/mim g(itl(1), ..., itl(N)) | ≤ ( 1/NkN | Δmtl (k) |^2 )^1/2 , where we define, ∀ l ∈ [L-1], ∀∈, Δmtl() ( 1/mjm | itl() - itl() |^2 )^1/2 . 1/NkN | Δmtl (k) |^2 = o_(1). This lemma is proved in Appendix <ref> using a propagation-or-chaos argument <cit.>, and it implies that (II) = o_(1). This concludes this proof of Theorem <ref>. §.§.§ Proof of Lemma <ref> We additionally define Δζ_m, t()   | ζ_m, t() - ζ_t()| Δ_m, t   ( 1/mjm | jt - jt|^2 )^1/2 , Δ a_m, t   ( 1/mim | a_i, t - ã_i, t|^2 )^1/2 , Δmtl   ( 1/mim | itl - itl|^2 )^1/2 , ∀ l ∈ [L-1] , Δmtl()   ( 1/mjm | itl() itl() + itl - itl() itl() + itl |^2 )^1/2 , ∀ l ∈ [L-1] , Δ W_m, t^(l)   ( 1/m^2i, jm | ijtl - ij0l - W̃^(l)_i, j, t)|^2 )^1/2 , ∀ l ∈ [L-2] =   ( 1/m^2 W_t^(l) - W_0^(l) - W_t^(l)_^2 )^1/2 ≥   ( 1/m^2 (W_t^(l) - W_0^(l) - W_t^(l))^⊺ (W_t^(l) - W_0^(l) - W_t^(l)) _2 )^1/2 , and finally, Δ_m, t =  sup_k ∈ [n]Δζ_m, t(_k) + Δ_m, t + Δ a_m, t   + lL-1 ( sup_k ∈ [n]Δmtl(_k) + sup_k ∈ [n]Δmtl(_k) + Δmtl ) + lL-2Δ W_m, t^(l) , At initial time, we see that Δ_m, 0 = 0. For t ≥ 0, we will bound its growth by examining each term on the right-hand side. * Δmtl When l=1, Δmt1() = O(Δ_m, t + Δmt1) For l ∈{2, ..., L-3 }, itl+1() - itl+1() =   (itl+1 - itl+1) + 1/mjmij0ljtl + jtl   + 1/mjm ( ij0l - ij0l - ijtl ) jtl + jtl   + ( 1/mjmijtljtl + jtl - 1/mjmijtljtl + jtl ) + ( 1/mjmijtljtl + jtl - tl+1(, i0l+1) ) By the Marchenko-Pastur law of the eigenvalues of sample covariance matrices <cit.>, under the assumption that has finite fourth moment, 1/m (W_0^(l))^⊺ W_0^(l) converges almost surely to some finite number, and hence 1/mim | 1/mjmij0ljtl + jtl |^2 =   O ( 1/m ( 1 + Δ h^(l)_m, t() + Δ b^(l)_m, t )^2 ( 1/m (W_0^(l))^⊺ W_0^(l) ) ) =   o_ (1 + (Δ h^(l)_m, t())^2 ) . 
In addition, 1/mim | 1/mjm ( ij0l - ij0l - ijtl ) jtl + jtl |^2 ≤  1/m^2 (W_t^(l) - W_0^(l) - W_t^(l))^⊺ (W_t^(l) - W_0^(l) - W_t^(l)) _2 1/mjm | jtl + jtl |^2 =   O ( (Δ W^(l)_m, t)^2 (Δ h^(l)_m, t() + Δjtl)^2 ) Moreover, since the deterministic maps tl and tl are Lipschitz at any finite t ≥ 0, we can deduce from the law of large numbers that ∀ i ∈ [m], | 1/mjmijtljtl + jtl - tl+1(, i0l+1) | =   | 1/mjmtl(i0l+1, j0l) tl(, j0l) + tl(j0l)   - tl(i0l+1, 0l) tl(, 0l) + tl(0l) | =   o_(1) . Thus, (Δmtl+1())^2 =   O ( (Δtl+1)^2 + (Δ W_t^(l))^2 + (Δmtl())^2 + (Δtl)^2 )  + o_ (1 + (Δ h^(l)_m, t() + Δtl)^2 ) and so Δmtl+1() = O (Δmtl+1 + Δ W_m, t^(l) + Δmtl() + Δmtl ) + o_ (1 + Δ h^(l)_m, t() + Δmtl ) . With a similar argument, we can obtain the same bound for Δmt2 and Δmtl+1. So, by induction, ∀ l ∈ [L-1], ∀∈, Δmtl() =   O ( Δ_m, t + l'lΔmtl' + l'l-1Δ W_t^(l') ) + o_(1 + Δ h^(l)_m, t() + Δ b^(l)_m, t) =   O(Δ_m, t) + o_(Δ_m, t)  . * Δmtl For l = L-1, | itL-1() itL-1() + itL-1 - itL-1() itL-1 + itL-1 | =   | a_i, titL-1() + itL-1 - ã_i, titL-1 + itL-1 | , and hence ΔmtL-1() = O (Δ a_m, t + ΔmtL-1() + ΔmtL-1)) . For l ∈{3, ..., L-2 }, jtl-1() jtl-1() + jtl-1 - jtl-1jtl-1() + jtl-1 =   ( 1/mimij0l-1itlitl + itl ) jtl-1 + jtl-1  + ( 1/mim (ijtl-1 - ij0l-1- ijtl-1 ) itlitl + itl ) jtl-1 + jtl-1  + ( 1/mimijtl-1 (itlitl + itl - itlitl + itl ) ) jtl-1 + jtl-1  + ( 1/mimijtl-1itlitl + itl ) ( jtl-1 + jtl-1 - jtl-1 + jtl-1 )  + ( 1/mimijtl-1itlitl + itl - jtl-1 ) jtl-1 + jtl-1 . Note that ∀ j ∈ [m], by the Lipschitzness of the deterministic maps at finite t and the law of large numbers, | 1/mimijtl-1itlitl + itl - jtl-1 | ≤   | 1/mimtl-1(i0l, j0l-1) tl(, i0l) tl(, i0l) + tl(i0l)   - tl-1(0l, j0l-1) tl(, 0l) tl(, 0l) + tl(0l) | =   o_(1) . Via other techniques analogous to those used in part (<ref>), we see that Δmtl-1() =   O (Δmtl() + Δmtl-1() + Δmtl-1 + Δ W_m, t^(l)   + (Δmtl() + Δmtl-1() + Δmtl-1 + Δ W_m, t^(l))^2 ) + o_(1 + Δmtl()) Similarly, we can obtain the same bound for ΔmtL-2() and Δmt1(). Thus, by induction, Δmtl = O ( Δ_m, t + (Δ_m, t)^2^L-l-1 ) + o_ (1 + Δ_m, t + (Δ_m, t)^2^L-l-2 ) * Δmtl For l ∈ [L-1], (itl - itl) =   - β (ζ_m, t() itlitl + itl - ζ_t() itlitl + itl ) =   - βζ_m, t() (itlitl + itl - itlitl + itl )  - β(ζ_m, t() - ζ_t()) itlitl + itl Thus, ( 1/mim | (itl - itl) |^2 )^1/2 =   O ( Δmtl() + Δζ_m, t() + Δζ_m, t() Δmtl() ) , which implies that mtl =   O ( mtl() + Δζ_m, t() + Δζ_m, t() mtl() ) =   O ( Δ_m, t + (Δ_m, t)^2 ) * Δ W_m, t^(l) For l ∈ [L-2], (ijtl - ijtl) =   - ζ_m, t() itl+1itl+1 + itl+1jtl + jtl  + ζ_t() itl+1itl+1 + itl+1jtl + jtl =   - ζ_m, t() (itl+1itl+1 + itl+1 - itl+1itl+1 + itl+1 ) jtl + jtl   - ζ_m, t() itl+1itl+1 + itl+1 (jtl + jtl - jtl + jtl )   - (ζ_m, t() - ζ_t()) itl+1jtl + jtl Thus, 1/m^2i,jm ( (ijtl - ijtl) )^2 =   O ( (1 + Δζ_m, t()) (Δmtl() + Δmtl-1() + Δmtl-1)^2 + Δζ_m, t() ) and so Δ W_m, t^(l) =   O ( (1 + Δζ_t()) (Δmtl() + Δmtl-1() + Δmtl-1 + Δζ_m, t() ) =   O ( Δ_m, t + (Δ_m, t)^2 ) . * Δ_m, t Δ_m, t =   O ( Δζ_m, t() + Δmt1() + Δζ_m, t() Δmt1() ) =   O ( Δ_m, t + (Δ_m, t)^2 ) . * Δ a_m, t Δ a_m, t =   O ( Δζ_m, t() + ΔmtL-1() + ΔmtL-1 + Δζ_m, t() (ΔmtL-1() + ΔmtL-1) ) =   O ( Δ_m, t + (Δ_m, t)^2 ) . * Δζ_m, t Δζ_m, t() =   O (Δ a_m, t + ΔmtL-1() + ΔmtL-1 + Δ a_m, t (ΔmtL-1() + ΔmtL-1 ) ) =   O ( Δ_m, t + (Δ_m, t)^2 ) . Therefore, for the duration in which Δ_m, t≤ 1, it holds that Δ_m, t =   O(Δ_m, t) + o_(1) . 
Hence, with Grönwall's inequality, it holds while Δ_m, t≤ 1 that Δ_m, t = o_(1) Thus, for any finite t ≥ 0, when m is large enough, we can always ensure that Δ_m, t≤ 1. Thus, (<ref>) holds for all finite t ≥ 0. Finally, applying (<ref>) to each ∈{1, ..., N} , we arrive at Lemma <ref>. §.§ Derivation of the Training Dynamics of Deep Linear NNs In the linear NN case, κ^(l)_t, s(, ') =  tl() sl(') =  [ (0l() - ∫_0^t ”ζ_r(”) κ^(l-1)_t, r(, ”) rl(”) dr ) (0l(') - ∫_0^s ”ζ_r(”) κ^(l-1)_s, r(, ”) rl(”) dr ) ] =  κ^(l)_0, 0(, ') + ∫_0^t ∫_0^s ”, ”'ζ_r(”) ζ_p(”') γ^(l)_r, p(”, ”') κ^(l-1)_t, r(, ”) κ^(l-1)_s, p(', ”') dr   dp   - ∫_0^t ”ζ_r(”) κ^(l-1)_t, r(, ”) l0l(') rl(”) dr   - ∫_0^s ”ζ_r(”) κ^(l-1)_s, r(', ”) l0l() rl(”) dr Since 0l() rl(”) =  0l(') ∫_0^r ”'ζ_p(”') γ^(l+1)_r, p(”, ”') pl(”') dp =  ∫_0^r ”'ζ_p(”') γ^(l+1)_r, p(”, ”') κ^(l)_0, p(, ”') dp  , we then have κ^(l)_t, s(, ') =  κ^(l)_0, 0(, ') + ∫_0^t ∫_0^s ”, ”'ζ_r(”) ζ_p(”') γ^(l)_r, p(”, ”') κ^(l-1)_t, r(, ”) κ^(l-1)_s, p(', ”') dr   dp   - ∫_0^t ”, ”'ζ_r(”) ζ_p(”') γ^(l+1)_r, p(”, ”') κ^(l-1)_t, r(, ”) κ^(l)_0, p(', ”') dr   - ∫_0^s ”, ”'ζ_r(”) ζ_p(”') γ^(l+1)_r, p(”, ”') κ^(l-1)_s, r(', ”) κ^(l)_0, p(, ”') dr  . Notice that κ^(l)_0, p(, ') = 0, ∀ l > 1, ∀ p ≥ 0 while κ^(1)_0, 0(, ') = κ^(0)_0, p(, ') = ^⊺·', ∀ p ≥ 0. Thus, using the linearity, we can derive (<ref>) for l ∈ [L-2], and moreover, K^(1)_t, s =   1 + ∫_0^t ∫_0^s c^(1)_r, p_r ·_p^⊺  dp  dr    - ∫_0^t ∫_0^r c^(2)_r, p_r ·_p^⊺· K^(1)_p, 0  dp  dr    - ∫_0^s ∫_0^r c^(2)_r, p K^(1)_0, p·_p ·_r^⊺  dp  dr . With a similar argument, we can derive (<ref>) for l ∈ [L-2], and moreover, c^(L-1)_t, s =   1 + ∫_0^t ∫_0^s _r^⊺· K^(L-1)_r, p·_p^⊺ dp  dr   - ∫_0^t ∫_0^r c^(L-1)_p, 0_r^⊺· K^(L-2)_r, p·_p^⊺ dp  dr   - ∫_0^s ∫_0^r c^(L-1)_p, 0_r^⊺· K^(L-2)_r, p·_p^⊺ dp  dr . § SUPPLEMENTARY MATERIALS FOR SECTION <REF> §.§ Proof of Lemma <ref> For any function f ∈L, we let (^(l))_l ∈ [L] be the L-level NHL that it belongs to. We may define μ^(L)1/2δ_f + 1/2δ_-f and let ^(L+1) = _μ^(L). As f ∈^(L) , we know that μ^(L)∈(^(L)) and satisfies ∀ p ∈ [2, ∞], μ^(L)_^(L), p = f _^(L), which implies that L+1p(^(L+1)) ≤f_^(L)·Lp(^(L)). Moreover, if we define ξ: ^(L)→ by ξ = 2 1_f - 2 1_-f, it holds for all ∈ that ∫ξ(h) h()μ^(L)(dh) = f() - -f() = f() , As ξ_L^2(^(L), μ^(L)) = 2, we know from Lemma <ref> that f _^(L+1)≤ 2. Therefore, f also belongs to the (L+1)-level NHL, (^(l))_l ∈ [L+1], with L+1(f) ≤ 2 L(f). This completes the proof. §.§ Proof of Lemma <ref> For each l ∈ [L], we will construct ^(l) through μ^(l) and show that they form a well-defined NHL that coincides with the set of all polynomials with degree 2^l-1. First, we define three sets of polynomial functions on . For each l ∈, we define p_1, l, p_2, l and p_3, l by p_1, l(x) = x^l, p_2, l(x) = x^l + x^l-1 and p_3, l(x) = x^l - x^l-1. Then, we define μ^(l)_+ 1/3 · 2^l-1+1 ( δ_p_1, 0 + ∑_α = 1^3 ∑_l'=1^2^l-1δ_p_α, l ) , and ^(l)_μ^(l)_+. Notice that each μ^(l)_+ is supported on the finite set { p_1, 0}∪{ p_α, l'}_α∈ [3], l' ∈ [2^l-1]. When l=1, we see that p_1, 0, p_1, 1, p_2, 1 and p_3, 1 are all affine functions, and hence μ^(1)_+ ∈(^(1) + ), where “^(l) +” denotes the sum of ^(l) and the space of constant functions on as vector subspaces of . 
In addition, for l ≥ 1, it holds that ∀ l' ∈ [2^l], x^l' = (x^l'/2)^2 , if l' is even, 1/4(x^(l'+1)/2 + x^(l'-1)/2)^2 - 1/4(x^(l'+1)/2 - x^(l'-1)/2)^2 , if l' is odd, which means that we can write p_1, l'(x) = σ(p_1, l'/2(x)) , if l' is even, 1/4p_2, (l'+1)/2(x) - 1/4p_3, (l'+1)/2(x) , if l' is odd, p_2, l'(x) = σ(p_1, l'/2(x)) + 1/4p_2, l'/2(x) - 1/4p_3, l'/2(x) , if l' is even, σ(p_1, (l'-1)/2(x)) + 1/4p_2, (l'+1)/2(x) - 1/4p_3, (l'+1)/2(x) , if l' is odd, p_3, l'(x) = σ(p_1, l'/2(x)) - 1/4p_2, l'/2(x) + 1/4p_3, l'/2(x) , if l' is even, - σ(p_1, (l'-1)/2(x)) + 1/4p_2, (l'+1)/2(x) - 1/4p_3, (l'+1)/2(x) , if l' is odd, where we take advantage of σ being the quadratic function. Using Lemma <ref>, this proves that ∀α∈ [3], ∀ l' ∈ [2^l], p_α, l'_^(l) < ∞. Together with the trivial observation that p_1, 0(x) = 1 = p_1, 0(x), we then conclude that μ^(l)_+ ∈(^(l) + ), and hence (^(l))_l ∈ [L] forms a well-defined NHL. Moreover, for l ≥ 2, Lemma <ref> further implies that ^(l) contains exactly those functions that can be written as a linear combination of {p_1, 0(·)}∪{p_α, l'(·)}_α∈ [3], l' ∈ [2^l-2]. We see from (<ref>) that they include all polynomials of degree at most 2^l-1, and obviously they do not contain polynomials with a higher degree or non-polynomials. Thus, ^(l) coincides with the set of all polynomials with degree at most 2^l-1. In fact, if (^(l)')_l ∈ [L] is any NHL under the quadratic activation function, then ^(L)' cannot contain either polynomials with a degree higher than 2^L-1 or non-polynomials. In other words, ^(L) as defined above is the maximal one. Hence, regardless of the choice of p, L_p = ^(L) is the set of all polynomials with degree at most 2^L-1.
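As a quick numerical companion to this characterization (our own sketch, not from the source; the width and the random Gaussian weights are assumptions), one can check on scalar inputs that a randomly weighted three-layer network with quadratic activation is fit exactly by a polynomial of degree 4 = 2^(3-1), while degree 3 is insufficient.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 16
w1, b1 = rng.normal(size=m), rng.normal(size=m)        # first layer (scalar input)
W2, b2 = rng.normal(size=(m, m)), rng.normal(size=m)   # second layer
a = rng.normal(size=m)                                 # output layer

def f(xs):
    h1 = (np.outer(xs, w1) + b1) ** 2          # degree-2 polynomials of x
    h2 = (h1 @ W2.T / m + b2) ** 2             # degree-4 polynomials of x
    return h2 @ a / m

xs = np.linspace(-2.0, 2.0, 200)
ys = f(xs)
for deg in (3, 4):
    resid = np.max(np.abs(np.polyval(np.polyfit(xs, ys, deg), xs) - ys))
    # degree 3 leaves a clear gap; degree 4 fits to near machine precision
    print(deg, resid)
```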
http://arxiv.org/abs/2307.00184v1
20230701005851
Personality Traits in Large Language Models
[ "Mustafa Safdari", "Greg Serapio-García", "Clément Crepy", "Stephen Fitz", "Peter Romero", "Luning Sun", "Marwa Abdulhai", "Aleksandra Faust", "Maja Matarić" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.CY", "cs.HC", "68T35", "I.2.7" ]
1]Mustafa Safdarimsafdari@google.com Authors contributed equally. 1,2,3]Gregory Serapio-Garcíags639@cam.ac.uk Authors contributed equally. 4]Clément Crepyccrepy@google.com 5]Stephen Fitzstephenf@keio.jp 3,5]Peter Romerorp@keio.jp 3]Luning Sunls523@cam.ac.uk 6]Marwa Abdulhaimarwa_abdulhai@berkeley.edu 1]Aleksandra Faustfaust@google.com Authors contributed equally. 1]Maja Matarićmajamataric@google.com Authors contributed equally. [1]Google DeepMind [2]Department of Psychology, University of Cambridge [3]The Psychometrics Ctr., Cambridge Judge Business School, University of Cambridge [4]Google Research [5]Keio University [6]University of California, Berkeley The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant text. As LLMs increasingly power conversational agents, the synthesized personality embedded in these models by virtue of their training on large amounts of human-generated data draws attention. Since personality is an important factor determining the effectiveness of communication, we present a comprehensive method for administering validated psychometric tests and quantifying, analyzing, and shaping personality traits exhibited in text generated from widely-used LLMs. We find that: 1) personality simulated in the outputs of some LLMs (under specific prompting configurations) is reliable and valid; 2) evidence of reliability and validity of LLM-simulated personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific personality profiles. We also discuss potential applications and ethical implications of our measurement and shaping framework, especially regarding responsible use of LLMs. Personality Traits in Large Language Models [ August 1, 2023 =========================================== § INTRODUCTION Large language models (LLMs) <cit.>, large-capacity machine learned models that generate text in natural language, recently triggered major breakthroughs in natural language processing (NLP) and conversational agents. LLMs are beginning to meet most of the key requirements for human-like conversation, contextual understanding, coherent and relevant responses, adaptability and learning <cit.>, question answering, dialog, and text generation. Much of this capability is a result of LLMs learning to emulate human language from large datasets of text from the Web <cit.>, examples “in context” <cit.>, and other sources of supervision, such as instruction datasets <cit.> and preference fine-tuning <cit.>. The vast amounts of human-generated data LLMs are trained on enables them to mimic human characteristics in their outputs and enact convincing personas—in other words, exhibit a form of synthetic personality. Personality is the characteristic set of an individual's patterns of thought, set of traits, and behaviors <cit.>. It is formed from biological and environmental factors and experiences, and influences basic social interactions and preferences <cit.>. Personality manifests in language through various linguistic features, patterns, vocabulary, and expressions <cit.>. Some observed LLM personas have displayed undesirable behavior <cit.>, raising serious safety and fairness concerns in recent computing, computational social science, and psychology research <cit.>. 
Recent work has tried to identify unintended consequences of the improved abilities of LLMs <cit.> including behaviors such as producing deceptive and manipulative language <cit.>, exhibiting gender, race or religious bias in behavioral experiments <cit.>, and showing a tendency to produce violent language, among many others <cit.>. LLMs can be inconsistent in dialogue <cit.>, explanation generation <cit.> and factual knowledge extraction <cit.>. As LLMs become the dominant human computer interaction (HCI) interface, it is important to understand the personality trait-related characteristics of the language generated by these models—and how LLM-synthesized personality profiles may be engineered for safety, appropriateness, and effectiveness. In prior attempts to set up LLM agent personas using zero- to few-shot prompting <cit.>, the resulting personality manifested in an LLM's language output has not been analyzed with the same rigor and standards as human personality evaluations using established metrics and methodologies from psychometrics. The field has explored efforts, such as few-shot prompting <cit.> to mitigate undesirable and extreme personality exhibited in LLM outputs. However, thus far no work has addressed how to rigorously and systematically measure personality of LLMs in light of their highly variable outputs and hypersensitivity to prompting. An LLM may display an agreeable personality profile by answering a personality questionnaire, but the answers it generates may not necessarily reflect its tendency to produce agreeable output for other downstream tasks. When deployed as a conversational chatbot in a customer service setting, for instance, the same LLM could also aggressively berate customers. The question of how to scientifically measure manifestations of personality in LLMs addresses calls from responsible AI researchers <cit.> to scientifically assess construct validity when studying social-psychological phenomena in AI systems. Construct validity, a central criterion of scientific research involving measurement <cit.>, refers to the ability of a measure to reliably and accurately reflect the latent phenomenon it is aiming to quantify <cit.>. Administering repeatedly a battery of survey measures to LLMs deterministically results in data that, while reproducible, does not have any explicit meaning beyond their implicit operationalization within the surveys themselves. On the other hand, administering the same measures to LLMs which provide non-deterministic responses to the same prompt, leads to random variance across survey administration sessions that cannot be linked. For this reason, recent attempts to measure psychological constructs in LLMs, such as personality and human values <cit.>, have not yet established the construct validity of such measurements. Our work aims to answer: 1) Are validated psychometric methods for characterizing human personality applicable to LLMs? 2) After applying validated psychometrics, does LLM-generated language exhibit personality traits in valid, reliable and meaningful ways similar to human-generated language? and 3) If LLMs can meaningfully simulate personality, can LLM-synthesized personality profiles be shaped and controlled? To address those questions, we present principled, validated methods from psychometrics to characterize and shape personality synthesized in LLMs. Our work makes three key contributions. 
First, it develops a methodology that establishes the construct validity of characterizing personality in LLM-generated text using established psychometric tests. Second, we propose a novel method of simulating population variance in LLM responses through controlled prompting, so that statistical relationships between personality and its external correlates can be assessed as they are in human social science data. Lastly, we contribute an LLM-independent personality shaping mechanism that changes LLM-observed levels of personality traits in a controlled way. We evaluate the methodology on LLMs of different sizes and training methodologies in two natural interaction contexts: multiple-choice question answering (MCQA) and long generated text. We find that: 1) personality simulated in the outputs of some LLMs (under specific prompting configurations) is reliable and valid; 2) evidence of reliability and validity of LLM-simulated personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific personality profiles. The rest of the paper is structured as follows: Section <ref> places our work in the context of recent literature. Section <ref> provides necessary background in psychometrics and LLMs. Section <ref> describes the methodology and prompt structure for evaluating and shaping personality. Section <ref> outlines the findings, while Section <ref> discusses the implications, limitations, future work, and ethical considerations. Finally, we conclude in Section <ref>.
§ RELATED WORK
Several recent attempts to probe personality and psychopathological traits in LLMs suggest that some models exhibit dark personality patterns <cit.>, or demonstrate how to administer personality inventories to LLMs <cit.>. While these works outline the utility and importance of measuring social phenomena in LLMs <cit.>, their rigor has fallen short of what is standard in the quantitative social sciences when evaluating results on human responses. For example, they neither test whether the surveys are valid for LLMs <cit.> nor evaluate the psychometric properties of the assessments <cit.>. In contrast, the presented work establishes psychometrics-grounded construct validity when administering personality inventories to LLMs, thus creating a scientific foundation for quantifying personality in LLMs. To claim that simulated psychological test scores are meaningful in comparing LLM and human behavior, psychometrics requires establishing the construct validity of these simulated tests in terms of their 1) structural validity, 2) convergent and discriminant validity, and 3) criterion validity. Our work addresses these rigorous requirements, which are lacking in previous efforts. Recent works that probe human traits in LLMs with psychometric tests utilize test administration/simulation in ways that are unconventional in psychometrics. We focus on two common elements. First, researchers collected LLM responses in the form of generated completions, often in dialogue mode. For instance, <cit.> administered psychological emotion measures to LLMs in the form of a research interview transcript, where a fictitious researcher posed measure items to a fictitious participant, who was instructed to respond to these items on a numeric scale. In psychometrics, questionnaire-based methods of assessment are distinct from interview-based methods.
Human answers to both questionnaires and structured interviews measuring the same underlying construct do not necessarily converge (e.g., in the case of measuring personality disorders <cit.>). Indeed, administering questionnaires in this way to LLMs creates an arbitrary viewpoint from which to elicit human traits, and is likely biased by the ordering of the questionnaire itself <cit.> and by prompting the LLM to respond in an interview setting (where it may respond differently knowing that an interviewer is observing). Each LLM response to a given questionnaire item was not an independent event, but instead conditioned on all previous responses shown in the transcript. Second, the LLMs in these studies were not used deterministically. This not only hampers reproducibility, but also has implications for reliability. Computing reliability metrics for questionnaires scored in this unconventional way is precarious because such reliability metrics rely on item-level variance. If this item-level variance is contaminated by variation introduced by the model parameters in a different way for each item, it is difficult to compute valid indices of reliability. We overcome these challenges by proposing a prompt and persona sampling methodology that allows variance to be linked across administrations of different measures. The only published exploration of personality and psychodemographics in LLMs <cit.> did not find a consistent pattern in HEXACO Personality Inventory <cit.> and human value survey responses. Most importantly, it did not sufficiently evaluate the validity of its purported trait measurements. Our work, anchored in the first truly comprehensive construct validation and controlled population simulation of the Big Five model of personality <cit.> in LLMs, finds evidence for consistent personality profiles in some, but not all, LLMs. Similar to LLM responses to emotion questionnaires, which show a positive correlation between model size and alignment with human data <cit.>, we find that larger LLMs tend to self-report personality in more human-consistent ways. PsyBORGS <cit.> administers a series of validated survey instruments of race-related attitudes and social bias to LLMs using psychometrics-informed prompt engineering. Our work utilizes the PsyBORGS framework. Prior to LLMs, simple heuristics, such as the user's name, shaped and conveyed agent personality in dialog <cit.>. When asked to evaluate the “humanness” of a piece of text, people look for traits that reflect aspects of emotion and attitude <cit.> and judge the naturalness of a chatbot along four of the five dimensions of the Big Five personality taxonomy: conscientiousness, originality (i.e., openness), manner (i.e., agreeableness), and thoroughness <cit.>. In this work we are interested in evaluating personality traits that emerge from LLMs without explicit design.
§ BACKGROUND
This section provides necessary background on personality science and large language models. Section <ref> sets forth the basics of personality psychology. Section <ref> discusses how to characterize and evaluate personality with psychometrics, and how to establish that psychometric evaluations are valid. Further, Section <ref> gives the basics of LLMs.
§.§ Personality Psychology
Personality psychology, the scientific study of human and non-human individuality, is concerned with what personality is and what it does.
Personality psychology considers personality as enduring characteristics, traits, and patterns that shape thoughts, feelings, and behaviors across a diverse array of situations; e.g., social, spatial, and temporal contexts <cit.>. Decades of personality research synthesizing evidence from molecular genetics <cit.>, evolutionary biology <cit.>, neuroscience <cit.>, linguistics <cit.>, and cross-cultural psychology <cit.> have reduced such diverse characteristic patterns to a theorized handful of higher-order factors that define personality <cit.>. The Big Five model <cit.>, the most commonly cited research taxonomy of personality, identifies five personality trait dimensions (i.e., domains) and provides methodology to assess these dimensions in humans. The five dimensions are extraversion (EXT), agreeableness (AGR), conscientiousness (CON), neuroticism (NEU), and openness to experience (OPE). Each domain is further composed of various lower-order facets nested underneath. §.§ Psychometrics Psychometrics, a quantitative subfield of psychology and education science, encompasses the theory and technique of measuring unobservable, latent, abstract concepts called constructs, like personality, intelligence, or moral ideology. Psychometrics is commonly used in the development and validation of standardized educational tests (e.g., the SAT, LSAT, GRE) <cit.>, medical and psychological clinical assessments <cit.>, and large-scale public opinion polls <cit.>. Psychometric tests (e.g., survey instruments, measures, multi-item scales) are tools for quantifying latent psychological constructs like personality. Psychometric tests enable statistical modeling of the true levels of unobservable target constructs by relying on multiple indirect, yet observable, measurements across a sample of individuals drawn from a wider population. We refer to items as the individual elements (i.e., descriptive statements, sometimes questions) to be rated on a standardized rating scale within a psychometric test. A rating scale, is a standardized set of response choices that allows researchers to quantify subjective phenomena; a Likert-type scale is the most common rating scale that has respondents specify their level of agreement on a symmetric agree-disagree scale <cit.>. We refer to a subscale as a collection of items, usually resulting from a factor analysis, aimed at measuring a single psychological construct. Measures are themed collections of subscales. For example, the Big Five Inventory (BFI) <cit.> is a popular measure of personality; it comprises five multi-item subscales targeting each Big Five dimension. BFI Extraversion, for instance, is a scale within the BFI specifically targeting the dimension of extraversion. An example item under BFI Extraversion would read, “[I see myself as someone who] is talkative." Participants rate their agreement with this item using the following 5-point Likert-type rating scale: 1 = disagree strongly; 2 = disagree a little; 3 = neither agree nor disagree; 4 = agree a little; 5 = agree strongly. §.§.§ Construct Validity: Are Measured Phenomena Valid? Since psychometric tests measure physically unobservable constructs, such as personality traits, it is imperative to establish that such tests measure what they claim to measure. This process is called establishing a test's construct validity. Construct validity is a comprehensive judgement of how the scores and the theoretical rationale of a test reasonably reflect the underlying construct the test intends to measure <cit.>. 
Recently, construct validity has become a crucial focus of AI responsibility and governance <cit.>: operationalizing social phenomena in algorithmic systems in a principled way (e.g., through construct validation) is a core part of responsible AI. Bringing empirical rigor to the measurement of social constructs helps stakeholders make more informed judgments of characteristics that may be fair or harmful in AI systems. For instance, if low agreeableness is harmful in AI systems, we need a principled way to quantify and validate it. Validated scientific frameworks for establishing construct validity <cit.> for a new psychometric test <cit.> use the following overarching standards: * Substantive Validity: What exactly are we measuring? What are the theoretical bases of what we are measuring? * Structural Validity: Are measurements from the test reliable? Do items within a test correlate with each other in ways we expect? In psychometrics and the quantitative social sciences, the structural validity of a test can be established in terms of internal consistency and unidimensionality. * Internal consistency reliability: Is the test reliable across multiple measurements (i.e., its items)? In other words, do responses to the test's items form consistent patterns? Are test items correlated with each other? * Unidimensionality: Do the test's items reflect the variance of one underlying factor or construct? * External Validity: Are the test scores practically meaningful, outside (external to) the test context itself? Psychometricians and quantitative social scientists commonly operationalize external validity into three subtypes of validity <cit.>: * Convergent Validity: Does the test correlate with purported indicators (i.e., convergent tests) of the same or similar psychological construct? These correlations are called convergent correlations. * Discriminant Validity: Relative to convergent correlations, are test scores uncorrelated with scores on theoretically unrelated tests? These correlations are called discriminant correlations. * Criterion Validity: Does the test correlate with theoretically-related, non-tested phenomena or outcomes? There is extant work on establishing the substantive validity of personality as a theoretical construct <cit.>, a powerful predictor of other important human traits and life outcomes <cit.> and its manifestation in human language <cit.>, which forms the basis of LLMs, so it needs not be reestablished in the context of this work. Structural Validity The hallmark characteristic of a good psychometric test, and of any empirical test, is its ability to “measure one thing (i.e., the target construct)—and only this thing—as precisely as possible" <cit.>. The internal consistency of a scale is necessary but not sufficient evidence of its unidimensionality <cit.>; both internal consistency and unidimensionality are needed to demonstrate overall reliability. For example, a scale can possess strong internal consistency, but poor unidimensionality, by containing highly correlated items that actually measure two separate constructs. Internal consistency: Two metrics of a psychometric test's internal consistency in the social sciences are Cronbach's Alpha (α) <cit.> and Guttman's Lambda 6 (λ_6) <cit.>. α, the most widely-known measure of internal consistency, captures how responses to each item of a scale correlate with the total score of that scale. λ_6 evaluates the variance of each item that can be captured by a multiple regression of all other items. 
Both α and λ_6 can be biased by the number of items on a test <cit.>; however, λ_6 serves as a complement to α because it is not affected by differences in item variances. Cronbach's α is computed as follows:

\[ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{y_i}}{\sigma^2_x}\right) \]

where k is the number of items on the test, σ^2_{y_i} is the variance associated with each item i, and σ^2_x is the overall variance of total scores. Guttman's λ_6 is calculated as:

\[ \lambda_6 = 1 - \frac{\sum_{i=1}^{k} e_i^2}{V_x} \]

where k is the number of items on the test, e_i is the error term for item i, and V_x is the variance of the total test score. Unidimensionality: To test more robustly for unidimensionality (i.e., how well a test measures one underlying factor or construct) in a way that is unaffected by the number of items, psychometricians compute McDonald's Omega (ω) <cit.>. This metric is generally considered a less biased test of reliability <cit.>. McDonald's ω uses factor analysis to determine if items statistically form a single factor or actually measure separate factors. It is calculated as:

\[ \omega_h = \frac{\frac{1}{k}\sum_{i=1}^{k} \frac{t_i^2}{\sigma^2_i}}{\frac{1}{k-1}\sum_{i=1}^{k} \frac{t_i^2}{\sigma^2_i} - \frac{1}{k}\,\frac{1}{1-r_{tt}^2}} \]

where ω_h is McDonald's hierarchical omega, k is the number of items on the test, t_i is the standardized item score for item i, σ^2_i is the variance of the standardized item score for item i, and r_tt is the correlation between the total test score and the standardized total test score.
Establishing External Validity
Convergent and Discriminant Validity: The convergent and discriminant validity of a test are classically evaluated in psychometrics using <cit.>'s framework. In this framework, a test's convergent validity is established by “sufficiently large" correlations with separate tests meant to measure the same target construct. For example, to validate a new test measuring depression, one could calculate the test's convergent correlations with the Beck Depression Inventory (BDI) <cit.>, a widely used measure of depression. To evaluate the discriminant validity of a test, psychometricians commonly gauge the extent to which the test's convergent correlations are stronger than its discriminant correlations, i.e., its correlations with tests of other constructs. As a concrete example, a new test of depression should correlate more strongly with the BDI than with, say, a test measuring English proficiency. Criterion Validity: A common way to assess the criterion validity of a new psychometric test is to check its correlations with theoretically related external (non-test) criteria (hence the name, criterion validity) <cit.>. For example, to validate a new psychometric test of depression, one could test if it is substantially related to a known external criterion, like negative affect.
§.§ Large Language Models
Large language models (LLMs) <cit.> are massive neural networks that take a text input (prompt) and generate human-like text in response <cit.>. They are trained with deep learning techniques <cit.> and massive datasets of text (such as books, articles, and websites) and code <cit.>, allowing them to learn the statistical relationships between words and phrases <cit.>, and consequently the patterns, structures, and semantics of language <cit.>. There are three main techniques for changing or controlling an LLM's behavior and output for a given input. These techniques can directly affect the model's weight parameters, as in pretraining (i.e.
training the LLM on a large dataset of general knowledge <cit.>), fine-tuning (i.e., further training a pretrained LLM on a smaller dataset specific to a particular task or domain <cit.>), or indirectly by influencing the activation of certain neurons or the flow of information through the model's inference process, as in prompting. The most significant aspect of using prompts to control LLM behavior is carefully designing, or engineering, prompts that elicit the desired outputs from the LLM. Several types of prompt engineering techniques are commonly used with LLMs. In few-shot prompting <cit.>, a limited amount of example data is provided to the model in the prompt to guide it to perform a task. By leveraging this small set of examples, the LLM can generalize and produce responses beyond the provided instances. As such, few-shot prompting relies on the ability to bias an LLM's responses based on the input prompt. But because it introduces this bias, this approach is not useful in cases where we want to probe the default bias, behavior, or tendency of an LLM to produce certain outputs (e.g., psychometric survey responses in our case). Zero-shot prompting <cit.>, on the other hand, involves instructing the model to generate responses for tasks it has not been specifically trained on, without providing any exemplars, relying instead on its pre-existing knowledge and language understanding acquired during pre-training. As such, it provides insight into what is encoded in the LLM's weight parameters, for example which tokens are more strongly associated with one another. For instance, if asked to complete an input prompt: “She went to see an expert about her stroke, who", an LLM trained on medical domain data is likely to respond “advised her to get an ECG test," whereas a sports-centric LLM might complete it as “coached her the best techniques from top golf pros." Several recent works in the field of Responsible AI have used this approach to uncover latent language biases in LLMs, identify potential for harm, and suggest mitigation techniques <cit.>. Similarly, in our work we use zero-shot prompt engineering to analyze how such latent linguistic features in LLMs give rise to a coherent personality when quantified psychometrically. We further analyze how these traits can be modified by engineering specific prompts that affect the latent linguistic features along the activation path in these LLMs. LLMs offer various modes of inference. In generative mode, the LLM is given a prompt or instruction, and it then generates text that is consistent with the prompt. This mode is useful for creative text generation tasks, such as story writing or poetry. In scoring mode, the LLM is given a pair (prompt, continuation) and it assigns a score or probability to it, indicating its quality, its relevance, or how likely it is to be generated from that model. Scoring mode <cit.> is often used for tasks like language evaluation or ranking text options.
§ METHODS
We describe the methodology for characterizing personality in LLMs in Section <ref> and present the method for shaping the LLM personality in Section <ref>.
§.§ LLM Personality Characterization
The methodology for characterizing LLM personality and quantifying its ability to coherently emulate human personality traits consists of two steps. First, we administer psychometric tests to LLMs and collect the scores. Second, those scores are used to establish construct validity. We begin by explaining the methodology for administering a single psychometric test to an LLM, in Section <ref>.
Since establishing construct validity requires two personality inventories to be administered, a primary one and a secondary one for convergent validation, we next discuss the selection of personality inventories, in Section <ref>. Further, to verify that our personality inventories are externally valid, we need sufficient variance in the prompts to connect changes in personality scores with theoretically related external outcomes; in Section <ref> we present the use of descriptive personas, item preambles, and item postambles as tools for simulating a population with controlled variance. Finally, construct validity is established when both structural and external validity hold. Section <ref> presents the methodology for establishing construct validity on a scored personality inventory. Figure <ref> provides a visual overview of the process.
§.§.§ Administering Psychometric Tests to LLMs
To administer a psychometric test to LLMs, we leverage their ability to complete a prompt. In our context, a given prompt instructs a given LLM to rate an item (i.e., a descriptive statement; e.g., “I am the life of the party.") from a psychometric test using a standardized response scale. For each item of a test, we construct all possible prompt framings for that item according to Section <ref>. To score a given item, we compare the output of the model with the possible standardized responses as defined in the psychometric test, simulating an LLM's “choice" of the most likely continuation <cit.>. This can be achieved with two distinct techniques or modes that are typically used with auto-regressive LLMs: generative and scoring mode. In both modes, an LLM's answers to the items of a psychometric test are independent events. As a result, response biases associated with the ordering of the test items are not captured by our approach. Finally, when all the variations of all the items in the survey are scored, the scores are statistically analyzed for construct validity.
§.§.§ Personality Inventories
To measure personality, we select two well-established psychometric measures to assess the Big Five taxonomy: one from the lexical tradition and one from the questionnaire tradition. Lexical tradition measures are grounded in the hypothesis that personality can be captured by the adjectives found in a given language <cit.>, while questionnaire tradition measures are developed with existing (and not necessarily lexical) taxonomies of personality in mind <cit.>. We hypothesize that lexical measures are better suited for LLMs because they are language-based and rely on adjectival descriptions. Questionnaire measures are less abstract and more contextualized, and they do not rely on trait adjectives. Our primary personality measure, the IPIP-NEO <cit.>, is a 300-item open-source representation of the commercialized Revised NEO Personality Inventory <cit.>. The IPIP-NEO, hailing from the questionnaire tradition <cit.>, involves rating descriptive statements (e.g., “[I] prefer variety to routine"; 60 per Big Five domain) on a 5-point Likert scale. The IPIP-NEO has been translated and validated in many languages, facilitating cross-cultural research across populations <cit.>, and has been used in longitudinal studies to assess personality change and stability over time <cit.>. We choose this measure for its excellent psychometric properties, shown in <cit.>. As a robustness check and to assess convergent validity, we also measure LLM-synthesized personality using the Big Five Inventory (BFI) <cit.>.
Developed in the lexical tradition, the BFI is a brief (44-item), adjectival statement-based measure of the broad Big Five traits. The BFI asks participants to rate short descriptive statements (e.g., “I see myself as someone who is talkative”) also on a 5-point Likert scale. The resulting summary scores indicating levels of Big Five trait domains range from 1.00 to 5.00. In the psychology literature, the BFI has demonstrated excellent structural validity (mean α reported across domain subscales = 0.83), convergent validity, and external validity.
§.§.§ Simulating Population Variance Through Prompting
The prompt for each item consists of four parts: an Item Preamble, a Persona Description, an Item, and an Item Postamble. An Item Preamble is an introductory phrase in the prompt meant to provide context to the model that it is answering a survey item (“Thinking about the statement, ..."). A Persona Description leverages one of 50 short demographic descriptions of human personas sampled from <cit.>, enabling the LLM to anchor its responses to a social context and creating necessary variation in responses across prompts. An Item is the descriptive statement (accompanied by a rating scale) taken from the original test (e.g., “I see myself as someone who is talkative"). An Item Postamble presents the possible standardized responses the model can choose from. It is empirically necessary to introduce controlled variation in LLM-simulated survey data to assess their reliability and statistical relationships with outcomes of interest; in short, controlled variation is required to statistically test for construct validity. When administering a survey, we systematically modify each component of a given prompt to generate what we call “simulated participants": unique instances of a prompt that are re-used across administered measures as a way to link response variation in one measure to response variation in another measure. Table <ref> shows examples of various prompts, with different segments of the prompt color-coded. This prompt design enables thousands of variations of input prompts that can be tested, with two major advantages. First, variance in psychometric test responses created by unique combinations of the Persona Descriptions, Item Preambles, and Item Postambles enables us to quantify the validity of personality in LLMs. Unlike single point estimates of personality, or even multiple estimates generated from random resampling of LLMs, diverse distributions of personality scores conditioned on reproducible personas make it possible to compute correlations with personality-related constructs. Second, variance in Item Preambles and Postambles facilitates a built-in robustness check: it is critical to know if personality scores remain stable even when the context or instructions surrounding original test items are modified. If personality scores are dependent on small perturbations in psychometric test instructions, then they are not empirically valid.
§.§.§ Construct Validity of LLM Personality Test Scores
The next step in the process must establish whether signals of personality derived from the IPIP-NEO are reliable and externally meaningful, i.e., that they possess construct validity. To do so, we use structured prompting to simulate a diverse population of LLM responses to a battery of psychometric tests of both personality (described above) and known correlates of personality.
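To make the prompt composition and scoring-mode administration described above concrete, the following is a minimal sketch of how a simulated participant's prompt could be assembled from a persona description, an item preamble, an item, and an item postamble, and how a standardized response could be selected in scoring mode. The example persona, the preamble/postamble wording, and the score_fn callable are illustrative assumptions, not the study's actual materials or implementation.

    from itertools import product
    from typing import Callable, Dict, List

    # Illustrative (not the study's actual) prompt components.
    PREAMBLES = ["Thinking about the statement, "]
    POSTAMBLES = ["please rate your agreement on a scale from 1 to 5 "
                  "(1 = disagree strongly, 5 = agree strongly)."]
    PERSONAS = ["My favorite hobby is gardening. I work as a nurse."]  # stand-in persona
    ITEMS = ["I see myself as someone who is talkative.",
             "I see myself as someone who tends to find fault with others."]
    RESPONSE_SCALE = ["1", "2", "3", "4", "5"]

    def build_prompt(persona: str, preamble: str, item: str, postamble: str) -> str:
        """Compose one survey prompt for one simulated participant and one item."""
        return f'{persona}\n{preamble}"{item}", {postamble}'

    def administer_item(prompt: str, score_fn: Callable[[str, str], float]) -> str:
        """Scoring mode: pick the standardized response the model scores highest."""
        scores = {resp: score_fn(prompt, resp) for resp in RESPONSE_SCALE}
        return max(scores, key=scores.get)

    def administer_survey(score_fn: Callable[[str, str], float]) -> List[Dict]:
        """Administer every item under every prompt variation (simulated participants)."""
        records = []
        for persona, pre, post in product(PERSONAS, PREAMBLES, POSTAMBLES):
            for item in ITEMS:
                prompt = build_prompt(persona, pre, item, post)
                records.append({"persona": persona, "item": item,
                                "response": administer_item(prompt, score_fn)})
        return records

    def dummy_score(prompt: str, continuation: str) -> float:
        # Stand-in for an LLM log-likelihood call; always prefers "3".
        return -abs(int(continuation) - 3)

    if __name__ == "__main__":
        print(administer_survey(dummy_score))

In an actual run, dummy_score would be replaced by a call that returns the model's log-likelihood of the candidate continuation given the prompt, and the persona, preamble, and postamble lists would be expanded to generate the full set of simulated participants.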
Next, informed by best practices in psychometric test construction and validation (see Section <ref>), we conduct a suite of statistical analyses to quantify the quality of the returned LLM data. We organize these analyses by subtypes of construct validity: structural and external validity, as described next. A personality construct is validly simulated in LLMs only when all the subtypes are valid.
Establishing Structural Validity
In LLM research, model responses to a series of seemingly related tasks intended to measure one latent construct may be anecdotally “consistent" <cit.> or inconsistent <cit.>. Descriptive consistency, however, is not sufficient evidence that the responses to those tasks are sufficiently reliable and unidimensional to reflect the latent constructs they target (see Section <ref>). Internal consistency: To establish internal consistency reliability, we compute Cronbach's Alpha (α; Eq. (<ref>)) and Guttman's Lambda 6 (λ_6; Eq. (<ref>)) on all IPIP-NEO and BFI subscales. Unidimensionality: To assess unidimensionality, we compute McDonald's Omega (ω; Eq. (<ref>)) on all IPIP-NEO and BFI subscales. We designate a given reliability metric (RM; i.e., α, λ_6, ω) < 0.50 as unacceptable, 0.50 ≤ RM < 0.60 as poor, 0.60 ≤ RM < 0.70 as questionable, 0.70 ≤ RM < 0.80 as acceptable, 0.80 ≤ RM < 0.90 as good, and RM ≥ 0.90 as excellent. Internal consistency is a necessary but not sufficient condition for demonstrating unidimensionality. Therefore, α, λ_6, and ω must all be at least 0.70 for a given subscale to be deemed acceptably reliable.
Establishing External Validity
We operationalize external validity in terms of convergent, discriminant, and criterion validity (see Section <ref>). We use Campbell's classic multitrait-multimethod matrix (MTMM) <cit.> approach to evaluate convergent and discriminant validity. Criterion validity is evaluated by correlating LLM-simulated personality test data with LLM responses to theoretically related psychometric tests. Convergent validity: We evaluate convergent validity, i.e., how much our primary test of personality (the IPIP-NEO) positively relates to another purported test of personality (the BFI), by computing bivariate Pearson correlations between IPIP-NEO and BFI scores for extraversion, agreeableness, conscientiousness, neuroticism, and openness and comparing them to ensure that the correlations between each pair of corresponding domain subscales are the strongest of their row and column, as outlined in <cit.>. For instance, IPIP-NEO Extraversion should be most correlated with BFI Extraversion, because these two subscales should convergently measure the same underlying construct. We operationalize convergent correlations between two psychometric tests (in this case, Big Five subscales from the IPIP-NEO and BFI) { (x_1,y_1),…,(x_n,y_n) }, reflecting n pairs of continuous score data, as Pearson product-moment correlations:

\[ r_{xy} = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^2}} \]

where n is the sample size, x_i and y_i are the i-th pair of data points from the sample, x̅ is the sample mean score for personality trait x of the IPIP-NEO, and y̅ is the sample mean score for the corresponding personality trait y of the BFI. In the resulting MTMM, we consider strong correlations (|r_{xy}| ≥ 0.60; <cit.>) between each IPIP-NEO domain subscale and its BFI domain scale counterpart (e.g., r(IPIP-NEO Extraversion, BFI Extraversion), r(IPIP-NEO Agreeableness, BFI Agreeableness), etc.) as evidence of convergent validity.
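To make these structural- and convergent-validity computations concrete, the following is a minimal sketch, using numpy, pandas, and scikit-learn, of how the internal-consistency metrics and a convergent correlation could be computed from a matrix of item responses (rows = simulated participants, columns = items). The toy data and variable names are illustrative, and the omega function uses omega total under a one-factor model as a simplified stand-in for the hierarchical ω defined above; this is not the study's actual analysis pipeline.

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import FactorAnalysis

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha for an (n_respondents x k_items) response matrix."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    def guttman_lambda6(items: pd.DataFrame) -> float:
        """Guttman's lambda-6 from squared multiple correlations (SMC) of each item."""
        corr_inv = np.linalg.pinv(items.corr().to_numpy())
        smc = 1 - 1 / np.diag(corr_inv)                     # SMC of each item on the rest
        error_var = items.var(axis=0, ddof=1).to_numpy() * (1 - smc)
        total_var = items.sum(axis=1).var(ddof=1)
        return 1 - error_var.sum() / total_var

    def omega_one_factor(items: pd.DataFrame) -> float:
        """Omega total under a one-factor model (simplified stand-in for omega_h)."""
        z = (items - items.mean()) / items.std(ddof=1)
        fa = FactorAnalysis(n_components=1, random_state=0).fit(z)
        loadings = np.abs(fa.components_[0])
        return loadings.sum() ** 2 / (loadings.sum() ** 2 + fa.noise_variance_.sum())

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        latent = rng.normal(size=(500, 1))                  # toy "extraversion" factor
        items = pd.DataFrame(latent + 0.8 * rng.normal(size=(500, 6)),
                             columns=[f"ext_item_{i}" for i in range(6)])
        print("alpha  :", round(cronbach_alpha(items), 3))
        print("lambda6:", round(guttman_lambda6(items), 3))
        print("omega  :", round(omega_one_factor(items), 3))

        # Convergent correlation between two (toy) domain scores, as in the MTMM.
        ipip_ext = items.mean(axis=1)
        bfi_ext = latent[:, 0] + 0.5 * rng.normal(size=500)
        print("r(IPIP EXT, BFI EXT):", round(np.corrcoef(ipip_ext, bfi_ext)[0, 1], 3))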
For these and the following results, we use <cit.>'s cut-offs for considering correlations as moderate, strong, and very strong (viz. 0.40 ≤ |r| < 0.60; 0.60 ≤ |r| < 0.80; |r| ≥ 0.80, respectively). In our tests for convergent validity, stronger convergent correlations between an LLM's IPIP-NEO and BFI scores indicate that we are capturing the same underlying signals of each personality domain even when we measure them using two separate instruments. Weak convergent correlations indicate that at least one of the personality domain subscales is not capturing these signals properly. Discriminant Validity: We assess discriminant validity, i.e., the extent to which our convergent subscales of personality (e.g., IPIP-NEO Extraversion and BFI Extraversion) remain relatively unrelated to discriminant subscales (e.g., IPIP-NEO Extraversion in relation to BFI Conscientiousness), in two ways. First, we compare each of the convergent correlations with all other correlations located in the same row or column of the MTMM. A given IPIP-NEO subscale demonstrates discriminant validity if its convergent correlation with its BFI counterpart is the highest of its row or column in the MTMM. Second, we inspect how personality domains are relatively uncorrelated with external validity measures. For instance, evidence of discriminant validity for IPIP-NEO Agreeableness would be that it correlates more strongly with BPAQ Aggression than with SSCS Creativity. Criterion Validity: We evaluate the criterion validity of our LLM personality test data in three steps. First, for each Big Five domain, we identify at least one theoretically related external (viz. non-personality) construct reported in human research. Next, according to this existing human research, we choose the appropriate psychometric tests to measure these related constructs and administer them to LLMs. Finally, we correlate LLM scores for each IPIP-NEO domain scale with these external measures, outlined below. For our purposes, criterion validity is established when the relative strength and direction of the correlations found in human data match those observed in our LLM data. We identify theoretically related external criteria for each Big Five trait as follows. Positive and negative emotions are external criteria known in psychology to be related to extraversion and neuroticism <cit.>. Across decades of human research, aggression is known to be negatively correlated with agreeableness <cit.>. Creativity is a known external correlate of openness <cit.>. In the meta-analytic literature, human values of achievement, conformity, and security, defined by <cit.>, are positively related to conscientiousness <cit.>. Accordingly, we choose the following criterion measures (summarized in Table <ref>) to assess these constructs. The Positive and Negative Affect Schedule (PANAS) <cit.> is used to validate our LLM measures of extraversion and neuroticism. The PANAS scales form the most widely-cited measure of positive and negative affect in the emotion research literature. The PANAS instructions are modifiable to distinguish between in-the-moment emotions and long-term, trait-level emotions. We used these instructions to capture trait levels of affect (“you generally feel this way, that is, how you feel on the average"). Twenty emotions (e.g., “excited," “ashamed") are rated on a 5-point Likert-type scale. The Buss-Perry Aggression Questionnaire (BPAQ) <cit.> is used to validate our LLM measure of agreeableness.
This measure contains subscales tapping into four domains of aggression: Physical Aggression, Verbal Aggression, Anger, Hostility. Short Scale of Creative Self (SSCS) <cit.> is used to validate our LLM measure of openness. This measure is organized into subscales for Creative Self-Efficacy (CSE) and Creative Personal Identity (CPI). Schwartz's Portrait Values Questionnaire (PVQ-RR) <cit.> is a widely used and translated measure of human values. We only use PVQ-RR subscales related to conscientiousness: Achievement (ACHV), Conformity (CONF), and Security (SCRT). Criterion Validity Metrics: We consider relatively stronger correlations between IPIP-NEO domain subscales with their respective related external measures, compared to correlations with unrelated measures, as evidence of criterion validity. For example, we would expect IPIP-NEO Extraversion to be more strongly correlated with a known external criterion, such as PANAS Positive Affect, relative to an unrelated external phenomenon, such social conformity values (e.g., measured by the PVQ-RR's Conformity subscale). Since external criteria are known to relate to personality at varying strength levels, we interpret criterion validity on a trait-by-trait basis and refrain from setting hard cutoffs. §.§ Shaping Personality in LLMs Having established a principled methodology for determining if an LLM personality is valid and reliable, we now investigate how that methodology can be applied to LLM prompting to shape that personality in desirable ways. §.§.§ Prompt Design and Rationale To shape personality in LLMs, we posit that salient descriptors of personality are encoded in language that could be used to prompt for specific facets of personality at different levels of intensity. Specifically, we rely on the lexical hypothesis in our prompt design, expecting that LLMs would be most responsive to prompts containing trait-relevant language, and contribute 104 adjectives that map to Big Five domains. As a result, in our prompt design, we adapt <cit.>'s list of 70 bipolar adjectives known to empirically correspond with the Big Five model of personality through human ratings and statistical factor analysis. In this list, for example, the adjectives “silent" and “talkative" correspond to the low and high ends of extraversion, respectively (see Table <ref>). We manually map these adjectives to each of the Big Five domains and 30 lower-order personality facets measured by the IPIP-NEO based on <cit.>. Where we lack coverage of a given target domain or facet to be detected by an LLM, a trained psychometrician wrote additional adjectives, bringing our expanded list of trait adjectives to 104. Examples of trait adjectives for agreeableness and extraversion are depicted in Table <ref>, while the full list is found in the Supplemental Table <ref>. Each pair of trait adjectives is associated with low and high levels of a specific component of the Big Five. To achieve more precise control of personality levels, we hypothesize that the linguistic qualifiers often used in Likert-type response scales <cit.> (e.g., “a bit," “very," “extremely") are useful for setting up a target level of each adjective. 
Therefore, we developed the resulting prompt design to facilitate granular shaping of any trait at nine levels, ranging from extremely low (level 1) to extremely high (level 9). For example, to target a moderately high level (i.e., level 7/9) of extraversion, we use the five high adjectives from our list targeting extraversion at the domain level; similarly, a prompt targeting slightly-below-average (i.e., level 4/9) extraversion uses the five negatively-keyed adjectives targeting extraversion.
§ RESULTS
This section presents the results of applying the described personality trait characterization and shaping.
§.§ Language Models
We selected decoder-only models from the PaLM family <cit.> for the study because of their established performance on generative tasks, especially in conversation contexts <cit.>. We varied the models in the family across three dimensions: model size, Q&A task fine-tuning, and training mode (see Table <ref>). First, we focused on three different model sizes: small (8B), medium (62B), and large (540B), because size is a key determinant of performance for this model family <cit.>. Second, because we are also interested in evaluating LLM personality in the Q&A context, we investigated model variants fine-tuned to follow instructions, as they have been shown to perform better than base models for prompting-based Q&A tasks <cit.>. We specifically selected variants fine-tuned with the popular FLAN dataset <cit.>. Third, we examined traditional and high-data training methods, the latter known as Chinchilla training <cit.>, which uses a fixed training budget to find the balance between model size and training dataset scale. Chinchilla training yields superior performance across a broad set of tasks <cit.>. All experiments used quantized models <cit.> to reduce the memory footprint and speed up inference time. We performed the majority of our experiments in scoring mode.
§.§ LLM Personality Characterization Results
We sought to verify that there exist LLM configurations whose personality survey responses are not distinguishable from those of human respondents, in order to establish the construct validity of administering personality surveys to LLMs. We first validated the statistical distribution of the test scores and then established construct validity. Table <ref> summarizes the configuration parameters, showing the model size and training method used in establishing construct validity.
§.§.§ Descriptive Statistics Across Models
We inspected the test scores on the IPIP-NEO and BFI across models to ensure that they reflected a normal distribution without many outliers. We examined how the distributions shifted as a function of model size (holding model training method constant) and model training method (holding model size constant). Figure <ref> summarizes the findings. By model configuration: At 62B parameters, the base model showed nearly uniform personality score distributions for both the IPIP-NEO and BFI, with 25th, 50th, and 75th percentile values identical within each BFI domain. Its instruction-tuned variants showed more normal distributions of personality, with lower kurtosis. By model size: IPIP-NEO (Figure <ref>) and BFI (Figure <ref>) scores were stable across model sizes. Median levels of socially desirable BFI subscales (EXT, AGR, CON, OPE) substantially increased as model size increased (see Supplemental Table <ref>). In contrast, median levels of BFI NEU decreased (from 2.75 to 2.38) as model size increased from 8B to 540B.
Distributions of IPIP-NEO scores were more stable across sizes of : only IPIP-NEO EXT and CON showed noticeable increases by model size. For instance, across sizes of , median levels IPIP-NEO OPE remained close to 3.30. Meanwhile, median BFI AGR scores monotonically increased from 3.33 to 3.67 and 3.89 for , , and , respectively (see Supplemental Table <ref>). Model Robustness to Prompt Perturbations: Big Five score distributions for all three 62B-parameter models remained stable across n = 5 variations of Item Preambles and n = 5 variations of Item Postambles, indicating that estimates of personality for models were robust against perturbations in prompt design (Figure <ref>). §.§.§ Construct Validation Results As mentioned in Section <ref>, we administer the IPIP-NEO <cit.> and BFI <cit.> inventories to measure personality in LLMs. Importantly, we administer a comprehensive battery of non-personality measures to critically assess the construct validity of the resulting scores. We use these additional measures to assess if the statistical properties of personality test responses (in relation to responses to non-personality tests) of LLMs align with those of humans. In summary, we find evidence for construct validity of simulated personality scores in medium (62B) and large (540B) variants of family models (see Table <ref>). We find that LLM-simulated psychometric data are most human-aligned for , the largest model we tested. The rest of the section details the results from the individual validity study. Structural Validation Results Following established frameworks from measurement science outlined in Sections <ref>, we evaluated the structural validity (i.e., reliability) of the tests—the extent to which they dependably measure single underlying factors—by quantifying internal consistency and unidimensionality for each administered subscale. Table <ref> summarizes the results. By model configurations: Among the models of the same size (, , and ) instruction fine-tuned variants' responses to personality tests were highly reliable; and demonstrated excellent internal consistency (α, λ_6) and unidimensionality (ω), with all three metrics in the mid to high 0.90s. In contrast, we found (i.e., not instruction fine-tuned) model responses to be highly unreliable (-0.55 ≤α≤ 0.67). Although personality test data appeared unidimensional for each Big Five trait, with close to perfect (> 0.99) values for McDonald's ω, its responses were highly inconsistent, with values for Cronbach's α ranging from poor (0.67) to unacceptable (-0.55). We note that computing reliability indices for 's IPIP-NEO CON and OPE data required removal of two items showing zero variance; for these two items, provided the same response across 1,250 simulated participant prompt sets. By model size: Across different model sizes of the same training configuration (i.e., , , and ), the reliability of simulated personality increased with model size. Across model sizes of , as shown in Table <ref>, internal consistency reliability (i.e., α) of IPIP-NEO scores improved from acceptable to excellent. At 8B parameters, internal consistency was acceptable for IPIP-NEO Openness (α = 0.75), good for IPIP-NEO Extraversion and Agreeableness (αs 0.83, .88, respectively), and excellent (α≥ 0.90) for IPIP-NEO Conscientiousness and Neuroticism. At 62B parameters, internal consistency was good for IPIP-NEO Openness (α = 0.84) and excellent for all other traits (α≥ 0.90). 
At 540B parameters, all IPIP-NEO domain scales showed excellent internal consistency (α≥ 0.90). Our other reliability indices, Guttman's λ_6 and McDonald's ω, improved within the same excellent range from 8B to 540B variants of . External Validation Results Convergent and Discriminant Validation Results. The external validity of personality in LLMs—in terms of convergent and discriminant validity—varies across two axes: model size and model training method. Figure <ref> illustrates convergent validity in terms of how IPIP-NEO and BFI scores convergently correlate across models. Table <ref> summarizes the average convergent and discriminant rs across models. The results allow us to draw two conclusions. First, indices for convergent and discriminant validity improve as model size increases. Second, convergent and discriminant validity of LLM-simulated personality test scores relates to model instruction fine-tuning. See Tables <ref> and <ref> for qualitative and quantitative summaries, respectively. Convergent validity by model size: Convergent correlations (i.e., those between each of 's Big Five domain scores on the IPIP-NEO and BFI) were inconsistent at 8B parameters (Figure <ref>). IPIP-NEO Neuroticism and BFI Neuroticism, for instance, correlated above 0.80 (constituting excellent evidence of convergent validity), while IPIP-NEO Openness and BFI Openness subscales correlated less than 0.40 (which constitutes questionably low convergence). In contrast, these convergent correlations grew stronger and more uniform in magnitude for . We found that convergent correlations between LLM-simulated IPIP-NEO and BFI scores (e.g., r(IPIP-NEO Extraversion, BFI Extraversion)) were strongest for . Discriminant validity by model size: Indices of discriminant validity similarly improved with model size. The absolute magnitude of all five convergent correlations between the IPIP-NEO and BFI for and were the strongest of their respective rows and columns of the MTMM outlined in Section <ref>). Comparatively, only three of 's convergent correlations were the strongest of their row and column of the MTMM, indicating mixed evidence of discriminant validity. As seen in the third column of Table <ref>, the average differences between convergent and respective discriminant correlations increased from 0.23 at 8B parameters to 0.51 at 540B parameters. Convergent validity by model configuration: Out of , , and (62B), scores on the IPIP-NEO and BFI were only strongly (convergently) correlated for instruction fine-tuned models, and (Figure <ref>). Of these three sets of simulated responses, 's IPIP-NEO scores presented the strongest evidence of convergent validity, with an average convergent correlation of 0.90 (Table <ref>). Discriminant validity by model configuration: Evidence for discriminant validity clearly favored instruction fine-tuned over (base) , when we held model size constant at 62B parameters. Again, all five of 's convergent correlations passed <cit.>'s standard of discriminant validity. In contrast, 's discriminant correlations (avg. r_disc = 0.29) outweighed their convergent counterparts in many cases (avg. r_conv = 0.05; Table <ref>)—indicating that, for this model, self-reported personality was not consistent across different modes of assessment. Criterion Validity Results. As another component of external validity, the criterion validity of LLM-simulated personality scores similarly varied across the model characteristics of size and instruction fine-tuning. 
IPIP-NEO scores simulated in larger, instruction fine-tuned models showed relatively stronger criterion validity. Figure <ref> summarizes the results by Big Five domain. Extraversion. Human extraversion is strongly positively correlated with positive affect and moderately negatively correlated with negative affect <cit.>. Simulated IPIP-NEO Extraversion scores for all models, except those for the base model, showed excellent evidence of criterion validity in their relation to PANAS Positive Affect and Negative Affect subscale scores (see Figure <ref>). This suggests that the external validity of extraversion in LLMs may only emerge due to instruction fine-tuning. LLM alignment with human research data, in terms of the strength and direction of correlations between self-reported personality and emotions, increased with model size. Agreeableness. In humans, agreeableness is strongly negatively related to aggression <cit.>. IPIP-NEO Agreeableness data for all 62B-parameter models and larger showed good-to-excellent evidence of external validity in their relation to our tested aggression subscales taken from the BPAQ: Physical Aggression (PHYS), Verbal Aggression (VRBL), Anger (ANGR), and Hostility (HSTL). As depicted in Figure <ref>, model size, rather than instruction fine-tuning, related more strongly to the external validity of agreeableness in LLMs. Conscientiousness. In humans, conscientiousness is meta-analytically related to the human values of achievement, conformity, and security <cit.>. In Figure <ref>, we see that, for all instruction fine-tuned variants, evidence of external validity for conscientiousness was stronger compared to that for the base model. The best performer, by a small margin, reached criterion correlations of 0.74, 0.73, and 0.59 for PVQ-RR ACHV, CONF, and SCRT values, respectively. Neuroticism. Human neuroticism is strongly positively correlated with negative affect and moderately negatively correlated with positive affect in human research <cit.>. IPIP-NEO Neuroticism data for all models, except those for the base model, showed excellent evidence of external validity in their relation to PANAS Positive Affect and Negative Affect subscale scores (see Figure <ref>). LLM alignment with human data, in terms of the strengths and directions of these criterion correlations, increased with model size. Openness. Openness to experience in humans is empirically linked to creativity across multiple studies <cit.>. Figure <ref> illustrates how evidence of criterion validity of openness is strongest for medium-sized, fine-tuned variants, with criterion correlations for SSCS CSE and CPI ranging from moderate (r = 0.59) to strong (r = 0.84). Notably, we observed negative correlations between openness and creativity for some variants, in contrast to those shown for the smallest model tested.
§.§ Results of Shaping Personality in LLMs
This section explores, through three evaluation studies, the extent to which personality in LLMs can be verifiably controlled and shaped.
§.§.§ Personality Shaping Evaluation Methodology
Shaping a Single LLM Personality Domain. In the first study, we tested if LLM-simulated Big Five personality domains (measured by the IPIP-NEO and LLM-generated text) can be independently shaped. The prompts were constructed as follows: first, we created sets of prompts for each Big Five trait designed to shape each trait in isolation (i.e., without prompting any other trait) at nine levels (as described in Section <ref>). This resulted in prompts reflecting 45 possible personality profiles.
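To illustrate how such trait-shaping instructions can be assembled from low/high adjective pairs and Likert-style qualifiers, the following is a minimal sketch. The adjective subset, the nine-level qualifier mapping, and the wrapper sentence are illustrative assumptions inferred from the qualifiers and levels named in the text; they are not the study's full 104-adjective list or its exact prompt template.

    # Illustrative low/high adjective pairs per Big Five domain (a small subset only).
    TRAIT_ADJECTIVES = {
        "extraversion": {"low": ["silent", "unenergetic", "timid"],
                         "high": ["talkative", "energetic", "bold"]},
        "agreeableness": {"low": ["cold", "unkind", "selfish"],
                          "high": ["warm", "kind", "cooperative"]},
    }

    # Assumed nine-level mapping built from Likert-style qualifiers
    # ("a bit", "very", "extremely") applied to the low/high adjectives.
    QUALIFIERS = {
        1: "extremely {low}", 2: "very {low}", 3: "{low}",
        4: "a bit {low}", 5: "neither {low} nor {high}",
        6: "a bit {high}", 7: "{high}", 8: "very {high}",
        9: "extremely {high}",
    }

    def shaping_instruction(trait: str, level: int) -> str:
        """Compose a first-person description targeting `trait` at `level` (1-9)."""
        adjs = TRAIT_ADJECTIVES[trait]
        template = QUALIFIERS[level]
        phrases = [template.format(low=lo, high=hi)
                   for lo, hi in zip(adjs["low"], adjs["high"])]
        desc = "I'm " + ", ".join(phrases) + "."
        return ('For the following task, respond in a way that matches this '
                f'description: "{desc}"')

    if __name__ == "__main__":
        # Level 7/9 extraversion uses the high adjectives without a qualifier;
        # level 2/9 agreeableness uses "very" with the negatively keyed adjectives.
        print(shaping_instruction("extraversion", 7))
        print(shaping_instruction("agreeableness", 2))

Crossing the five domains with the nine levels in this way yields the 45 single-trait personality profiles evaluated in the first shaping study.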
Next, we used the same 50 generic Persona Descriptions employed in Section <ref> to create additional versions of those personality profiles to more robustly evaluate how distributions (rather than point estimates) of LLM-simulated personality traits may shift in response to personality profile prompts. In our main construct validity study (described in Section <ref>), we showed that IPIP-NEO scores are robust across various item preambles and postambles, so we optimized the computational cost of this study by using only one default item preamble and postamble across prompt sets. In all, with 45 personality profiles, 50 generic PersonaChat descriptions, and no variation in item preambles and postambles, we generated 2,250 unique prompt sets that were used as instructions to a given LLM to administer the IPIP-NEO 2,250 times. See Table <ref> for a summary. As an additional measure of external validity, we tracked how shaping latent levels of personality in LLMs can directly affect downstream model behaviors in user-facing generative tasks. To do so, we instructed the model to write social media status updates based on the personas contained in those prompts. To assess the results of the study, we generated ridge plots of IPIP-NEO score distributions across prompted levels of personality. To quantitatively verify changes in personality test scores in response to our shaping efforts, we computed Spearman's rank correlation coefficient (ρ) between prompted levels (i.e., 1–9) and the resulting IPIP-NEO subscale scores of each Big Five trait. We used Spearman's ρ (cf. Pearson's r) because prompted personality levels constitute ordinal, rather than continuous, data. We compute Spearman's ρ as follows:

\[ \rho = r_{R(X),\,R(Y)} = \frac{\operatorname{cov}(R(X),\,R(Y))}{\sigma_{R(X)}\,\sigma_{R(Y)}} \]

where r_{R(X),R(Y)} represents Pearson's r applied to the ordinal (ranked) data R(X) and R(Y); cov(R(X), R(Y)) denotes the covariance of the ranked variables; and σ_{R(X)} and σ_{R(Y)} denote the standard deviations of the ranked variables.
Shaping Multiple LLM Personality Domains Concurrently. In the second study, we tested whether all LLM-simulated personality domains can be concurrently shaped to two levels, extremely low and extremely high, and whether the resulting scores for the targeted traits are correspondingly low and high, respectively. We used the same method and rationale described above to independently shape personality in LLMs, but with modified personality profile prompts that reflect simultaneous targeted changes in personality traits. To optimize the computational cost of this study, we generated 32 personality profiles, representing all possible configurations of extremely high or extremely low levels of the Big Five (i.e., 2^5). Combining these 32 personality profiles with the same 50 generic PersonaChat descriptions and the default item preamble and postamble used in the previous experiment, we generated 1,600 unique prompts and used them to instruct a given LLM to respond to the IPIP-NEO 1,600 times (see Table <ref>). We quantified the results similarly to the first study, by visually inspecting the differences in observed score distributions and quantifying the extent to which prompted target levels of personality aligned with observed levels, using Spearman's ρ.
Shaped LLM Personality Expression Evaluation Methodology. The third study served as an ultimate test of construct validity, evaluating the ability of survey-based signals of personality in LLMs to reflect levels of personality observed in LLM-generated text.
We adapted the structured prompts described in Section <ref> to instruct to write 100 social media status updates according to the descriptive profiles of 2,250 simulated participants. We then rated the personality of the status updates, generating an aggregate prediction for each simulated participant using the Apply Magic Sauce (AMS) API <cit.>, a psychodemographic prediction tool. AMS was trained using a volunteer dataset of 6 million social media users; its automatic ratings of user personality have been shown in research to be 1) more accurate than human observer ratings of personality <cit.>, and 2) a more naturalistic behavioral signal of personality that avoids potential biases of self-rated questionnaires <cit.>. Finally, we computed Pearson's correlations between our survey- and generated-text-based estimates of personality, taking advantage of the fact that these data are linked by the same 2,250 prompts. A moderate or stronger correlation between survey-based and language-based estimates of personality in LLMs (as demonstrated in human data reported by <cit.>) would demonstrate that our survey-based measure of personality can be used as a latent signal of personality that manifests in downstream LLM tasks, such as text generation. §.§.§ Results of Shaping a Single LLM Personality Domain This study tested if LLM-simulated Big Five personality traits can be independently shaped at nine levels. The study achieved a remarkably high level of granularity in independently shaping personality traits in LLMs. When building prompts containing only information for one Big Five domain at a time, with no information about any other domain, observed levels of the targeted domain change as intended while those of other traits remained relatively unchanged (see Figure <ref>). Specifically, as prompted levels of a targeted personality trait moved from extremely low (level 1/9) to extremely high (level 9/9), observed levels of that personality trait in LLM psychometric test scores monotonically increased. For example, when prompting for extremely low (level 1) extraversion, we observed a distribution of extremely low extraversion scores. When prompting for very low (level 2/9) extraversion, the distributions of extraversion scores shifted higher, and so on. Finally, prompting for extremely high (level 9/9) extraversion, we observed a distribution of extremely high extraversion scores. This validates our hypothesis about the effectiveness of using the linguistic qualifiers from Likert-type response scales to set up a target level of each trait, achieving granularity of up to nine levels. We also observed that the range of LLM test scores matches each prompt's intended range. With possible scores ranging from 1.00 to 5.00 for each trait, we observed median levels in the low 1.10s when prompting for extremely low levels of that trait. When prompting for extremely high levels of a trait domain, median observed levels reached 4.22–4.78. Notably, scores of unprompted traits remained relatively stable. As shown on the right side of Figure <ref>, the medians of observed openness scores remained steady near 3.00 when all other Big Five domains were shaped. Similar patterns of stability were observed for extraversion and agreeableness. Conscientiousness and neuroticism scores fluctuated the most, but the fluctuations did not reach the strength and direction of the score changes we observed in the ridge plots of targeted traits (as shown in plots on the diagonal, from top-left to bottom-right). 
We statistically verified the effectiveness of our shaping method by computing Spearman's rank correlation coefficients (ρ; see Eq. (<ref>)) between the targeted ordinal levels of personality and the continuous IPIP-NEO personality scores observed for each Big Five trait. The correlations are all very strong across the tested models (Table <ref>); the first column of Table <ref> depicts these correlations for one of the tested models. Finally, our method successfully shaped personality observed in LLM-generated text. The third column of Table <ref> depicts Spearman's ρ between prompted levels of personality and linguistic estimates of personality. §.§.§ Shaping Multiple LLM Personality Domains Concurrently This experiment tests if Big Five personality domains can be concurrently shaped at levels 1 (extremely low) and 9 (extremely high). We successfully shape personality domains even as other domains are shaped at the same time (see Figure <ref>). However, the ranges of the observed personality scores are more restricted for medium-sized and smaller models, indicating lower levels of control. For instance, for one medium-sized model, median scores on IPIP-NEO Agreeableness shift only from 2.88 to 3.52 when agreeableness is prompted to be "extremely low" (level 1/9) versus "extremely high" (level 9/9), respectively. In contrast, we achieve the levels of control observed in Section <ref> only with our largest model. §.§.§ Shaped LLM Personality Expression Results We find that psychometric survey signals of personality in LLMs robustly reflect personality in downstream LLM behavior, as expressed in 22,500 social media status updates written by Flan-PaLM 540B. Figure <ref> depicts the ability of LLM-simulated personality test scores to reflect levels of personality observed in LLM-synthesized social media status updates, expressed as convergent Pearson's correlations between Flan-PaLM 540B's questionnaire-based levels of personality and (AMS-derived) language-based levels of personality. This ability exceeds the established human associations between personality self-reports and personality derived from social media status updates reported by <cit.>. § DISCUSSION This section discusses how our methods and results relate to broader performance trends in LLMs, their limitations, and their wider implications. §.§ Effect of model training Instruction fine-tuning: Fine-tuning PaLM LLMs on multiple-task instruction-phrase datasets dramatically improves performance over the base, pretrained, non-fine-tuned PaLM model on natural language inference, reading comprehension, and closed-book QA tasks <cit.>. The inference and comprehension tasks are the most relevant in the context of our current work. Similarly, we observed the most dramatic improvements in PaLM's ability to synthesize reliable and externally valid personality profiles in its instruction fine-tuned variants (Section <ref>). In particular, the smallest instruction fine-tuned model (Flan-PaLM 8B) drastically outperforms the mid-size base model (PaLM 62B; Figure <ref>). Additionally, instruction fine-tuning on chain-of-thought (CoT) data enables the resulting model to perform reasoning in a zero-shot setting <cit.>, and the instruction fine-tuned variants used in this work are fine-tuned on CoT datasets. This ability is particularly important because we neither include exemplars in our prompts nor perform extensive prompt engineering, and we use diverse preambles and postambles in the prompts.
As such, the improved performance we observe for instruction fine-tuned models could be the result of this zero-shot reasoning ability. Across the reliability results reported in Section <ref>, internal consistency reliability (α and λ_6) improves after instruction fine-tuning. However, factor saturation (captured in McDonald's ω) does not improve; it is indistinguishably high for both base and instruction fine-tuned models of the same size (PaLM, Flan-PaLM, and Flan-PaLMChilla). How is it possible that the base model's (PaLM 62B) responses are unidimensional (i.e., their variance reflects one underlying factor in statistical analysis) but not internally consistent (i.e., generally coherent when measured multiple times)? We turn to a possible explanation from human psychometrics. Humans can generate unidimensional responses to questionnaires that are simultaneously inconsistent when the questionnaire items 1) have varying levels of difficulty or 2) are actually measuring different underlying psychological constructs. When an LLM responds to some items with all 5s or all 1s, those items may be too "easy" or too "difficult". As a result, they contribute unequally to the total test score, deflating metrics anchored on total score variance, like Cronbach's α. Meanwhile, McDonald's ω would remain high in those cases because it takes into account the difficulty of items when estimating a test's reliability. The second, related possibility, that the items actually measure different things (vs. one thing), may manifest in an LLM's ability to accurately attend to the intended meaning of certain items. For instance, an LLM could mistakenly associate the meaning of extraversion items with concepts meant to be distinct from extraversion (e.g., conscientiousness); perhaps the phrasing of an extraversion item matches the phrasing of a random string of text totally unrelated to being extraverted. In both cases, instruction fine-tuning may affect a model's ability to respond to human-optimized psychological tests in a manner that is internally consistent and unidimensional. Longer training with more tokens: PaLMChilla 62B was trained longer than PaLM 62B, with almost double the number of tokens but with only a fractional increase in training FLOP count; it performed slightly better on some zero-shot English NLP tasks, such as reasoning <cit.>. Our studies comparing Flan-PaLM 62B and Flan-PaLMChilla 62B did not find a discernible difference in their reliability and validity (as reported in Section <ref>). This could be because (as also shown in <cit.>) the zero-shot performance of the two models on such tasks is similar. Overall, our results show that there is a positive association between a model's performance on LLM benchmarking tasks and the reliability and validity of simulated personality traits in LLMs. §.§ Effect of model size PaLM's performance on reading comprehension and passage completion tasks is linked to model size <cit.>; PaLM's ability to understand broad context and carry out common-sense reasoning is stronger for larger models. We similarly see improvements in reliability (measured via Cronbach's α and Guttman's λ_6), convergent validity (measured by Pearson's r between IPIP-NEO and BFI domain scores), and criterion validity (measured by IPIP-NEO domain correlations with non-personality measures), summarized in Table <ref>.
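As a side note on the reliability metrics just mentioned: Cronbach's α can be computed directly from a matrix of item responses with the textbook variance-ratio formula. The sketch below is a generic illustration, not the psychometrics software used in this work, and the array layout is an assumption.

import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: (n_respondents, n_items) array of Likert responses (1-5)."""
    k = item_scores.shape[1]
    sum_item_var = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - sum_item_var / total_var)

# An item answered identically by every simulated respondent (e.g., all 5s)
# has zero variance, contributes nothing to the total-score variance, and
# therefore deflates alpha, consistent with the explanation above.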
<cit.> further reported that performance on tasks requiring sophisticated abstract reasoning, such as understanding complex metaphors, follows a discontinuous improvement curve; i.e., this capability of the model emerges only after a certain scale is reached. We observe a similar phenomenon in our construct validation experiments, where LLM-simulated extraversion, openness, and agreeableness are only externally valid (i.e., correlate with theoretically related psychological constructs) for 62B-parameter models or larger. Only when model size increases to 62B parameters do we see the theoretically expected strong negative relationship between LLM-reported agreeableness and aggression, which we do not observe in our smallest tested model (Figure <ref>). The external correlations of LLM-synthesized conscientiousness and neuroticism, however, do not show such a dramatic jump, and these personality traits demonstrate sufficient external validity even in smaller models. We hypothesize this could be due to the language content associated with the items measuring these dimensions. Extraversion, openness, and agreeableness might be characterized by (similar) language that is much more nuanced than the language used to define neuroticism or conscientiousness, which may be easier to define. Consequently, displaying external validity for extraversion, openness, and agreeableness requires a model to have the capacity (size) to understand that nuanced language. Overall, improvements in reliability, convergent validity, and criterion validity appear positively linked to model size and performance on LLM benchmarks, and performance on complex reasoning benchmarks in particular appears to track an LLM's ability to meaningfully simulate personality. §.§ Malleability of Personality Traits in LLMs Given a piece of text generated by an LLM prompted with a specific combination of personality traits, we can accurately predict the IPIP-NEO scores the model would produce with the same prompt setup. This indicates that the LLM-simulated IPIP-NEO test responses we generated accurately capture the latent signals of personality in LLMs that manifest in downstream behaviors, such as generating text for social media updates. This validates our initial hypothesis about the malleability of personality traits in LLMs. Figure <ref> shows some of the most frequent words in the text generated for the social media updates when the LLM was prompted to have the lowest level of neuroticism (i.e., the highest emotional stability). The words are mostly about positive emotions, such as "happy", "relaxing", "wonderful", "hope", and "enjoy". In contrast, Figure <ref> shows the most frequent words from the LLM prompted with the highest level of neuroticism (i.e., the lowest emotional stability). Those words are characteristic of elevated levels of neuroticism, such as "hate", "depressed", "annoying", "stressed", "nervous", and "sad"; they are not seen in the emotionally stable case. These examples are remarkably similar to the word-cloud distributions seen in human responses in <cit.>, reconfirming that there exists a reliable and valid methodology for shaping personality traits in LLM responses to be more human-like. §.§ Limitations and Future Work This section outlines the key limitations and possible extensions of the current work. Personality traits in other LLMs: One of the core contributions of this work is to understand how personality traits in generated language are affected by model size and training procedure.
We focus on the PaLM family of language models and the personality traits in their simulated survey responses. However, the described methodology for administering psychometric surveys does not constrain the use of a specific model family, and it is applicable to any other decoder-only architecture model, such as GPT. Limited psychometric test selection: Another core contribution of this work is a principled, statistically grounded way to establish the reliability and validity of personality psychometric tests in the LLM context. The work is validated on a specific and limited set of psychometric tools. However, the presented methodology does not constrain the use of specific psychometric tools; some may show better psychometric properties in the LLM space than others. While this study relies on the 300-item IPIP-NEO as the primary measure, the presented framework can be used on other personality measures of different lengths (e.g., the 120-item version of the IPIP-NEO <cit.>) and theoretical traditions (e.g., the HEXACO Personality Inventory, which uses a cross-cultural six-factor model of personality <cit.>). Multilingual and cultural personality considerations: This work contributes evidence that at least some LLMs exhibit personality traits consistent with human personalities. We only considered English and did not make cultural considerations beyond the applied psychometrics. While the LLMs we used performed well on NLP benchmark tasks in multiple languages, we cannot generalize the observed efficacy of our techniques to other languages. Most psychometric tests we used have also been extensively validated in cross-cultural research and have non-English versions that have gone through rigorous back-translation and validation (e.g., the IPIP-NEO has dozens of validated translations). Thus, a future direction of research could administer these same tests to LLMs in different languages. Similarly, while the Big Five model of personality has well-established cross-cultural generalizability <cit.>, some cultures have additional personality dimensions that do not exist in universal personality taxonomies <cit.>. These dimensions may be better represented in culture-specific (i.e., idiographic) approaches to measuring personality (at the cost of not being able to make direct comparisons of personality across cultures). Evaluation settings: Unlike surveys administered to humans, the presented methodology does not consider prior answers; all item selections are independent events. Advantages of this approach include reproducibility and the removal of ordering effects. On the other hand, social science surveys are designed with the assumption of being administered in order, meaning that our method is not rigorously identical to a human administration. Response evaluation methods: Our model responses are evaluated in scoring mode. Other scoring strategies are possible, such as generating a free-form response and then using a regular expression or a classifier model to map the response onto one of the multiple choices available for a survey question. It could be argued that free-form generation is the most common format for using LLMs in the dialog context. §.§ Broader Implications This work demonstrates that it is possible to configure an LLM such that its output to a psychometric personality test is indistinguishable from a human respondent's, and that it is possible to control the personality, and in turn the output, of such a model in a principled way.
Furthermore, this work provides a complete pipeline to a) reliably and validly probe personality traits that may be perceived by humans in LLM output; b) identify the positive and negative emotions and other psychological factors they may be correlated with; and c) provide mechanisms to increase or decrease levels of specific LLM-synthesized traits. These findings have implications for responsible AI, human alignment, transparency, explainability, bias mitigation, and user-facing application development. Human value alignment: Being able to probe personality traits in LLM outputs and shape them is particularly useful in the field of responsible AI. Controlling the levels of specific traits that lead to toxic or harmful language output (e.g., very low agreeableness, high neuroticism) can make interactions with LLMs safer and less toxic. At a higher level, the value judgements and moral foundations that are present in LLMs by virtue of pretraining and language features can be made to align better with human values by tuning personality traits, since personality is meta-analytically linked to human values <cit.>. Additionally, this same framework can be used to more rigorously quantify efforts towards LLM value alignment. Transparency, Explainability, and Bias Mitigation: While it is inevitable that some forms of controlling personality levels in LLM outputs will become commonplace in order to enhance user experience, it is crucial to provide clear explanations to users about how their interactions are influenced and how the personality customization process works. Users deserve a clear understanding of the underlying mechanisms and any potential limitations and biases associated with personalized LLMs. Developers must be vigilant in identifying and mitigating biases that could arise from the customization process; this work provides a toolset for doing so for LLM-synthesized personality traits. Safe LLM deployment: The methodology for establishing construct validity contributes a process that can be used as part of the evaluation of a newly developed LLM prior to its deployment. Such evaluation may produce user-facing chatbots with safer and more consistent personality profiles. Furthermore, the personality shaping methodology can be used for chatbot adversarial testing, to probe another LLM's responses in an adversarial situation, or even to train humans on how to handle adversarial situations. User-facing implications: Users could have customized interactions with LLMs tailored to specific personality traits to enhance their engagement and satisfaction. For instance, if a user prefers a more extraverted or agreeable LLM, they could customize the model's synthesized personality accordingly. LLMs with customized personality levels can enable applications where a chatbot's personality profile is adapted to the task. For example, engaging chatbots would be helpful as virtual companions in a range of settings, from games to education and training, while more empathetic virtual assistants are needed in customer service or counseling applications. Similarly, a more conscientious and organized LLM could provide task management or planning assistance. §.§ Ethical Considerations This work applies validated psychometrics to quantitatively characterize personality in LLMs, and presents methods that intentionally shape LLM personality. The aim of this work is to inform a shift away from the currently unpredictable properties of LLM-generated language and toward desirable, safe, and predictable LLM behavior.
However, ethical considerations merit further attention; we highlight three most related to the contributions of this paper. Personalized LLM persuasion: Aligning personalities of agents and users can make the agents more effective at encouraging and supporting behaviors <cit.>. The same personality traits that contribute to persuasiveness and influence could be used to encourage undesirable behaviors; for instance, personality alignment has been shown to increase the effectiveness of real-life persuasive communication <cit.>. Given the broad availability of LLMs, the possibility of using them to persuade individuals, groups, and even society at large must be taken seriously. Since persuasive techniques are already ubiquitous in society, we believe that the best way to address the risks of persuasive LLM personalities is to enable structured and scientifically-backed LLM personality measurement, analysis, and modifications, such as with the methods our work presents. Anthropomorphized AI: Personalization of conversational agents has documented benefits <cit.>, but there is a growing concern about harms posed by the anthropomorphization of AI. Recent research suggests that anthropomorphizing AI agents may be harmful to users by threatening their identity, creating data privacy concerns, and undermining their well-being <cit.>. Our work establishes, beyond qualitative probing explorations, the unexpected ability of LLMs not only to appear anthropomorphic, but also to respond to psychometric tests in ways consistent with human responses, thanks to the vast amounts of their human language training data. The presented methods can be used in future responsible investigation of anthropomorphized AI. Detection of incorrect LLM information: It is well established that LLMs can generate convincing but incorrect responses and content <cit.>. One of the methods used to determine if a piece of text about a world fact is generated by an LLM (and hence might need to be vetted) is to use the predictable traits, lack of human-like personality, and linguistic features in the LLM-generated language <cit.>. However, with personality shaping, that method may be rendered ineffective, thereby making it easier for adversaries to use LLMs to generate misleading content. The solution for this problem, while out of scope of this work, is related to solving the broader alignment and grounding of LLMs—an area that should continue to be explored further in industry and academia. § CONCLUSION The perception of synthetic “personality" in LLM outputs is well-established, but personality as a complex psychosocial phenomenon has not yet been rigorously quantified and validated in the LLM research. Proper quantification and validation is needed to verifiably steer LLM-based interactions toward safer and more predictable behavior. This work has presented a comprehensive quantitative analysis of personality traits exhibited in text generated by widely-used LLMs by administering validated psychometric surveys. We have shown conclusively that synthetic levels of personality measured via LLM-simulated psychometric test responses and LLM-generated text demonstrate reliability and construct validity for larger and instruction fine-tuned models. We have also presented methods for shaping LLM personality along desired dimensions to resemble specific personality profiles and discussed the ethical implications of such engineering of LLM personalities. 
§ ACKNOWLEDGEMENTS We would like to express our sincere appreciation to several individuals who contributed to the development of this research paper. We are grateful to Lucas Dixon, Douglas Eck, and Kathy Meier-Hellstern for feedback on early versions of this paper. We would also like to thank David Stillwell for providing access to the Apply Magic Sauce API used in this study, which played a vital role in the generated text analysis. Additionally, we extend our gratitude to Jason Rentfrow and Neda Safaee-Rad for their valuable advice on the personality-related aspects of the paper. § DISTRIBUTIONS OF LLM-SIMULATED PERSONALITY TEST SCORES § ADJECTIVES § WORD CLOUDS FROM THE TEXT GENERATED BY PERSONALITY-PROMPTED FLAN-PALM 540B Word clouds showing some of the highest frequency words appearing in the social media updates text generated by the Flan-PaLM 540B model when prompted to have high/low traits for a specific dimension.
http://arxiv.org/abs/2307.03295v1
20230706211020
Lensing in the Blue II: Estimating the Sensitivity of Stratospheric Balloons to Weak Gravitational Lensing
[ "Jacqueline E. McCleary", "Spencer W. Everett", "Mohamed M. Shaaban", "Ajay S. Gill", "Georgios N. Vassilakis", "Eric M. Huff", "Richard J. Massey", "Steven J. Benton", "Anthony M. Brown", "Paul Clark", "Bradley Holder", "Aurelien A. Fraisse", "Mathilde Jauzac", "William C. Jones", "David Lagattuta", "Jason S. -Y. Leung", "Lun Li", "Thuy Vy T. Luu", "Johanna M. Nagy", "C. Barth Netterfield", "Emaad Paracha", "Susan F. Redmond", "Jason D. Rhodes", "J\\''urgen Schmoll", "Ellen Sirks", "Sut Ieng Tam" ]
astro-ph.IM
[ "astro-ph.IM", "astro-ph.GA" ]
0000-0002-9883-7460]Jacqueline E. McCleary Department of Physics, Northeastern University, 360 Huntington Ave, Boston, MA Jacqueline E. McCleary j.mccleary@northeastern.edu 0000-0002-3745-2882]Spencer W. Everett 0000-0002-7600-3190]Mohamed M. Shaaban 0000-0002-3937-4662]Ajay S. Gill 0009-0006-2684-2961]Georgios N. Vassilakis Department of Physics, Northeastern University, 360 Huntington Ave, Boston, MA 0000-0002-9378-3424]Eric M. Huff 0000-0002-6085-3780]Richard J. Massey 0000-0002-4214-9298]Steven J. Benton 0000-0001-5101-7302]Emaad Paracha 0000-0002-9618-4371]Susan F. Redmond 0000-0002-7542-0355]Ellen Sirks Academia Sinica Institute of Astronomy and Astrophysics (ASIAA), No. 1, Sec. 4, Roosevelt Road, Taipei 10617, Taiwan McCleary et al. The Superpressure Balloon-borne Imaging Telescope () is a diffraction-limited, wide-field, 0.5 m, near-infrared to near-ultraviolet observatory designed to exploit the stratosphere's space-like conditions. 's 2023 science flight will deliver deep, blue imaging of galaxy clusters for gravitational lensing analysis. In preparation, we have developed a weak lensing measurement pipeline with modern algorithms for PSF characterization, shape measurement, and shear calibration. We validate our pipeline and forecast survey properties with simulated galaxy cluster observations in 's near-UV and blue bandpasses. We predict imaging depth, galaxy number (source) density, and redshift distribution for observations in 's three bluest filters; the effect of lensing sample selections is also considered. We find that in three hours of on-sky integration, can attain a depth of b = 26 mag and a total source density exceeding 40 galaxies per square arcminute. Even with the application of lensing-analysis catalog selections, we find b-band source densities between 25 and 30 galaxies per square arcminute with a median redshift of z=1.1. Our analysis confirms 's capability for weak gravitational lensing measurements in the blue. § INTRODUCTION The abundance of galaxy clusters as a function of redshift depends sensitively upon both the geometry of the universe <cit.> and the ongoing mechanism of structure formation via gravitational collapse <cit.>. Cluster number counts provide a statistically significant constraint on cosmological parameters, and as the largest particle colliders in the Universe, galaxy clusters themselves are proving grounds for alternative models of dark matter <cit.>. Because most of the mass in a cluster is invisible dark matter, a major challenge confronting cluster cosmology is the difficulty of measuring their masses. The most direct method takes advantage of clusters' weak gravitational lensing signal: the small but coherent magnification of background galaxy fluxes and observed distortion of background galaxy shapes. High-quality weak gravitational lensing studies illuminate the relationship between the true masses of galaxy clusters and their observable gas and stars. In this context, our collaboration will deploy the Superpressure Balloon-borne Imaging Telescope (): a stratospheric imaging system that will deliver space-quality imaging from the near-ultraviolet to the near-infrared. has been optimized for measurement of cluster gravitational lensing: the telescope has a 15× 23 field of view to enable efficient measurements of the weak lensing signal of galaxy clusters at z ≥ 0.05, and provides stable, near-diffraction-limited imaging for well-measured galaxy shapes. 
Floating above more than 97% of the Earth's atmosphere, the telescope experiences nearly perfect transmission from 280 nm to 900 nm. The stratosphere also offers low sky backgrounds: <cit.> show that experiences 23.6–25.5 mag arcsec^-2 in its b filter (365 nm – 575 nm), up to three mag arcsec^-2 fainter than the darkest ground-based sites with 22.7 mag arcsec^-2. While most surveys measure weak gravitational lensing at red wavelengths, the dark sky background and diffraction-limited optics in the stratosphere uniquely mean that lensing measurements are more efficient in the blue <cit.>. Beyond weak gravitational lensing measurements, 's deep, blue imaging enables a range of scientific investigations. For example, its near-UV (300-400 nm) photometry spans the Balmer and 4000 Å breaks used to fit galaxy templates for photometric redshift estimation; including NUV photometry can halve uncertainties on the resulting photometric redshifts <cit.>. To prepare for 's 2023 science flight, we have created a suite of simulated galaxy cluster observations with realistic galaxy flux, size, and redshift distributions. PSF models are informed by previous test flights, and background galaxies are gravitationally lensed by foreground cluster halos. We have then developed a weak lensing analysis pipeline built from modern, publicly-available tools like PIFF for PSF characterization, NGMix for galaxy shape measurement, and Metacalibration for galaxy shear calibration. At a basic level, processing the simulated observations validates our pipeline performance. More interestingly, this procedure enables us to flow down science requirements into an efficient observing strategy. Galaxy clusters have highly localized weak lensing signal, which makes galaxy number density and average redshift the primary figures of merit for cluster surveys. However, the total number density of galaxies observed is less important than the number that survive cuts on redshift, signal-to-noise, and size for weak lensing analysis. In this paper, we will forecast imaging depths, source density, and redshift distributions for stratospheric observations in 's near-UV and blue bandpasses. This paper is organized as follows. We summarize the platform in Section <ref>, and lensing theory in Section <ref>. We describe our galaxy shape measurement pipeline in Section <ref>, and our mock observations in Section <ref>. We present our results in Section <ref>, provide additional context in Section <ref>, and conclude with Section <ref>. § THE OBSERVING PLATFORM §.§ Instrument is a 0.5-m mirror telescope that exploits the super-pressure balloon capabilities provided by the National Aeronautics and Space Administration (NASA), which offers mid-latitude long-duration balloon flights up to 100 days. has been developed and iteratively improved through four one-night commissioning flights. Successful recovery after each flight enabled efficient, closed-loop engineering cycles. A complete description of the resulting mechanical, thermal, control systems, and software architecture appears in <cit.>, <cit.>, and <cit.>. The platform consists of a gondola pointing system and an optical assembly that work together to achieve 0.05 focal plane stability via three successive pointing and stabilization regimes: coarse target acquisition to within 0.5, fine telescope stabilization at the 0.5 level, and finally 0.05 image stabilization at the focal plane. 
During the most recent test flight in September 2019, SuperBIT maintained telescope stability of 0.3 arcsec (0.5 arcsec) over a 5-minute (30-minute) exposure, and image stability of 0.046 arcsec (0.048 arcsec) over a 5-minute (30-minute) exposure. This enabled the first measurements of gravitational lensing from the stratosphere <cit.>, using images of Abell 2218 (Figure <ref>), and defined a fiducial exposure time of 5 minutes for future observations <cit.>.

Table <ref>. SuperBIT bandpasses and sky backgrounds:
Filter name   Wavelength range (nm)   Pivot wavelength (nm)   Sky brightness (e^- s^-1 pix^-1)
u             300–435                 395                     0.029
b             365–575                 476                     0.052
g             515–705                 597                     0.052
r             570–720                 640                     0.030
nir           706–1100                814                     0.064
lum           370–710                 522                     0.084
shape         530–830                 650                     0.15
Summary of the 2023 flight filters and the expected sky brightness in each. The shape filter is deprecated and is included in this analysis for comparison purposes.

Because of the fast development time scales of balloon-borne missions, SuperBIT has had the ability to upgrade its core technologies between flights. To wit, the 2023 flight camera is a marked improvement over the CCD flown in 2019. The 2023 science camera is a 9600×6422 pixel Sony IMX 455 CMOS detector with 3.76 μm (0.141 arcsec) square pixels. At the operating temperature of -10 °C, it has low read noise (rms ∼1.7 e^-/pixel) and low dark current (∼0.0022 e^-/s/pixel). It is sensitive from 300 to 900 nm; its quantum efficiency (QE) and optical throughput are presented in <cit.>. Its filter wheel currently includes five broadband filters (u, b, g, r, nir) plus one very broad filter (lum) designed to collect as much light as possible (Table <ref> and Figure <ref>). We also show the shape filter, which is very similar in range to the Euclid VIS filter and was at one point designated for galaxy shape measurements (hence the name). §.§ Survey and expected data During its planned, up to 100 day science flight from NASA's Long Duration Balloon facility in Wanaka, New Zealand (scheduled for April 2023 at the time of writing), SuperBIT will be able to observe almost anywhere in the Southern hemisphere and up to 20° North. Target selection will depend on launch date and balloon path, but targets will be automatically drawn from a list of galaxy clusters at redshift z < 0.5. These include well-studied clusters from the Hubble Frontier Fields, CLASH, RELICS, LoCuSS and COSMOS surveys that are required for calibration, plus merging clusters identified principally via bimodality in Chandra X-ray imaging. The clusters have abundant ancillary data: all with X-ray imaging, most with infrared (IR) and radio imaging, and many with substantial investments of ground-based spectroscopy. To these data, SuperBIT will add deep, wide-field near-UV and optical imaging with an angular resolution of 0.3 arcsec. With a minimal sample of 45 clusters and assuming a per-cluster scatter in mass of 20% <cit.>, these data can yield weak lensing masses with an ensemble M_200c fractional uncertainty of 0.2/√(45) = 0.03. Based on SuperBIT's 2018 and 2019 test flights, calculations in <cit.>, and the results in this work, each cluster target will be observed for 3 hours (36 × 300 second exposures) in b, plus shorter integrations in the u and g bands. § GRAVITATIONAL LENSING AND METACALIBRATION §.§ Weak gravitational lensing formalism Gravitational lenses like galaxy clusters introduce an isotropic magnification of background galaxies and percent-level distortions in their shapes. The magnification of galaxy images is described by the convergence κ, a scalar quantity proportional to the Laplacian of the gravitational potential of the lens projected along the line of sight.
The convergence κ can be related to the surface mass density of the galaxy cluster, Σ, as
κ ≡ (1/2) ∇^2 Ψ(θ) = Σ / Σ_crit ,    Σ_crit = [c^2 / (4π G)] [D_s / (D_l D_ls)] ,
where the critical surface mass density Σ_crit of the lens depends on the angular diameter distances to the background source galaxy, D_s, to the lens, D_l, and between the lens and the source, D_ls. The distortion of galaxy images introduced by gravitational lenses is represented as a complex shear γ:
γ = γ_1 + iγ_2 = |γ| e^{2iϕ} .
Distortion along the real (x/y) axes is described by the γ_1 component of the shear; the γ_2 component describes the galaxy image distortion along axes rotated through π/4 radians. The shear γ can be related to the cluster gravitational potential Ψ as
γ(θ) = D Ψ ,    D = (∂_1^2 − ∂_2^2)/2 + i ∂_1 ∂_2 .
Observations of gravitationally lensed galaxies actually return the reduced shear g,
g = γ / (1 − κ) = g_1 + i g_2 ,
where the variables g_1 and g_2 in Equation <ref> are the polarization states of background galaxies with reduced shear g. Irrespective of the presence of a gravitational lens, the shapes of galaxies measured on an image can be characterized by an ellipticity e:
e = e_1 + i e_2 ,    e_1 = e cos(2θ) ,    e_2 = e sin(2θ) ,    e = (a^2 − b^2)/(a^2 + b^2) ,
where a and b are the major and minor axes of the galaxy image ellipse. The shear γ can be extracted from galaxy ellipticities e in the weak lensing regime, where the distortion introduced by the lens is much smaller than the galaxy images themselves, i.e., where κ, γ ≪ 1. In that case, in the absence of intrinsic alignments and for source galaxies at the same redshift,
γ ≃ g ≃ ⟨e⟩ / (2ℛ) ,
where the factor ℛ = 1 − σ_e^2 encodes the shear responsivity of the ellipticity distribution. Because the lensing potential induces curl-free distortions in galaxy images, we estimate the reduced shear about a point on the sky with the tangential ellipticity:
g_tan = −(g_1 cos(2ϕ) + g_2 sin(2ϕ)) ,
where ϕ is the azimuthal angle from the fiducial center of mass to the galaxy. Because it is a curl-free statistic, in analogy with electromagnetism, Equation <ref> is sometimes called the E-mode signal. A divergence-free statistic, the B-mode or cross shear, is obtained by rotating Equation <ref> through π/4 radians:
g_× = g_2 cos(2ϕ) − g_1 sin(2ϕ) .[Note that some authors also use the opposite sign convention, g_× = g_1 sin(2ϕ) − g_2 cos(2ϕ).]
Galaxy shapes are also convolved with the point spread function (PSF) of the telescope and atmosphere. PSFs tend to circularize galaxy shapes, diluting the real weak lensing signal, while their anisotropic components introduce ellipticities into the galaxy shapes that mimic weak lensing shear. Accurate shear inference thus requires that the PSF be modeled and deconvolved from galaxy shape measurements. Readers interested in a comprehensive review of galaxy cluster weak gravitational lensing, including considerations of the PSF, may consult <cit.>. §.§ Metacalibration In real measurements, the measured galaxy shears g_1, g_2 are biased estimators of the underlying shear distribution and need to be converted into an unbiased estimator of the weak lensing shear g_tan. This is generally accomplished by dividing each galaxy's ellipticity by an appropriate "shear responsivity factor" R, which characterizes the response of the galaxy shape estimator ĝ to an applied shear γ <cit.>:
⟨ĝ⟩ = ⟨ĝ⟩|_{γ=0} + ⟨Rγ⟩ + O(γ^2) ≈ ⟨Rγ⟩ ,
where, in the absence of an external shear field, the average ellipticity should be zero. Image simulations are often used to obtain the shear calibration (e.g.
), but face the usual difficulties in replicating all of the effects that affect real images. Instead, we use the Metacalibration algorithm, which calibrates shear estimators from the galaxy image data itself, without requiring significant prior information about galaxy properties. Metacalibration's data-driven approach is particularly valuable in a new survey like SuperBIT. Metacalibration introduces an artificial shear to images and calculates how the shear estimator responds to that applied shear. More specifically, the original galaxy image is deconvolved from the PSF and then sheared by some amount γ along each ellipticity component g_i. The sheared image is reconvolved with a function slightly larger than the original PSF to suppress the noise amplified by the deconvolution process, and the measurement of ĝ is repeated. The shear responsivity R is then obtained through the finite-difference derivative
R_{k,l} = (ĝ_k^+ − ĝ_k^−) / Δγ_l ,
where ĝ_k^+ is the measurement of shape component k made on an image sheared by +γ_l, and ĝ_k^− is the measurement made on an image sheared by −γ_l; the derivative is evaluated for every object in the catalog. The responsivities can be computed for every galaxy in an observation catalog, but they are very noisy since the ellipticity estimators ĝ are themselves noisy. So in practice, a shear estimate is obtained by dividing the galaxy ellipticity estimator by the mean responsivity over the entire galaxy sample:
⟨γ̂⟩ = ⟨R⟩^{-1} ⟨ĝ⟩ .
Estimation of weak lensing shear commonly requires selection cuts on quantities like galaxy size and signal-to-noise ratio. The probability that a galaxy passes selection cuts changes after the application of an artificial shear. The responsivity then includes both the shear response and the effect of sample selections. We continue to follow the formalism of <cit.> and break up the responsivity into two components:
⟨R⟩ = ⟨R_γ⟩ + ⟨R_S⟩ ,
where brackets denote the average over galaxies k = 1...N_gals, ⟨R_γ⟩ captures the ensemble response of galaxy shapes to an applied shear, and ⟨R_S⟩ represents the response of the selections to an applied shear. § SHAPE MEASUREMENT PIPELINE In anticipation of SuperBIT's 2023 science flight, we have developed a galaxy shape measurement and weak lensing analysis pipeline that employs state-of-the-art algorithms, such as NGMix for optimal estimation of galaxy shapes <cit.> and Metacalibration to correct for multiplicative shear bias <cit.> (see also Section <ref>). We provide an overview of our pipeline below; upon acceptance of the paper, we intend to make the pipeline public. The pipeline is divided into three modules: creation of the input files for galaxy shape fitting; galaxy shape fitting and shear bias correction; and calculation of the galaxy clusters' tangential and cross shear profiles. For ease of use, the pipeline includes code to auto-generate the configuration files needed to run it from beginning to end, based only on a few user inputs. §.§ Creation of shape measurement input files The input for NGMix and Metacalibration is a multi-epoch data structure (MEDS[<https://github.com/esheldon/meds/wiki/MEDS-Format>]): a kind of FITS binary table with an entry for every object detected in an observation. Each object's MEDS entry contains the following: a postage-stamp cutout of the object, a rendering of the point spread function (PSF) at the location of the object, a weight map, a segmentation map, and a bad pixel mask for every exposure in which the object was detected <cit.>. In our pipeline, MEDS files for SuperBIT observations are created with the first of the three modules.
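To make the finite-difference responsivity above concrete, the sketch below averages per-object responses into the ensemble 2×2 matrix ⟨R_γ⟩ and applies it to the mean ellipticity. It is a hedged illustration rather than our pipeline code: the field names (g1_1p, g1_1m, and so on) follow a common Metacalibration bookkeeping convention and are assumptions here.

import numpy as np

def mean_shear_response(cat, dgamma=0.01):
    """Ensemble <R_gamma>; cat maps field names to per-object arrays.
    The two-sided difference spans Delta gamma = 2 * dgamma."""
    R = np.empty((2, 2))
    R[0, 0] = np.mean(cat["g1_1p"] - cat["g1_1m"]) / (2.0 * dgamma)
    R[0, 1] = np.mean(cat["g1_2p"] - cat["g1_2m"]) / (2.0 * dgamma)
    R[1, 0] = np.mean(cat["g2_1p"] - cat["g2_1m"]) / (2.0 * dgamma)
    R[1, 1] = np.mean(cat["g2_2p"] - cat["g2_2m"]) / (2.0 * dgamma)
    return R

# Calibrated mean shear, following <gamma-hat> = <R>^-1 <g-hat>:
# g_mean = np.array([cat["g1"].mean(), cat["g2"].mean()])
# gamma_hat = np.linalg.solve(mean_shear_response(cat), g_mean)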
Much of the “standard operating procedure” for astronomical imaging is implemented in ; we detail the particulars here for reference in future analyses. §.§.§ Detection catalog Image data supplied to is assumed to be calibrated CMOS or CCD imaging data for which bias subtraction and flat-fielding have already been performed. Exposure weight maps and bad pixel masks are required as well. The AstrOmatic tool SWarp is used to combine single-epoch exposures into a deep detection image from which the master observation catalog—the basis of the MEDS file—is obtained with SExtractor <cit.>. To maximize the number of sources detected, we set a relatively low detection threshold of 1.5 σ. This necessarily generates spurious detections that do not correspond to galaxies in any exposure. Rather than cut these items out of the MEDS file, which risks introducing an uncontrolled shear selection bias, spurious sources with no cutouts are flagged to be skipped during shape fitting. Segmentation maps and catalogs for single-epoch exposures are also generated with SExtractor; segmentation maps go into the MEDS, and single-epoch catalogs are used to identify stars for PSF model fitting. §.§.§ PSF estimation As discussed in Section <ref>, accurate shear inference hinges upon the successful deconvolution of the observation's PSF and galaxy shape measurements. Before their light passes through the atmosphere and telescope, stars are effectively point sources, so the shape and size of their surface brightness profiles axiomatically define the PSF at that location. Using stars as fixed points, the PSF can be interpolated across the rest of the image. Star catalogs for PSF modeling are generated with simple selections to the single-exposure detection catalogs based on SExtractor , a minimum signal-to-noise, and a magnitude range. A sample star catalog is highlighted in Figure <ref>. Should greater sample purity be required, we have incorporated into an option to cross-reference candidate stars against a reference catalog and also added the capability to query the Gaia star database <cit.> on the fly. Though the Gaia catalog is relatively shallow, the high purity of the Gaia catalog avoids the problem of star-galaxy confusion. The use of the Gaia catalog for PSF fitting is also considered in <cit.>. We model PSFs with the recently introduced PIFF software package[<https://rmjarvis.github.io/Piff/_build/html/overview.html>]. Like most PSF fitters, PIFF takes an input catalog of stars, fits their surface brightness profiles with a user-specified model, interpolates the PSF parameters across the FOV following some schema, and saves the resulting description of the observation's PSF to file. A notable feature of PIFF is that PSF models are expressed in sky coordinates, as opposed to the pixel coordinates commonly used in other PSF modeling software. Because high-frequency components of the PSF, e.g., astrometric distortion, vary more smoothly across the detector FOV when considered in sky coordinates, PIFF avoids the “size bias” (mismatch between the real and model PSF size) that can affect other PSF fitting software <cit.>. Following the DES Y3 approach, we use the model, which treats the PSF profile as a two-dimensional grid of points smoothed by a Lanczos kernel with n=3. The total number of free model parameters is then equal to the number of pixels in the grid. 
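Returning briefly to the star-selection step described above, the optional Gaia cross-match can be illustrated with a short astropy sketch. The one-arcsecond matching tolerance and the way coordinates are passed in are assumptions for this example, not the pipeline's actual settings.

import astropy.units as u
from astropy.coordinates import SkyCoord

def gaia_star_mask(cand_ra_deg, cand_dec_deg, gaia_ra_deg, gaia_dec_deg,
                   tol_arcsec=1.0):
    """Boolean mask of candidate stars that have a Gaia counterpart."""
    cand = SkyCoord(ra=cand_ra_deg * u.deg, dec=cand_dec_deg * u.deg)
    gaia = SkyCoord(ra=gaia_ra_deg * u.deg, dec=gaia_dec_deg * u.deg)
    _idx, sep2d, _ = cand.match_to_catalog_sky(gaia)
    return sep2d < tol_arcsec * u.arcsec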
We also follow the DES Y3 approach to interpolate the PSF model across the FOV by using the scheme, which solves for the model parameters (pixel fluxes) in terms of the interpolation coefficients. PSF model residuals are quantified with the ρ statistics introduced in <cit.> and expanded in <cit.>. The ρ statistics below summarize the spatial correlations of size and ellipticity residuals between the real (star) and model PSFs; large values imply a systematic error in the model. ρ_1(θ) ≡⟨δ e^*_ PSF(x) δ e_ PSF(x+θ) ⟩ ρ_2(θ) ≡⟨ e^*_ PSF(x) δ e_ PSF(x+θ) ⟩ ρ_3(θ) ≡⟨( e^*_ PSFδ T_ PSF/T_ PSF)(x) (e_ PSFδ T_ PSF/T_ PSF)(x+θ) ⟩ ρ_4(θ) ≡⟨δ e^*_ PSF(x) (e_ PSFδ T_ PSF/T_ PSF)(x+θ) ⟩ ρ_5(θ) ≡⟨ e^*_ PSF(x) (e_ PSFδ T_ PSF/T_ PSF)(x+θ) ⟩ Here e_ PSF is the ellipticity of the real PSF, i.e., the star ellipticity; T_ PSF is the size of the real PSF; δ e_ PSF is the difference between the ellipticity of the real and model PSFs at position x; and δ T_ PSF is the difference between the sizes of the real and model PSFs at position x. Brackets denote averages over all pairs within a separation θ, and asterisks denote complex conjugates. An example of ρ statistics plotted as a function of distance between neighboring stars is shown in Figure <ref>. We will also compute the two-point spatial correlations of star and galaxy ellipticities: C_i = ⟨ e_i(x) × e_i( x+θ)⟩, i={1,2} where e_i is the ith ellipticity component of a PSF-corrected star or galaxy at position x. The correlations C_1/2 of galaxy-galaxy pairs should have a relatively high amplitude, reflecting the correlated shear introduced by the galaxy cluster. However, the C_1/2 functions should vanish when evaluated over star-galaxy pairs, as there should be no correlation between the shapes of circularized stars and PSF-corrected galaxy shapes <cit.>. §.§.§ Multi-epoch data structure With object cataloging and PSF modeling complete, an instance of the MEDS class is created. For each object in the detection catalog, an entry in the MEDS is made to hold a binary table with postage stamp cutouts from the single-epoch exposures, a PSF model rendering, weights, masks, and segmentation maps. The MEDS is also populated with objects' celestial and image coordinates, original catalog ID number, and WCS information. §.§ Galaxy shape measurement We measure galaxy ellipticities using the NGMix[<https://github.com/esheldon/ngmix>] package, which implements Gaussian mixture models to recover the shear from 2D images with good accuracy for even very low S/N galaxies. Rather than a single-point estimate, NGMix returns an estimator of the shape from an ensemble of measurements of the galaxy – generally every epoch, in every filter in which the galaxy was observed. We implement NGMix with a Python wrapper script that creates an instance of the NGMixMEDS class and populates it with observation information for all sources in the supplied MEDS file. Because of the high source density of observations, many of the postage stamps in the MEDS contain not one but two sources: the galaxy of interest and an interloping star or galaxy. Left unmasked, the presence of interlopers introduces a large scatter in the final tangential shear measurements, as NGMix treats both sources as a single galaxy. Following the solution used in DES SV and Y1, we mitigate interlopers using so-called überseg masks. These masks are generated using the detection (coadd) image's SExtractor segmentation maps, projected onto the plane of single-epoch exposures. 
Pixels in the MEDS weight cutouts are set to zero if they are more closely associated with an interloping object than with the galaxy of interest <cit.>. §.§ Weak lensing shear profile calculation Tangential and cross shear profiles of galaxy clusters are produced in the final module of our pipeline. At this stage, redshift information is added to the galaxy shape-fit catalog, selection cuts (including redshift selection) are applied, and Metacalibration responsivities are calculated and applied to galaxy shapes. The (g_tan, g_×) shears are computed from (g_1, g_2) and then averaged in radial bins about the cluster center. Further detail is provided below. §.§.§ Creation of galaxy shear catalog A top-level catalog with galaxy shape parameters, responsivity components, detection parameters, and redshifts is generated by joining the SExtractor and NGMix catalogs on sky coordinates (α, δ) and then matching to a third catalog with redshift information. The galaxy shear catalog is then built from galaxies meeting the following criteria:
* 10 < S/N < 1000, where the signal-to-noise measure is taken from the NGMix fits
* Galaxy size (more precisely, area) 0 < T < 10, where T is the NGMix size parameter in units of arcsec^2
* Ratio of galaxy size to PSF size T/T_PSF > 1.0
* Galaxy redshift z_gal greater than the cluster redshift z_cl
When appropriate, i.e., for a nearly round PSF, we base the size and signal-to-noise cuts on their "roundified" counterparts. These selections are based on those in the DES analyses <cit.>. Shear and selection responsivities are calculated from the NGMix shape-fit parameters to produce responsivity-corrected galaxy shears (g_1, g_2). The selection of background galaxies through redshift cuts is included within the calculation of the selection responsivity. Galaxies are weighted by their shape-fit covariances σ^2_g1 and σ^2_g2 and a shape noise of σ_SN = 0.26 based on our own fits to COSMOS galaxies:
w = 1 / (σ^2_SN + σ^2_g1 + σ^2_g2) .
§.§.§ Shear profile calculation Response-corrected (g_1, g_2) moments are transformed into tangential and cross ellipticities (g_tan, g_×) using the galaxy image coordinates (x_i, y_i) and the user-specified location of the galaxy cluster center (x_c, y_c):
g_tan = −(g_1 cos(2ϕ) + g_2 sin(2ϕ)) ,
g_× = g_1 sin(2ϕ) − g_2 cos(2ϕ) ,
ϕ = arctan[(y − y_c)/(x − x_c)] ,
where ϕ is the azimuthal angle of the galaxy about the cluster center. A class from an external Python package is used to compute weighted averages of (g_tan, g_×) in radial bins about the cluster center. The final outputs are a shear profile catalog with the averaged (g_tan, g_×) and a plot of the cluster's cross and tangential shear profiles. Two examples from simulated observations are shown in Figure <ref>. §.§ Shear bias estimator While we do not attempt shear calibration in this analysis, we have developed an estimator for shear bias tailored to cluster tangential shear profiles. It is included in the pipeline to support future efforts. Borrowing the language of large cosmological surveys, we express the difference between input (simulated) and output (measured) tangential shears as a shear bias α, which we quantify with a maximum likelihood estimator α̂. Each galaxy's measured shear g_tan is considered a random sample of the true halo shear g_true at the galaxy's position. The joint probability distribution (likelihood) ℒ of the data then follows a multivariate Gaussian in which the mean shear ⟨g_tan⟩ converges to α g_true in a radial bin, where
α = ⟨g_tan⟩ / ⟨g_true⟩ .
If the measurements g_tan are unbiased measurements of the true shear g_true, then α ≡ 1 and ⟨g_tan⟩ = g_true. Any α ≠ 1 indicates a biased measurement.
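A minimal sketch of the tangential/cross decomposition and radial binning used in the shear-profile calculation above is given below. It omits the inverse-variance weighting and responsivity bookkeeping of the actual module and is intended only to make the geometry concrete; the function names are ours, not the pipeline's.

import numpy as np

def tan_cross_shear(x, y, g1, g2, xc, yc):
    phi = np.arctan2(y - yc, x - xc)        # azimuthal angle about the center
    g_tan = -(g1 * np.cos(2 * phi) + g2 * np.sin(2 * phi))
    g_x = g1 * np.sin(2 * phi) - g2 * np.cos(2 * phi)
    return g_tan, g_x

def radial_profile(x, y, values, xc, yc, r_edges):
    """Unweighted mean of `values` in annuli defined by r_edges (pixels)."""
    r = np.hypot(x - xc, y - yc)
    idx = np.digitize(r, r_edges) - 1
    return np.array([values[idx == i].mean()
                     for i in range(len(r_edges) - 1)])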
To obtain an optimal estimator α̂ for the tangential shear bias α, we express the log-likelihood as
log ℒ = −(1/2) [𝐠_tan − α 𝐠_true] 𝐂^{-1} [𝐠_tan − α 𝐠_true]^T .
The measurement uncertainties of the data 𝐠_tan are expressed through the covariance 𝐂. Note that matrix quantities are written in boldface. Differentiating Equation <ref> with respect to α, setting the result to zero, and then solving for α, we obtain the maximum likelihood estimator for the shear bias:
α̂ = (𝐠_true^T 𝐂^{-1} 𝐠_tan) / (𝐠_true^T 𝐂^{-1} 𝐠_true) .
The uncertainty on α̂ is given by the Cramér–Rao bound:
σ^2_α̂ = 1 / (𝐠_true^T 𝐂^{-1} 𝐠_true) .
An unbiased cluster tangential shear measurement has α̂ = 1 ± σ_α̂. The goal for shear calibration will be a shear bias consistent with unity within the mass uncertainty of the full cluster sample (2–3%). As the value of α̂ calculated from a large number of simulations is a useful metric for shear calibration, the pipeline also contains tools for the calculation of the average α̂. An example setup is shown in Figure <ref>. The shear calibration analysis will be presented in S. Everett et al. (in preparation). § SIMULATED GALAXY CLUSTER OBSERVATIONS To plan observations and calibrate the analysis pipeline for SuperBIT's science flight, we have used GalSim <cit.> to produce mock observations of galaxy clusters. These simulate 3-hour observations in each of three SuperBIT bandpasses, divided into 36 individual, dithered exposures of 300 seconds each. The central region of a full 3-hour observation of one simulated cluster is shown in Figure <ref>. For each cluster, we create 30 mock sets of images with independent distributions of stars, cluster member galaxies, and field galaxies both in front of and behind the galaxy cluster. We store a truth catalog containing the objects' positions, sizes, fluxes, redshifts, and applied lensing distortions (for galaxies behind the cluster). For the distortion calculations, we set Ω_M = 0.3 and Ω_Λ = 0.7. Simulated clusters have mass M_200c = 4.1 × 10^14 M_⊙ h^{-1} (the mean mass of clusters in the target list) and three redshifts (z = 0.059, 0.3, 0.45). Cluster mass distributions are modeled with Navarro, Frenk, and White (1996; NFW) density profiles:
ρ(r) = ρ_0 / [ (r/R_S) (1 + r/R_S)^2 ] .
§.§ Point spread function and stars The PSF is well modeled with two components (jitter + optics, or `optics-on') that combine the residual telescope jitter measured during test flights with spherical aberrations for the optical train derived with ray-tracing software. However, we base survey forecasts on a Gaussian (`optics-off') approximation to the PSF because of a temporary limitation in the NGMix method that we use for shape measurement. NGMix does not currently include templates for diffraction-limited PSFs and tends to over-estimate the PSF size T_PSF by ∼50%. Given that almost all weak lensing analyses select galaxies based on their size relative to the PSF size (T/T_PSF), this artificially decreases the source density. An extension to the NGMix template set will be presented in the shear calibration paper by S. Everett et al. Meanwhile, we implement Gaussian approximations to the jitter+optics PSF, with FWHM of 0.278 arcsec in u, 0.315 arcsec in b, 0.333 arcsec in lum, and 0.37 arcsec in shape. These values are the combination of the 0.05 arcsec jitter FWHM and the FWHM obtained with ray-tracing models of the optical train in each bandpass. The `optics-on' and `optics-off' versions of the b-band PSF are compared in Figure <ref>.
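Stepping back to the cluster lens model introduced at the start of this section: the reduced shear and magnification applied to each background galaxy can be obtained from an NFW halo object, presumably GalSim's NFWHalo class given the GalSim-based framework described in Section <ref>. The sketch below uses the fiducial mass, concentration, and cosmology quoted above; the source position and redshift are arbitrary example values.

import galsim

nfw = galsim.NFWHalo(mass=4.1e14,        # M_200c in M_sun / h
                     conc=4,             # halo concentration
                     redshift=0.3,       # one of the three cluster redshifts
                     omega_m=0.3, omega_lam=0.7)

pos = galsim.PositionD(35.0, -12.0)      # offset from halo center, in arcsec
g1, g2 = nfw.getShear(pos, z_s=1.1)      # reduced shear for a z = 1.1 source
mu = nfw.getMagnification(pos, z_s=1.1)  # lensing magnification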
We simulate the spatial clustering and magnitude distribution of foreground stars by sampling Gaia DR2 catalogs <cit.> at the RA and Dec coordinates of 52 of SuperBIT's target clusters. We convert Gaia G/G_BP fluxes to AB fluxes, then choose one of these star fields at random for each realization of mock images. Because the star fields span a range of galactic latitudes, this effectively marginalizes over stellar number density when predicting shear biases. §.§ Galaxies Our simulation input catalog is a hybrid of two different COSMOS catalogs. A full description of the procedure for generating the mock source galaxy catalog will appear in a paper by A. Gill et al. (in preparation); a high-level overview is presented here. The baseline is the UltraVISTA-DR2 region of the COSMOS 2015 catalog <cit.>, which contains 518,404 galaxies with high-quality redshifts spread out over 1.5 deg^2. The number density, redshift, and magnitude distributions of our simulated background source galaxies are drawn directly from COSMOS 2015. To convert COSMOS 2015 fluxes to their equivalent in SuperBIT bandpasses, we access the spectral energy distribution fits from the EL-COSMOS project <cit.>, convolve these with the wavelength-dependent OTA throughput, detector QE, and filter transmission curves, and finally integrate counts over the collecting area of the mirror (cf. Section <ref>). We add morphological information to COSMOS 2015 with a heuristic match in luminosity (m_C15) and redshift (z_C15) to galaxies in the GalSim COSMOS F814W < 25.2 catalog. In our simulations, galaxies are drawn as single-component Sérsic profiles with half-light radius R_1/2, index n, position angle ϕ, and major-to-minor axis ratio q. Parameter values are chosen with the following algorithm:
* In the best-case scenario of z_C15 < 5 and 18 < m_C15 < 25.2, a source is selected from the GalSim COSMOS catalog that best matches the COSMOS 2015 galaxy, and its shape parameters are assigned to the COSMOS 2015 galaxy.
* If z_C15 < 5 and 25.2 < m_C15 < 30, a source is selected from GalSim COSMOS that best matches the COSMOS 2015 galaxy redshift and is used to set the half-light radius. The Sérsic index n is selected from a uniform distribution U[0, 4]. The position angle ϕ is also chosen from a uniform distribution U[-2, 2] radians. The axis ratio q is selected from a uniform distribution U[0.1, 1].
* If z_C15 > 5 but 18 < m_C15 < 25.2, then n, q, and ϕ are chosen based on the closest match in m_F814W between GalSim COSMOS and COSMOS 2015. The half-light radius R_1/2 is randomly chosen from a uniform distribution U[5, 20] pixels (plate scale = 0.03 arcsec/pixel).
* All other z_C15 and m_C15 cases correspond to outliers with no equivalents in the GalSim COSMOS catalog. In this instance, all galaxy shape parameters are chosen from uniform distributions.
While GalSim does have ready-made galaxy catalogs available, their maximum depth of F814W = 25.2 would limit our ability to simulate deep observations. Moreover, the number of galaxies with photometric redshifts has increased since 2007 (the year of the original GalSim COSMOS catalog's release). These limitations motivated us to create our own galaxy catalog for simulations. §.§ Simulation procedure First, we initialize the random number generators for stars, source galaxies, cluster galaxies, noise, and dither offsets, passing any seeds set in the GalSim configuration file.
The blank exposure is represented with an instance of the GalSim object (GSObject) set to match the instrument properties of Section <ref>, and includes a model world coordinate system (WCS). The image is filled with the raw sky background derived in <cit.>; approximately 45 ADU for a 300-second exposure in the filter. The cluster lensing potential is represented with an instance of the class. The halo concentration is set to 4 in all simulations. For each source galaxy to be injected into the image, the following process is repeated. A galaxy entry is randomly drawn from the mock galaxy catalog and assigned some right ascension and declination on the observation. The galaxy's photometric redshift, shape parameters, and flux in the filter of choice are accessed from our mock galaxy catalog. The galaxy image is created as an instance of with shape parameters set to the catalog values. To convert the catalog flux from units of photoelectrons s^-1 to equivalent observed analog-to-digital units (ADU), we multiply the flux by the exposure time and the gain. The source galaxy object is sheared and magnified according to its redshift with the object, or if the source galaxy redshift is below the cluster redshift, the galaxy's magnification and distortion are set to 1 and 0 respectively. The galaxy image then convolved with the PSF model. For later reference, the galaxy position, lensing magnification, reduced shear moments, redshift, and stamp flux are passed to a truth catalog. Finally, the galaxy image is converted to a “stamp” GSObject and drawn onto the observation at the appropriate coordinates. We inject a fiducial number of 99 galaxies per square arcminute. Cluster galaxies are generated in much the same way as source galaxies, except that they are concentrated in the center of the observation and no lensing distortion is applied. The number of cluster galaxies (30) is set to approximately match the source density of bright cluster galaxies in the 2019 Abell 2218 observation. They are uniformly distributed in a circle of radius 200 pixels (28). A random offset is added of ± 50 pixels, about 7 per galaxy. Because the cluster galaxies are generally large and bright, the default GalSim COSMOS F814W < 23.5 sample catalog is sufficient for modeling cluster galaxy sizes and brightnesses. For recording in the truth catalog, they are assigned a redshift equal to the redshift in the class. We have an ensemble of catalogs containing star positions and brightness. These catalogs are made using the Gaia satellite observations of the galaxy clusters in 's planned target list. For each simulation, we select a catalog and draw the same number of stars as observed by Gaia, using their fluxes to accurately represent the stars' brightness, while the spatial density is also preserved. Star positions, however, are not specifically replicated. Pre-seeing stars are modeled as objects, with a flux randomly drawn from the selected cluster's Gaia catalog of real stars. The star model is convolved with the same PSF model as above, before itself being drawn into the observation. Unless otherwise specified in the configuration file, the total number of stars injected over the entire field of view matches the number of entries in the selected Gaia catalog. Once injection of all stars and background and cluster galaxies is complete, we add dark current to the image. The final step is application of the method, which adds Poisson noise to the image based on the pixel values (including read noise). 
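The image-construction steps described above (sky background, NFW lensing distortion, Sérsic source galaxies, PSF convolution, and noise) can be condensed into a short GalSim sketch. This is a minimal illustration rather than the pipeline code: the numerical values are placeholders, and the GalSim calls used here (NFWHalo, Sersic, Convolve, PoissonNoise) follow the public GalSim API and should be checked against the installed version.

```python
import galsim

rng = galsim.BaseDeviate(12345)
pixel_scale = 0.141        # arcsec/pixel; placeholder value
sky_level = 45.0           # ADU per pixel for a 300 s exposure (see above)

# Blank exposure with a flat sky background.
image = galsim.ImageF(512, 512, scale=pixel_scale)
image += sky_level

# Cluster lensing potential: NFW halo with concentration 4.
halo = galsim.NFWHalo(mass=4.1e14, conc=4.0, redshift=0.3,
                      omega_m=0.3, omega_lam=0.7)

# Gaussian approximation to the jitter+optics PSF.
psf = galsim.Gaussian(fwhm=0.315)

# One background source galaxy drawn from the mock catalog (placeholder values).
gal = galsim.Sersic(n=1.5, half_light_radius=0.4, flux=2.0e4)   # flux in ADU
gal = gal.shear(q=0.7, beta=35.0 * galsim.degrees)              # intrinsic shape

# Apply the reduced shear and magnification for this position and redshift.
pos = galsim.PositionD(30.0, -12.0)   # arcsec relative to the halo centre
z_src = 1.1
g1, g2 = halo.getShear(pos, z_src)
mu = halo.getMagnification(pos, z_src)
gal = gal.lens(g1, g2, mu)

# Convolve with the PSF and draw onto a postage stamp added to the exposure.
stamp = galsim.Convolve([psf, gal]).drawImage(scale=pixel_scale)
stamp.setCenter(200, 310)
bounds = stamp.bounds & image.bounds
image[bounds] += stamp[bounds]

# Poisson noise from sky + source counts (read noise and dark current omitted).
image.addNoise(galsim.PoissonNoise(rng))
```

In the pipeline this loop runs over the full mock catalog, with cluster member galaxies drawn without lensing distortion and stars drawn as PSF-convolved point sources, as described above.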
At this point, the simulated observation may be saved to file, and the process is repeated up to the total desired exposure time, in each desired filter. To provide a reference for shear bias calculations, NFW tangential shear catalogs are generated in every (M, z) bin with a modified version of the simulations code. The redshift distributions of the reference NFW catalogs are identical to the input COSMOS catalog; however, they will differ significantly from the redshift distributions of the final mock observation catalogs. We circumvent this problem by resampling the NFW references catalog with a Monte Carlo rejection sampling algorithm until the redshift distributions match the mock observation catalogs. Figure <ref> shows an example of the resulting, nearly indistinguishable redshift distributions. § RESULTS Having developed this data analysis infrastructure, we now consider its application to our simulated galaxy cluster observations. Table <ref> summarizes the mean source density, imaging depth, and galaxy redshift distributions for mock observations of clusters in three redshift bins: z=0.059, z=0.3, and z=0.45. These results are computed from 30 independent realizations in each redshift bin for a total of 90 unique cluster fields. We estimate survey properties for the total number of galaxies observed (“all galaxies” in Table <ref>) and lensing-analysis galaxies that pass selection cuts in Section <ref> (“lensing”). To separate the effect of redshift cuts from the rest of the lensing selections in <ref>, we also compute survey properties for the background galaxies for each cluster (“z_gal > z_clust”) without any size selections. All quantities are computed on the coadded images and obey the following color convention in plots: is shown in pink, in blue, in orange, and in red. All magnitudes are expressed in the AB magnitude system. ccccccc Forecast Observation Depths and Redshifts Cluster z Galaxy sample Filter Source density S/N=10 depth Median z Mean z (N_ gals arcmin^-2) (AB mag) 0.059 All galaxies u 15.4 25.5 0.9 1.0 0.059 All galaxies b 43.1 26.3 1.1 1.3 0.059 All galaxies lum 45.5 26.3 1.1 1.3 0.059 All galaxies shape 36.5 25.2 0.9 1.2 0.059 z > z_ clust u 15.2 25.5 0.9 1.1 0.059 z > z_ clust b 42.9 26.3 1.1 1.3 0.059 z > z_ clust lum 45.2 26.3 1.1 1.3 0.059 z > z_ clust shape 36.3 25.2 0.9 1.2 0.059 Lensing u 9.1 25.4 0.9 1.0 0.059 Lensing b 31.4 26.3 1.0 1.2 0.059 Lensing lum 33.5 26.2 1.0 1.2 0.059 Lensing shape 26.0 25.1 1.0 1.2 0.3 z > z_ clust u 12.9 25.4 1.0 1.1 0.3 z > z_ clust b 38.5 26.3 1.2 1.3 0.3 z > z_ clust lum 40.6 26.3 1.1 1.3 0.3 z > z_ clust shape 32.1 25.2 1.0 1.3 0.3 Lensing u 7.6 25.4 0.9 1.1 0.3 Lensing b 28.3 26.2 1.1 1.3 0.3 Lensing lum 30.2 26.2 1.1 1.3 0.3 Lensing shape 23.2 25.1 1.0 1.2 0.45 z > z_ clust u 11.1 25.5 1.1 1.2 0.45 z > z_ clust b 34.5 26.3 1.2 1.4 0.45 z > z_ clust lum 36.4 26.3 1.2 1.4 0.45 z > z_ clust shape 28.2 25.2 1.1 1.3 0.45 Lensing u 6.4 25.4 1.0 1.2 0.45 Lensing b 25.3 26.2 1.2 1.4 0.45 Lensing lum 27.2 26.2 1.2 1.4 0.45 Lensing shape 20.5 25.1 1.1 1.3 Results are based on three hours of integration time per band per cluster. The z>z_clust and “all galaxies” samples have a 5 selection. “Lensing” galaxies pass the selection criteria listed in Section <ref>. §.§ Source density We compute the galaxy number (source) density as a function of exposure time as follows. Upon completion of a cluster realization, a script generates a list of exposures that are a subset of the total number. 
Next, a pared-down version of combines the exposures into a coadd and produces a source catalog, which is then matched to the galaxy and lensing analysis catalogs of the full observation. The process is repeated for 1-6 exposures and then intervals of 3 exposures. Once this process is complete for all thirty realizations in that (M, z) and bandpass bin, the script computes summary statistics such as mean and standard deviation of galaxy catalog lengths for the n-exposure coadds. Figures <ref> and <ref> show mean number of galaxies per square arcminute as a function of on-sky integration time. Results are shown for the u, b, lum, and shape bands. Error bars are standard error of the mean across the 30 cluster realizations of each redshift bin. Integration time is expressed in number of coadded five-minute exposures (the fiducial exposure time) to reach a total of 3 hours (36 × 5 minutes). Total galaxy number densities are shown in Figure <ref>; these samples have no selections on galaxy shape fits or redshifts beyond a SExtractor >5 cut. The source densities for three hours of integration time are, 45.5 galaxies per square arcminute in lum, 43.1 in b, 36.5 in shape, and 15.4 in u. The growth of source density is well fit by a logarithmic function. In the planned shear measurement band b, N_gals = 11.01 log_2(2.99 + N_exp) - 15.34 Extrapolating outwards, increasing the b source density from 43 to 50 would take an additional 1.3 hours of observation. The background galaxy number densities in Figure <ref> include the lensing sample selections of Section <ref>. Lensing-analysis samples for clusters at z=0.059 have mean source densities of 33.5 in lum; 31.4 in b; 26.0 in shape; and 9.1 in u (though we would not attempt weak lensing measurements in u). For clusters at z=0.3, the corresponding source densities are 30.2 in lum; 28.3 in b; 23.2 in shape; and 7.6 in u. Source densities clusters at z=0.45 (the highest redshift bin considered) have mean source densities of 27.2 in lum; 25.3 in b; 20.5 in shape; and 6.4 in u. To separate the effect of redshift cuts from the rest of the lensing selections in <ref>, we also calculate source densities of background galaxies with no additional selections. Table <ref> shows that redshift cuts alone produce more modest drops in source density than the lensing selections. The change in source density for a cluster at z=0.059 is insignificant within error bars, but lensing selections reduce the source density in b by 27%, from 43.1 to 31.4 . The source density behind z = 0.3 is 38.5 galaxies arcmin^-2 (about 10% drop from 43.1), but the rest of the lensing selections leaves 28 galaxies arcmin^-2 (two-thirds of the original source density). Similarly, the source density behind z = 0.45 is 34.5 galaxies arcmin^-2 in b (a 20% drop), while the lensing sample has a source density of 25.3 galaxies arcmin^-2 (40% lower than the full galaxy sample). We find that lensing-analysis selections tend to decrease the source densities more strongly than redshift cuts alone. §.§ Depths Depth, or the limiting magnitude for some threshold, is a commonly used figure of merit in astronomical surveys. We adopt the magnitude limit corresponding to a fixed ∼ S/N=10 threshold (9.8-10.2) based on δ F / F ∼ 0.1, where F = and δ F = <cit.>. Three hours of observation in yields a ∼ S/N=10 depth of 26.3 before any lensing selections are made; lensing selections do not significantly change the depth. S/N=10 depths in are similar to , while and depths are about a magnitude shallower. 
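One way to estimate such a fixed-S/N depth from a coadd source catalog is sketched below. It is a hedged illustration: the magnitude zeropoint and the catalog column names are assumptions, and the pipeline's actual depth computation may differ in detail.

```python
import numpy as np

ZEROPOINT = 30.0   # AB magnitude zeropoint of the coadd; placeholder value

def sn10_depth(flux, fluxerr, zeropoint=ZEROPOINT, sn_window=(9.8, 10.2)):
    """Median magnitude of catalog sources whose measured S/N falls in a
    narrow window around 10, i.e. delta F / F ~ 0.1."""
    snr = flux / fluxerr
    in_window = (snr > sn_window[0]) & (snr < sn_window[1]) & (flux > 0)
    mags = zeropoint - 2.5 * np.log10(flux[in_window])
    return np.median(mags)

# Usage with a structured catalog array (column names are assumptions):
# depth_b = sn10_depth(cat["flux_auto"], cat["fluxerr_auto"])
```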
Values for all filters are listed in Table <ref>. Figure <ref> and the top panel of Figure <ref> show galaxy brightness distributions, displayed as histograms of detected galaxy counts. Magnitudes are obtained from Source Extractor values, using the IMX455 detector gain and quantum efficiency to convert to the AB system. Distributions are shown for each of , , , and , and the histograms are normalized such that the product of bin width and probability is one. The top panel of Figure <ref> shows the distribution of the “all galaxies” sample with only a 5 selection. Figure <ref> presents number counts as a function of brightness for galaxies behind clusters at z=0.059 (top row), z=0.3 (middle row), and z=0.45 (bottom row). The left panels show the z_gal > z_clust sample for each cluster, and the right panels show the lensing-analysis galaxy samples. §.§ Redshift distributions The strength of weak lensing signal depends on the relative distances of the cluster and background galaxies. Accordingly, we calculate galaxy redshift distributions (Figures <ref>, bottom panel, and <ref>). Distributions are obtained with kernel density estimation and are normalized to show relative probability density within a bandpass (the product of bin width and probability density equals unity). Dotted lines mark the median redshift in a given filter. The bottom panel of Figure <ref> shows redshift distributions for all galaxies detected in (pink), (blue), (orange), and (red) coadds. The probability density in all bands peaks around z=0.83, with long tails past redshift z=1.5. Consistent with the depths in Section <ref>, and observations have mean and median redshifts about 0.15 units lower than the deeper bandpasses. As in Section <ref>, redshift distributions are shown both for a z_gal > z_clust selection and for galaxies that pass all lensing analysis selections. The mean redshifts of background galaxies increases slightly with increasing cluster redshift, from a mean b redshift of z̅ = 1.3 for z_clust = 0.059 to z̅ = 1.4 for z_clust = 0.45. However, the changes are small, and lensing selections do not appear to change the mean or median background galaxy redshifts. The mean and median values of redshift in all bandpasses and galaxy samples are summarized in Table <ref>. It is reasonable to ask whether the redshift distributions of the galaxy lensing samples are actually distinct from the input COSMOS 2015 catalog. We compare these in Figure <ref>, which compares the redshift distributions of the COSMOS 2015 simulation input catalog and the lensing-analysis catalogs of observations of clusters with (M, z) = (4.1 14 M_⊙ h^-1, 0.059) and (4.1 14 M_⊙ h^-1, 0.3). The probability densities of both the COSMOS 2015 catalog and lensing samples are maximized at z ∼ 0.9. However, the COSMOS 2015 catalog has a higher probability density at z>1. The mean redshift of the COSMOS 2015 catalog is z̅=1.5, compared with z̅ = 1.2 for the z>0.059 lensing-analysis sample and z̅ = 1.3 for the z>0.3 lensing-analysis sample. The calculations above assume perfect knowledge of the redshift. In real observations, we will separate background (lensed) galaxies from foreground (unlensed) galaxies with galaxy color cuts. To optimize the exposure time per bandpass for an effective foreground/background separation, we investigated the evolution of (u - b), (b - g) and (g-r) colors with redshift for a range of galaxy types. 
We sampled galaxy redshifts in the range 0 < z_ gal < 1.5 at δ z = 0.02 intervals for spectral templates from elliptical to starburst <cit.>. For each δ z = 0.02 point, we transformed the galaxy's spectral energy distribution (SED) to the desired redshift and scaled the SED flux to achieve an integrated S/N of 10 in the b-band filter, representing the minimum S/N for inclusion in lensing analysis. Based on the scaled SED flux, we calculated the b-, g-, and u-band magnitudes along with their respective magnitude errors. By calculating uncertainties for a galaxy with S/N = 10, we obtained conservative error bars that allowed us to define realistic color-cut boundaries for our galaxy selection. For a fiducial cluster redshift of z = 0.5, we determined that 3 hours of integration time in b, 1.5 hours in g, and 3 hours in u provided optimal separation for galaxies of most spectral types. Figure <ref> illustrates the color evolution for two spectral types (elliptical and disk-dominated spiral), depicted by solid lines. The small points represent the galaxy colors calculated at each δ z = 0.02 interval. To aid the reader, we highlight specific foreground (z_ gal < 0.5) and background (z_ gal≥ 0.6) locations as blue stars and red squares, respectively, at intervals of δ z = 0.1. Error bars in Figure <ref> represent predicted 1 σ color uncertainties for the aforementioned exposure times and a galaxy b-band S/N = 10. Cyan error bars correspond to galaxies in the foreground of a z = 0.5 cluster, while magenta error bars indicate galaxies behind the cluster (z_ gal > 0.6). The dashed black lines in Figure <ref> demarcate a `clean' color-color space for a z=0.5 cluster. The galaxy sample below the black lines is dominated by background galaxies at z_ gal > 0.6, with minimal contamination from foreground galaxies (z_ gal≤ 0.5). §.§ Mean shear profiles As part of the pipeline validation effort, we also produce weak gravitational lensing shear profiles for all cluster realizations. Two examples of single-realization cluster shear profiles were shown in Figure <ref>. To examine the claim that is capable of weak lensing measurements in blue bandpasses, we compare the mean tangential shear profiles of cluster observations in ('s intended filter for galaxy shape measurement), , and the Euclid VIS-like filter in Figure <ref>. The mean tangential shear profiles of 30 realizations of z=0.059 clusters are shown in the top panel and z=0.45 clusters in the bottom panel. Each point represents the mean value of the cluster tangential shear profiles, while error bars show the standard deviation of the mean in each radial bin. We find that the tangential shear profiles are easily detected in all three bandpasses. No differences in the mean values for , and are readily apparent for either cluster in Figure <ref>. Qualitatively, the band error bars appear slightly larger than the and error bars, which is consistent with the lower source densities in Table <ref>. We emphasize that the shear profiles of Figure <ref> are averages of averages, and would not be used for shear calibration or mass fitting. Instead, the figure highlights the variability and reliability of the measured tangential shear across the sample of clusters. § DISCUSSION We provide some additional commentary on our analyses and results here. 
§.§ Simulation inputs and effect on forecasts The simulations presented in Section <ref> and which form the basis for Section <ref> have many realistic features: an NFW cluster weak lensing profile, star flux and densities from Gaia coverage of targets, measured stratospheric sky brightnesses from <cit.>, and real galaxy redshifts and luminosities from COSMOS catalogs transformed to bandpasses. Although the simulated observations incorporate considerable complexity, there are a few limitations. First, they use a Gaussian approximation of the PSF. In reality, the space-like PSF features Airy rings and diffraction spikes (cf. Figure <ref>). We also do not model uncertainties of galaxy redshifts. This was a deliberate choice, as the systematic errors that redshift uncertainties introduce to weak lensing analysis are orthogonal to the pipeline validation aspect of this work and the shear calibration in forthcoming efforts. Even if we did attempt to incorporate redshift uncertainties, 's strategy for determining redshifts may evolve as the campaign progresses, rendering such forecasting estimates moot. In addition, the validity of our forecast is limited by the simulation input catalog. There are few deep, high resolution observations in the blue and near-UV. A workaround is presented in Section <ref>, but assumes that galaxy morphology parameters in the (z̅ = 0.9) GalSim-COSMOS catalog can be extrapolated to the (z̅ = 1.5) galaxies in the COSMOS 2015 photometric catalog. A more theory-driven approach could involve hydrodynamical simulations. However, the morphology of intermediate- to high-z galaxies is itself a very active area of research. On balance, the high accuracy of the galaxy fluxes and realistic redshift distributions in our input COSMOS 2015 catalog outweigh any uncertainty in galaxy shapes. A full treatment of the galaxy catalog will be presented in a forthcoming paper by A. Gill et al. (in preparation). Finally, the input Gaia star catalogs are incomplete, as illustrated by the gap in the stellar locus of Figure <ref>. We do not believe that the dearth of faint stars affects our conclusions, as very faint stars near the “zone of confusion” would be excluded from PSF fits anyway. Future simulations will incorporate theoretical star distributions. §.§ Estimated observation depths, source densities, and redshifts A major goal of this analysis was to quantify the effect of weak lensing selections on galaxy number density. Table <ref> shows that weak lensing analysis selections cause a more significant decrease in source density (30-40%) than redshift selections alone. The addition of lensing selections does not appear to significantly change the mean and median redshifts of the samples any more than a redshift cut alone. A surprising result of Section <ref> is the high depth and source density in u. The deep NUV CLAUDS survey <cit.> provides one of the few points of comparison for our own findings. At a similar depth to ours (25.5 mag), they report a 5 σ source density of log_10 N = 4.58/ deg^2/0.5 mag, or 10.7 galaxies per arcmin^2. This is 50% lower than our maximum reported value of 15.4 galaxies per arcmin^2. The change in source density with redshift is also noteworthy: we report a 27% decline in u source density from z=0.059 to z=0.45, while over approximately the same redshift range, the CLAUDS survey reports a decline of ∼ 12% <cit.>. One possibility for the divergence is 's smaller PSF: the CLAUDS survey experienced an average PSF FWHM of 0.92, but the PSF FWHM is about 0278. 
A smaller PSF translates to a higher source density, as objects that might otherwise be blended or smeared out over noisy pixels become resolvable. A more likely explanation is that our UV luminosities do not account for foreground extinction by Milky Way dust, which is significant in the UV and will certainly depress source counts in real observations. If the GalSim-COSMOS shape parameters cannot be extrapolated to bluer bands and fainter galaxies, it is also possible that the galaxy morphologies in our catalog are inaccurate for observations. The ultimate calibration for our simulations will be provided by the analysis of real observations in and . §.§ Impact on observation strategy Figures <ref>, <ref>, and <ref> show that the source density achieved in three hours of observation in or is completely adequate for shear profile measurements. Observations in b or lum longer than three hours would confer limited advantages at a high cost in integration time (see Equation <ref>). In fact, future analysis may reveal that shorter integration times would suffice, saving time during flight and allowing a greater number of targets to be observed. The final observation strategy will depend on the results of ongoing optics-on (jitter+optics) simulations in all bandpasses as well as a redshift analysis that is currently underway. However, Figures <ref> and <ref> strongly support the conclusions of <cit.> that and observations are both faster and deeper than the Euclid VIS-like when observing from the stratosphere. Our estimated source densities in these bandpasses also agree with <cit.> within uncertainties. Finally, Figure <ref> shows the feasibility of measuring galaxy cluster weak lensing signal in and and that a broadband red filter like offers no noticeable advantage over the bluer filters. This result supports our planned observing strategy of deep b observations for galaxy shape measurements. § CONCLUSIONS AND OUTLOOK FOR 2023 In this work, we have presented a first iteration of the galaxy shape measurement pipeline for 's weak lensing analysis. The software and algorithms we employ—GalSim, SExtractor, PIFF, Metacalibration, NGMix—have been rigorously tested and were intended for widespread adoption by the community. Processing simulated observations has allowed us to test their implementation in this pipeline. Several years after the release of these tools, there is now a growing number of pipelines similar to ours, e.g. <cit.> and <cit.>, with more likely to come. Beyond pipeline validation, our simulated observations and catalogs provide estimates for the expected number density, depth, and redshift distribution of galaxies in deep, stratospheric imaging. We predict that can attain a depth of 26.3 mag in the filter and 25.5 mag in the filter – competitive with even the deepest ground-based surveys. We also find a total source density greater than 40 in three hours of integration time in both the and bands. The source density remains high even after the application of lensing catalog selections: 25–30 in the bandpass. We expect that instrumental effects (including the optical PSF) will depress the source density. However, the relative performance of , , and is unlikely to be affected and supports 's observation strategy. This work also offers a look at the weak lensing tangential shear profiles expected for cluster observations, further confirming 's capacity for weak gravitational lensing measurements in the blue. 
As with the other forecast survey properties, these weak lensing profiles are based on Gaussian approximations to the PSF and do not include redshift uncertainties. The vagaries of real observations will add some scatter to the final weak lensing measurements. Even with these caveats, the relative performance of different filters also supports 's observation strategy. The pipeline and simulations remain in active development. Forthcoming improvements include source detection on a multi-bandpass composite image; galaxy shape measurement with the full PSF; inclusion of faint stars in simulated observations using stellar population synthesis models; and the addition of redshift uncertainty to the input galaxy catalog. While the pipeline includes tools for shear calibration, we do not validate them here. Instead, a complete shear calibration analysis will be presented in a forthcoming paper by S. Everett et al. (in preparation). Though our pipeline has been developed specifically for weak lensing measurements, it is generic and can be refactored for weak lensing observations with other instruments. An obvious example is 's successor mission, GigaBIT: a planned 1.3 m gigapixel class balloon-borne observatory <cit.>. Future pipeline developments will facilitate forecasting and survey planning for both and GigaBIT. Since the initial submission of this paper, we are excited to announce the successful launch and completion of the mission, which spent 40 days at float. The data calibration process is currently underway, and we will subsequently conduct an analysis along the lines described in this paper. offers a new data product: wide-field, diffraction-limited λ < 600 nm imaging deep enough to enable galaxy cluster weak lensing analysis. Our forecast galaxy number density and redshift distribution confirm 's capability for weak lensing mass measurement in blue wavelengths. This demonstrates that even in the era of multi-billion-dollar space telescopes like JWST, Roman, and Euclid, nimble and low-cost missions like offer immense scientific potential and a complementary paradigm for space-based scientific observations. Astropy <cit.>, GalSim <cit.>, Source Extractor <cit.>, NGMIX <cit.>, Seaborn <cit.>, Matplotlib <cit.> § ACKNOWLEDGEMENTS Support for the development of is provided by NASA through APRA grant NNX16AF65G. Launch and operational support for the sequence of test flights from Palestine, Texas are provided by the Columbia Scientific Balloon Facility (CSBF) under contract from NASA's Balloon Program Office (BPO). Launch and operational support for test flights from Timmins, Ontario are provided by the Centre National d'Études Spatiales (CNES) and the Canadian Space Agency (CSA). JR, EH, and SE are supported by JPL, which is run under a contract by Caltech for NASA. Canadian coauthors acknowledge support from the Canadian Institute for Advanced Research (CIFAR) as well as the Natural Science and Engineering Research Council (NSERC). The Dunlap Institute is funded through an endowment established by the David Dunlap family and the University of Toronto. UK coauthors acknowledge funding from the Durham University Astronomy Projects Award, STFC [grant ST/P000541/1], and the Royal Society [grants UF150687 and RGF/EA/180026]. 
The simulation input catalog is based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under ESO programme ID 179.A-2005 and on data products produced by TERAPIX and the Cambridge Astronomy Survey Unit on behalf of the UltraVISTA consortium.
http://arxiv.org/abs/2307.01587v1
20230704092647
ICRC2023 Proceedings: Proposal of a gauge-invariant treatment of $l=0,1$-mode perturbations on the Schwarzschild background spacetime
[ "Kouji Nakamura" ]
gr-qc
[ "gr-qc", "astro-ph.HE", "hep-th", "math-ph", "math.MP" ]
1. Introduction ———- From the direct observation of gravitational waves <cit.> in 2015, the era of gravitational-wave astronomy and multi-messenger astronomy including gravitational waves began. One of the future directions of gravitational-wave astronomy is its development into a precision science through detailed studies of source science and tests of general relativity. To support such precise sciences, higher-order perturbation theories in general relativity are useful. Among future targets of gravitational-wave sources, the Extreme-Mass-Ratio-Inspiral (EMRI) is one of the targets of the Laser Interferometer Space Antenna <cit.>. An EMRI is a source of gravitational waves produced by the motion of a stellar-mass object around a supermassive black hole, and black hole perturbation theories are used to describe it. Therefore, theoretical sophistication of black hole perturbation theories and their higher-order extensions is necessary. Although realistic black holes have angular momentum and we must consider the perturbation theory of a Kerr black hole for direct applications to the EMRI, further sophistication is possible even in perturbation theories on the Schwarzschild spacetime. Based on the pioneering works by Regge and Wheeler, and Zerilli <cit.>, there have been many studies on the perturbations of the Schwarzschild spacetime. Because the Schwarzschild spacetime is spherically symmetric, we decompose perturbations through the spherical harmonics Y_lm and classify them into odd- and even-modes based on their parity. However, l=0 and l=1 modes should be treated separately, and “gauge-invariant” treatments for l=0 and l=1 even-modes remained unknown. In this situation, we proposed a gauge-invariant treatment of l=0,1-modes and derived the solutions to the linearized Einstein equations for these modes <cit.>. The obtained solutions <cit.> are physically reasonable. For this reason, we may say that our proposal is also reasonable. In addition, owing to our proposal, the formulation of higher-order gauge-invariant perturbation theory developed in <cit.> becomes applicable to any-order perturbations on the Schwarzschild background spacetime <cit.>. In this manuscript, we briefly explain these issues.
2. Brief review of general-relativistic gauge-invariant perturbation theory ———- General relativity is a theory based on general covariance, and that covariance is the reason that the notion of “gauge” has been introduced into the theory. In particular, in general-relativistic perturbations, the second-kind gauge appears <cit.>. In general-relativistic perturbation theory, we usually treat the one-parameter family of spacetimes {(ℳ_λ,Q_λ)|λ∈[0,1]} to discuss differences between the background spacetime (ℳ,Q_0) = (ℳ_λ=0,Q_λ=0) and the physical spacetime (ℳ_ ph,Q̅) = (ℳ_λ=1,Q_λ=1). Here, λ is the infinitesimal parameter for perturbations, ℳ_λ is a spacetime manifold for each λ, and Q_λ is the collection of the tensor fields on ℳ_λ. Since each ℳ_λ is a different manifold, we have to introduce the point identification map 𝒳_λ : ℳ→ℳ_λ to compare tensor fields on different manifolds.
This point-identification is the gauge choice of the second kind. Since we have no guiding principle by which to choose identification map _λ due to the general covariance, we may choose a different point-identification _λ from _λ. This degree of freedom in the gauge choice is the gauge degree of freedom of the second kind. The gauge-transformation of the second kind is a change of this identification map. We note that this second-kind gauge is a different notion of the degree of freedom of coordinate choices on a single manifold, which is called the gauge of the first kind <cit.>. Once we introduce the second-kind gauge choice _k : → _λ, we can compare the tensor fields on different manifolds {_λ}, and perturbations of a tensor field Q_λ are represented by the difference _λ^*Q_λ - Q_0, where _λ^* is the pull-back induced by the gauge choice _λ and Q_0 is the background value of the variable Q_λ. This representation of perturbations completely depends on _λ. If we change the gauge choice from _λ to Y_λ, the pulled-back variable of Q_λ is represented by _λ^*Q_λ. These different representations are related through the gauge-transformation rule _λ^*Q_λ = Φ^*_λ_λ^*Q_λ , Φ_λ := _λ^-1∘_λ. Φ_λ is a diffeomorphism on the background spacetime . In the perturbative approach, we treat the perturbations of the pulled-back variable _λ^*Q_λ through the Taylor series with respect to the infinitesimal parameter λ as _λ^*Q_λ =: ∑_n=0^kλ^n/n!^(n)_Q + O(λ^k+1), where ^(n)_Q is the representation of the kth-order perturbation of Q_λ under _λ with ^(0)_Q=Q_0. Similarly, we can have the representation of the perturbation of Q_λ under the different gauge choice _λ from _λ. Since these different representations are related to the gauge-transformation rule (<ref>), the order-by-order gauge-transformation rule between ^(n)_Q and ^(n)_Q is given from the Taylor expansion of Eq. (<ref>). In general, Φ_λ is given by a knight diffeomorphism <cit.>: Let Φ_λ be a one-parameter family of diffeomorphisms, and T a tensor field such that Φ_λ^*T is of class C^k. Then, Φ_λ^*T can be expanded around λ=0 as Φ_λ^*T = ∑_n=0^kλ^n∑_{j_i}∈ J_n C_n,{j_i}_ξ_(1)^j_1⋯_ξ_(n)^j_nT + O(λ^k+1) . Here, J_n:={{j_i} | ^∀i ∈, j_i∈, s.t. ∑_i=1^∞ ij_i=n} and C_n,{j_i} := ∏_i=1^n1/(i!)^j_ij_i!. The vector fields ξ_(1), ..., ξ_(k) in Eq. (<ref>) are called the generators of Φ_λ. Substituting Eqs. (<ref>) and (<ref>) into Eq. (<ref>), we obtain the order-by-order gauge-transformation rules between ^(n)_Q and ^(n)_Q as ^(n)_ Q - ^(n)_ Q = ∑_l=1^nn!/(n-l)!∑_{j_i}∈ J_l C_l,{J_i}_ξ_(1)^j_1⋯_ξ_(l)^j_l^(n-l)_ Q . Inspecting the gauge-transformation rule (<ref>), we first defined gauge-invariant variables for metric perturbations <cit.>. We consider the metric g̅_ab on (_ ph,Q̅) = (_λ=1,Q_λ=1), and we expand the pulled-back metric _λ^*g̅_ab to through a gauge choice _k as _λg̅_ab = ∑_n=0^kλ^n/n!^(n)_g_ab + O(λ^k+1), where g_ab:=^(0)_g_ab is the metric on . The expansion (<ref>) of the metric depends entirely on _λ. Nevertheless, henceforth, we do not explicitly express the index of the gauge choice _λ if there is no possibility of confusion. In <cit.>, we proposed a procedure to construct gauge-invariant variables for higher-order perturbations. 
Our starting point of this construction was the following conjecture for the linear metric perturbation h_ab:=^(1)g_ab: If the gauge-transformation rule for a pulled-back tensor field h_ab from _ ph to is given by _h_ab - _h_ab = _ξ_(1)g_ab with the metric g_ab on , there then exist a tensor field _ab and a vector field Y^a such that h_ab is given by h_ab =: _ab + _Yg_ab, where _ab and Y^a are transformed as __ab - __ab = 0 and _Y^a - _Y^a = ξ^a_(1) under the gauge transformation, respectively. We call _ab and Y^a as the gauge-invariant and gauge-variant parts of h_ab, respectively. Based on Conjecture <ref>, in <cit.>, we found that the nth-order metric perturbation ^(n)_g_ab is decomposed into its gauge-invariant and gauge-variant parts as [ Precisely speaking, to reach to the decomposition formula (<ref>), we have to confirm Conjecture 4.1 in Ref. <cit.> in addition to Conjecture <ref>. ] ^(n)g_ab = ^(n)_ab - ∑_l=1^nn!/(n-l)!∑_{j_i}∈ J_l C_l,{j_i}_-^(1)Y^j_1⋯_-^(l)Y^j_l^(n-l)g_ab . Furthermore, through the gauge-variant variables ^(i)Y^a (i=1,...,n), we also found the definition of the gauge-invariant variable ^(n) for the nth-order perturbation ^(n)Q of an arbitrary tensor field Q. This definition of the gauge-invariant variable ^(n) implies that the nth-order perturbation ^(n)Q of any tensor field Q is always decomposed into its gauge-invariant part and gauge-variant part as ^(n)Q = ^(n) - ∑_l=1^nn!/(n-l)!∑_{j_i}∈ J_l C_l,{j_i}_-^(1)Y^j_1⋯_-^(l)Y^j_l^(n-l)Q . For example, the perturbative expansion of the Einstein tensor and the energy-momentum tensor, which are pulled back through _λ, are given by _λ^*G̅_a^ b = ∑_n=0^kλ^n/n!^(n)_G_a^ b + O(λ^k+1) , _λ^*T̅_a^ b = ∑_n=0^kλ^n/n!^(n)_T_a^ b + O(λ^k+1) . Then, the nth-order perturbation ^(n)_G_a^ b of the Einstein tensor and the nth-order perturbation ^(n)_T_a^ b of the energy-momentum tensor are also decomposed as ^(n)G_a^ b = ^(n)_a^ b - ∑_l=1^nn!/(n-l)!∑_{j_i}∈ J_l C_l,{j_i}_-^(1)Y^j_1⋯_-^(l)Y^j_l^(n-l)G_a^ b , ^(n)T_a^ b = ^(n)_a^ b - ∑_l=1^nn!/(n-l)!∑_{j_i}∈ J_l C_l,{j_i}_-^(1)Y^j_1⋯_-^(l)Y^j_l^(n-l)T_a^ b . Through the lower-order Einstein equation ^(k)_G_a^ b=8π^(k)_T_a^ b[ We use the unit G=c=1, where G is Newton's constant of gravitation, and c is the velocity of light. ] with k≤ n-1, the nth-order Einstein equation ^(n)_G_a^ b=8π^(n)_T_a^ b is automatically given in the gauge-invariant form ^(n)_a^ b = ^(1)_a^ b[^(n)] + ^( NL)_a^ b[{.^(i)|i<n}] = 8π^(n)_a^ b, where ^(1)_a^ b is the gauge-invariant part of the linear-order perturbation of the Einstein tensor. Explicitly, ^(1)_a^ b[A] for an arbitrary tensor field A_ab of the second rank is given by <cit.> ^(1)_a^ b[A] := ^(1)Σ_a^ b[A] - 1/2δ_a^ b^(1)Σ_c^ c[A] , ^(1)Σ_a^ b[A] := - 2 ∇_[a^H_d]^ bd[A] - A^cb R_ac , H_ba^ c[A] := ∇_(aA_b)^ c - 1/2∇^cA_ab . As derived in <cit.>, when the background Einstein tensor vanishes, we obtain the identity ∇_a^(1)_b^ a[A] = 0 for an arbitrary tensor field A_ab of the second rank. We emphasize that Conjecture <ref> was the important premise of the above framework of the higher-order perturbation theory. 3. Linear perturbations on spherically symmetric background ———- We use the 2+2 formulation of the perturbations on spherically symmetric spacetimes. The topological space of spherically symmetric spacetimes is =_1× S^2, and the metric on this spacetime is g_ab = y_ab + r^2γ_ab , y_ab = y_AB (dx^A)_a(dx^B)_b , γ_ab = γ_pq (dx^p)_a (dx^q)_b , where x^A = (t,r), x^p=(θ,ϕ), and γ_pq is a metric of the unit sphere. 
In the Schwarzschild spacetime, y_ab=-f(dt)_a(dt)_b+f^-1(dr)_a(dr)_b with f=1-2M/r. On this (,g_ab), we consider the components of the metric perturbation as h_ab = h_AB (dx^A)_a(dx^B)_b + 2 h_Ap (dx^A)_(a(dx^p)_b) + h_pq (dx^p)_a(dx^q)_b . In Ref. <cit.>, we proposed the decomposition of these components as h_AB = ∑_l,mh̃_AB S_δ , h_Ap = r ∑_l,m[ h̃_(e1)AD̂_pS_δ + h̃_(o1)Aϵ_pqD̂^qS_δ] , h_pq = r^2∑_l,m[ 1/2γ_pqh̃_(e0) S_δ + h̃_(e2)( D̂_pD̂_q - 1/2γ_pqΔ̂) S_δ + 2 h̃_(o2)ϵ_r(pD̂_q)D̂^r S_δ] , where D̂_p is the covariant derivative associated with the metric γ_pq on S^2, D̂^p:=γ^pqD̂_q, and ϵ_pq=ϵ_[pq] is the totally antisymmetric tensor on S^2. The decomposition (<ref>)–(<ref>) implicitly state that the Green functions of the derivative operators Δ̂:=D̂^rD̂_r and Δ̂+2:=D̂^rD̂_r+2 should exist if we require the one-to-one correspondence between {h_Ap, h_pq} and {h̃_(e1)A, h̃_(o1)A, h̃_(e0), h̃_(e2), h̃_(o2)}. Because the eigenvalue of the operator Δ̂ on S^2 is -l(l+1), the kernels of the operators Δ̂ and Δ̂+2 are l = 0 and l = 1 modes, respectively. Thus, the one-to-one correspondence between {h_Ap, h_pq} and {h̃_(e1)A, h̃_(o1)A, h̃_(e0), h̃_(e2), h̃_(o2)} is not guaranteed for l = 0,1 modes in Eqs. (<ref>)–(<ref>) with S_δ=Y_lm. To recover this one-to-one correspondence, we consider the scalar harmonics <cit.> S_δ = { Y_lm l≥ 2; k_(Δ̂+2)m l=1; k_(Δ̂) l=0 } . As the explicit functions of k_(Δ̂) and k_(Δ̂+2)m, we employ k_(Δ̂) = 1 + δln(1-z/1+z)^1/2 , k_(Δ̂+2)m=0 = z + δ(z/2ln1+z/1-z-1) , k_(Δ̂+2)m=± 1 = (1-z^2)^1/2{ 1 + δ(1/2ln1+z/1-z+z/1-z^2) } e^± i ϕ , where δ∈ and z = cosθ. This choice guarantees the linear-independence of the set {S_δ, D̂_pS_δ, ϵ_pqD̂^qS_δ, 1/2γ_pqS_δ, (D̂_pD̂_q-1/2γ_pqΔ̂)S_δ, 2ϵ_r(pD̂_q)D̂^rS_δ} of the harmonic functions including l=0,1 modes if δ≠ 0, but is singular if δ≠ 0. On the other hand, when δ = 0, we have k_(Δ̂)∝ Y_00 and k̂_(Δ̂+2)m∝ Y_1m. Through the above harmonics functions S_δ, in Ref. <cit.>, we proposed the following strategy: We decompose the metric perturbations h_ab on the background spacetime with the metric (<ref>), through Eqs. (<ref>)–(<ref>) with the harmonic functions S_δ given by Eq. (<ref>). After deriving the mode-by-mode field equations such as linearized Einstein equations using S_δ, we choose δ=0 when we solve these field equations as the regularity of solutions. Once we accept Proposal <ref>, we can justify Conjecture <ref> for the linear-order perturbation h_ab on spherically symmetric background spacetimes <cit.>. Then, we showed that above our formulation of a gauge-invariant perturbation theory is applicable to perturbations on the Schwarzschild spacetime including l=0,1 modes, and derived the l=0,1 solutions to the linearized Einstein equation <cit.>. From Eq. (<ref>), the linearized Einstein equation ^(1)G_a^ b=8π^(1)T_a^ b for h_ab=_ab+_Yg_ab with the vacuum background Einstein equation G_a^ b=8π T_a^ b=0 is given by ^(1)_a^ b[]=8π^(1)_a^ b, and the linear-order continuity equations of the energy-momentum tensor is given by ∇^a^(1)_a^ b = 0. We decompose the components of the linear perturbation of ^(1)_ac as ^(1)_ac = ∑_l,mT̃_AC S_δ (dx^A)_a (dx^C)_c + 2 r ∑_l,m{T̃_(e1)AD̂_pS_δ + T̃_(o1)Aϵ_pqD̂^qS_δ} (dx^A)_(a (dx^p)_c) + r^2∑_l,m{T̃_(e0)1/2γ_pq S_δ + T̃_(e2)( D̂_pD̂_q - 1/2γ_pqΔ̂) S_δ + T̃_(o2)ϵ_s(pD̂_q)D̂^sS_δ} (dx^p)_a (dx^q)_c . Since we impose δ=0 after deriving mode-by-mode perturbative Einstein equations, we may choose T̃_(e2) = T̃_(o2) = 0 for l=0,1 modes, and T̃_(e1)A=0=T̃_(o1)A for l=0 modes. This choice and Eq. 
(<ref>) leads T̃_(e0)=0 for l=0 mode. Then, we derived the l=0,1-mode solutions to Eq. (<ref>) <cit.>: For l=1 m=0 odd-mode perturbations, we derived 2 ^(1)_Ap(dx^A)_(a(dx^p)_b) = ( 6M r^2∫ dr 1/r^4 a_1(t,r) ) sin^2θ (dt)_(a(dϕ)_b) + _V_(1,o1)g_ab , V_(1,o1)a = (β_1(t) + W_(1,o)(t,r)) r^2sin^2θ (dϕ)_a . Here, β_1(t) is an arbitrary function of t. The function a_1(t,r) is the solution to Eq. (<ref>) given by a_1(t,r) = - 16 π/3M r^3 f ∫ dt T̃_(o1)r + a_10 = - 16 π/3M∫ dr r^31/fT̃_(o1)t + a_10 , where a_10 is the constant of integration which corresponds to the Kerr parameter perturbation. On the other hand, rf ∂_rW_(1,o) of the variable W_(1,o) in Eq. (<ref>) is determined by the evolution equation ∂_t^2(r f ∂_rW_(1,o)) - f ∂_r( f ∂_r(r f ∂_rW_(1,o)) + 1/r^2 f [ 3f-1 ] (r f ∂_rW_(1,o)) = 16 π f^2T̃_(o1)r . For the l=0 even-mode perturbation, we have ^(1)_ab = 2/r( M_1 + 4 π∫ dr [r^2/fT̃_tt] ) ( (dt)_a(dt)_b + 1/f^2 (dr)_a(dr)_b) + 2 [ 4 π r ∫ dt ( 1/fT̃_tt + f T̃_rr) ] (dt)_(a(dr)_b) + _V_(1,e0)g_ab , V_(1,e0)a := ( 1/4 f Υ_1 + 1/4 r f ∂_rΥ_1 + γ_1(r) ) (dt)_a + 1/4f r ∂_tΥ_1 (dr)_a , where M_1 is the linear-order Schwarzschild mass parameter perturbation, γ_1(r) is an arbitrary function of r. The variable ^(1)F̃:=∂_tΥ_1 in the generator (<ref>) satisfies the following equation: - 1/f∂_t^2F̃ + ∂_r( f ∂_rF̃ ) + 1/r^2 3(1-f) F̃ = - 8/r^3 m_1(t,r) + 16 π[ - 1/fT̃_tt + f T̃_rr] , where m_1(t,r) = 4 π∫ dr [r^2/fT̃_tt] + M_1 = 4 π∫ dt [ r^2 f T̃_rt] + M_1 , M_1∈ . For the l=1 m=0 even-mode perturbation, we have ^(1)_ab = 16π r^2/1-f{ - f^2/3[ 1+f/2T̃_rr + r f ∂_rT̃_rr - T̃_(e0) - 4 T̃_(e1)r] (dt)_a(dt)_b. . + [ (1-f) T̃_tr - 2r/3f∂_tT̃_tt] (dt)_(a(dr)_b) + 1-3f/2f^2[ T̃_tt - 2rf/3(1-3f)∂_rT̃_tt] (dr)_a(dr)_b. . - r^2T̃_tt/3γ_ab}cosθ + _V_(1,e1)g_ab , V_(1,e1)a := - r ∂_tΦ_(e)cosθ (dt)_a + ( Φ_(e) - r ∂_rΦ_(e)) cosθ (dr)_a - r Φ_(e)sinθ (dθ)_a , where Φ_(e) satisfies the following equation - 1/f∂_t^2Φ_(e) + ∂_r[ f ∂_rΦ_(e)] - 1-f/r^2Φ_(e) = 16 πr/3(1-f) S_(Φ_(e)) , S_(Φ_(e)) := 3(1-3f)/4fT̃_tt - 1/2 r ∂_rT̃_tt + 1+f/4 f T̃_rr + 1/2 f^2 r ∂_rT̃_rr - f/2T̃_(e0) - 2 f T̃_(e1)r . 4. Extension to the higher-order perturbations ———- As shown in Sec. 2, the n-th order Einstein equation is given in Eq. (<ref>), which we rewrite as ^(1)_a^ b[^(n)] = - ^( NL)_a^ b[ {. ^(i)_cd| i<n }] + 8 π^(n)_a^ b =: 8 π^(n)_a^ b . Here, the left-hand side in Eq. (<ref>) is the linear term of ^(n)_ab and the first term in the right-hand side is the non-linear term consists of the lower-order metric perturbation ^(i)_ab with i<n. The right-hand side 8 π^(n)_a^ b of Eq. (<ref>) is regarded an effective energy-momentum tensor for the nth-order metric perturbation ^(n)_ab. The vacuum background condition G_a^ b=0 implies the identity ∇_a^(1)_b^ a[A] = 0 and Eq. (<ref>) implies ∇^a^(n)_a^ b = 0. This equation gives consistency relations which should be confirmed. Note that ^(n)_a^ b does not include ^(n)_ab, since the terms -^( NL)_a^ b[ {. ^(i) F_cd| i<n }] and ^(n)_a^ b in Eq. (<ref>) don't include ^(n)_ab due to the vacuum background condition. This situation is same as that when we solved the linear equations (<ref>)–(<ref>). Furthermore, we decompose ^(n)_ab as ^(1)_ab =: ∑_l,m_AB S_δ (dx^A)_a (dx^B)_b + 2 r ∑_l,m{_(e1)AD̂_pS_δ + _(o1)Aϵ_pqD̂^qS_δ} (dx^A)_(a (dx^p)_b) + r^2∑_l,m{_(e0)1/2γ_pq S_δ + _(e2)( D̂_pD̂_q - 1/2γ_pqΔ̂) S_δ + _(o2)ϵ_s(pD̂_q)D̂^sS_δ} (dx^p)_a (dx^q)_b . 
Then, the replacements T̃_AB→_AB, T̃_(e1)A→_(e1)A, T̃_(o1)A→_(o1)A, T̃_(e0)→_(e0), T̃_(e2)→_(e2), T̃_(o2)→_(o2) in the solutions (<ref>)–(<ref>) yield the solutions to Eq. (<ref>). 5. Summary ———- We proposed a gauge-invariant treatment of the l=0,1-mode perturbations on the Schwarzschild background spacetime as the Proposal <ref>. Following this proposal, we derived the l=0,1-mode solutions to the Einstein equations with the general linear perturbations of the energy-momentum tensor in the gauge-invariant manner. The derived solution in the l=1 odd mode actually realizes the linearized Kerr solution in the vacuum case. Furthermore, we also derived the l=0,1 even-mode solutions to the Einstein equations. In the vacuum case, in which all components of ^(1)_ab vanish, the l=0 even-mode solution realizes the only the additional mass parameter perturbation of the Schwarzschild spacetime. These results are the realization of the linearized gauge-invariant version of uniqueness theorem of Kerr black hole and these solutions are physically reasonable. Owing to this realization, we may say that our proposal is also physically reasonable. Details of our discussions are given in Ref. <cit.>. The fact that we confirmed Conjecture <ref> for the linear-metric perturbations in the Schwarzschild background case including the l=0,1 modes implies that the extension to any-order perturbations through our gauge-invariant formulation <cit.> was possible, at least, in the case of the Schwarzschild background case. Thus, we can develop a higher-order gauge-invariant perturbation theory on the Schwarzschild background spacetime <cit.>. We leave the development for specific astrophysical situations such as gravitational-wave astronomy through our formulation as future works. 99 LIGO-GW150914-2016 B. P. Abbot et al. (LIGO Scientific Collaboration and Virgo Collaboration), Phys. Rev. Lett. 116 (2016), 061102. LISA-homepage LISA home page: https://lisa.nasa.gov T.Regge-J.A.Wheeler-1957 T. Regge and J. A. Wheeler, Phys. Rev. 108 (1957), 1063; F. Zerilli, Phys. Rev. D 2 (1970), 2141. K.Nakamura-2021a K. Nakamura, Class. Quantum Grav. 38 (2021), 145010. K.Nakamura-2003 S. Sonego and M. Bruni, Commun. Math. Phys. 193 (1998), 209; K. Nakamura, Prog. Theor. Phys. 110, (2003), 723; K. Nakamura, Prog. Theor. Phys. 113 (2005), 481. K.Nakamura-2011 K. Nakamura, Class. Quantum Grav. 28 (2011), 122001; K. Nakamura, Int. J. Mod. Phys. D 21 (2012), 124004; K. Nakamura, Prog. Theor. Exp. Phys. 2013 (2013), 043E02. K.Nakamura-2014 K. Nakamura, Class. quantum Grav. 31, (2014), 135013. K.Nakamura-2021b K. Nakamura, Lett. High Energy Phys. 2021 (2021), 215. K.Nakamura-2010 K. Nakamura, Advances in Astronomy, 2010 (2010), 576273; K. Nakamura et al., “Theory and Applications of Physical Science vol.3,” (Book Publisher International, 2020). DOI:10.9734/bpi/taps/v3. (Preprint arXiv:1912.12805); K. Nakamura, arXiv : 2110.13508v7 [gr-qc]; arXiv : 2110.13512v4 [gr-qc]; arXiv : 2110.13519v4 [gr-qc].
http://arxiv.org/abs/2307.02740v1
20230706025947
Dense Retrieval Adaptation using Target Domain Description
[ "Helia Hashemi", "Yong Zhuang", "Sachith Sri Ram Kothur", "Srivas Prasad", "Edgar Meij", "W. Bruce Croft" ]
cs.IR
[ "cs.IR", "cs.CL" ]
Helia Hashemi (0000-0001-7258-7849), University of Massachusetts Amherst, United States, hhashemi@cs.umass.edu (part of this work was done during a research internship with Bloomberg). Yong Zhuang (0000-0002-7858-5569), Bloomberg, United States, yzhuang52@bloomberg.net. Sachith Sri Ram Kothur (0009-0006-4858-3737), Bloomberg, United States, skothur@bloomberg.net. Srivas Prasad (0009-0006-0379-7416), Bloomberg, Canada, sprasad60@bloomberg.net. Edgar Meij (0000-0003-0516-3688), Bloomberg, United Kingdom, emeij@bloomberg.net. W. Bruce Croft (0000-0003-2391-9629), University of Massachusetts Amherst, United States, croft@cs.umass.edu.

In information retrieval (IR), domain adaptation is the process of adapting a retrieval model to a new domain whose data distribution is different from the source domain. Existing methods in this area focus on unsupervised domain adaptation, where they have access to the target document collection, or supervised (often few-shot) domain adaptation, where they additionally have access to (limited) labeled data in the target domain. There also exists research on improving zero-shot performance of retrieval models with no adaptation. This paper introduces a new category of domain adaptation in IR that is as-yet unexplored. Here, similar to the zero-shot setting, we assume the retrieval model does not have access to the target document collection. In contrast, it does have access to a brief textual description that explains the target domain. We define a taxonomy of domain attributes in retrieval tasks to understand different properties of a source domain that can be adapted to a target domain. We introduce a novel automatic data construction pipeline that produces a synthetic document collection, query set, and pseudo relevance labels, given a textual domain description. Extensive experiments on five diverse target domains show that adapting dense retrieval models using the constructed synthetic data leads to effective retrieval performance on the target domain.

CCS Concepts: Information systems → Learning to rank

Dense Retrieval Adaptation using Target Domain Description

§ INTRODUCTION The effectiveness of neural information retrieval (IR) models has been well-established in recent years <cit.>. However, these models have primarily demonstrated strong performance in settings where the training and test data follow a similar data distribution <cit.>. When well-performing neural models developed for one test collection, e.g., MS MARCO <cit.>, are applied to a substantially different one, the results are often worse than those produced by much simpler bag-of-words models such as BM25 <cit.>. This poses a problem in real-world applications, where access to large, domain-specific training data is limited. To address this issue, a family of methods known as “domain adaptation” has been developed. There are various approaches to domain adaptation in information retrieval, as summarized in Table <ref>. In the zero-shot setting, the assumption is that the model has been trained on a large-scale test collection in a source domain, but no data from the target domain is available during training. It is worth noting that in the zero-shot setting, there is essentially no adaptation taking place, as the model is simply being tested on the target domain.
In contrast, unsupervised domain adaptation models assume that the target document collection is available for adaptation. The few-shot setting takes this further and assumes that a small set of query-document pairs with relevance labels on the target domain is available, allowing the retrieval model to be adapted to the target. In this work we introduce a new category of domain adaptation methods for neural information retrieval, which we refer to as “domain adaptation with description.” Studying this problem is not only interesting from an academic perspective, but also has potential applications in several real-world scenarios, where the target collection and its relevance labels are not available at training time. For example, these may not be available yet or at all or, even if they were, target domain owners may be hesitant to provide them for various reasons such as legal restrictions. There are also applications with privacy concerns, for instance in the case of medical records or where the data contains personally identifiable information. Another example can be found when a competitive advantage is involved, as potential use of the data may benefit competitors. Therefore, if an organization lacks the resources for training neural IR models in-house and desires to outsource the process, they should be able to provide a high-level textual description that outlines the task and characteristics of the data in a general manner. Our approach then allows the organization to convey the necessary information to a third party without compromising sensitive information or violating legal restrictions. In this paper, we investigate the task of domain adaptation for information retrieval (IR) tasks by utilizing target domain descriptions. We propose a taxonomy for the task and analyze the various ways and attributes by which a domain can be adapted. We differentiate our task from similar studies that have been conducted in recent years and explain the limitations of existing technologies. To address these limitations, we propose a novel pipeline that utilizes the domain descriptions to construct a synthetic target collection and generate queries and pseudo relevance labels to adapt the initial ranking model trained on a source domain. Our approach takes advantage of state-of-the-art instruction-based language models to extract the properties of the target domain based on its given textual description. We show that a retrieval-augmented approach for domain description understanding can effectively identify various properties of each target domain, including the topic of documents, their linguistic attributes, their source, etc. The extracted properties are used to generate a seed document using generative language models and then an iterative retrieval process is employed to construct a synthetic target collection, automatically. Following prior work on unsupervised domain adaptation <cit.>, we automatically generate queries from our synthetic collection based on the query properties extracted from the target domain description. We then generate pseudo relevance labels for each query given an existing cross-encoder reranking model and use the created data for adapting dense retrieval models to the target domain. Extensive experiments on five diverse target collections, ranging from financial question answering to argument retrieval for online debate forums, demonstrate the effectiveness of the proposed approach for the task of domain adaptation with description. 
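The query generation and pseudo-labeling steps outlined above follow the same general recipe as the generative pseudo-labeling approach discussed in the related work. The sketch below illustrates that recipe with off-the-shelf Hugging Face models; the checkpoint names, sampling settings, and function are illustrative assumptions, not the configuration used in this paper.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from sentence_transformers import CrossEncoder

# Query generator (doc2query-style) and a reranker used as a pseudo-labeler.
qg_name = "BeIR/query-gen-msmarco-t5-base-v1"                    # assumed checkpoint
qg_tok = AutoTokenizer.from_pretrained(qg_name)
qg_model = AutoModelForSeq2SeqLM.from_pretrained(qg_name)
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # assumed checkpoint

def make_training_pairs(documents, queries_per_doc=3):
    """Generate synthetic queries per document and score (query, doc) pairs
    with the cross-encoder to obtain pseudo relevance labels."""
    pairs = []
    for doc in documents:
        inputs = qg_tok(doc, truncation=True, max_length=384, return_tensors="pt")
        outputs = qg_model.generate(**inputs, do_sample=True, top_p=0.95,
                                    num_return_sequences=queries_per_doc,
                                    max_new_tokens=48)
        for query in qg_tok.batch_decode(outputs, skip_special_tokens=True):
            score = float(reranker.predict([(query, doc)])[0])
            pairs.append({"query": query, "doc": doc, "pseudo_label": score})
    return pairs
```

The resulting (query, document, pseudo-label) triples are what a dense retriever would be fine-tuned on; the details of how queries and labels are filtered to match the target domain description are given in the methodology section.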
In summary, the main contributions of this work include the following. * Introducing the novel task of domain adaptation with description for information retrieval. * Proposing an automatic data construction pipeline from each target domain description. * Proposing a taxonomy of domain attributes in information retrieval that should be identified for effective adaptation. * Introducing an effective implementation of the proposed pipeline. § RELATED WORK This work is related to the domain adaptation as well as prompt-based language model literature. §.§ Domain Adaptation in Neural IR Research in this area can be categorized into two main groups: supervised and unsupervised. In supervised (often few-shot) domain adaptation, the assumption is that labeled data is available in the source domain and a limited amount of labeled data is available in the target domain. This problem can be formulated as a few-shot learning scenario, as demonstrated by <cit.>. A common approach within this category is transfer learning, which utilizes a pre-trained model from the source domain and fine-tunes it on the target domain using a small set of labeled data. This approach has been shown to improve model performance by allowing the model to learn the specific characteristics of the target domain <cit.>. The unsupervised setting assumes that access to target documents is available, but queries and relevance labels are not. <cit.> proposed a generative pseudo-labeling approach for this scenario. They generated synthetic queries, from documents and applied a re-ranking based pseudo labeling approach for each query and document pair. Then, the model was fine-tuned using the generated query-document pairs. <cit.> proposed an answer-aware strategy for domain data selection, which selects data with the highest similarity to the new domain. The source data examples were sorted based on their distance to the target domain center, and the most similar examples were chosen as pseudo in-domain data to re-train the question generation model. Additionally, they presented two confidence modeling methods, namely, generated question perplexity and BERT fluency score, which emphasized labels that the question generation model was more confident about. Recently, <cit.> introduced a zero-shot dense retrieval model for adaptations by using a generative model to generate hypothetical documents relevant to the query. These documents were used as queries and, with the use of pre-trained Contriever <cit.>, documents from the target domain were retrieved. §.§ Prompt-based Language Models Language models have been widely used in information retrieval (IR) and natural language processing (NLP) applications due to their ability to accurately represent text. They are machine learning models that are trained to predict the likelihood of a sequence of words. Currently, the state-of-the-art approach is to use large transformer-based language models, such as BERT <cit.>, GPT <cit.>, and T5 <cit.>. An evolving technique for training these models is called “prompting.” GPT-3 <cit.> is an example of a successful language model that was trained using this technique. Prompting refers to using language models to generate text by providing the model with a “prompt,” which is a short text that serves as a starting point for the model's generation. The idea behind prompting is to provide the model with a specific context or task, so that it can generate text that is more focused and coherent. Prompts can be used for few-shot learning. 
To be more specific, language models can be fine-tuned for specific tasks using a small amount of task-specific data, such as a few examples or instructions. These types of models are called instruction-tuned language models. They include T0 <cit.>, InstructGPT <cit.>, and Tk-Instruct <cit.>. Instruction-tuned models are promising in that they make it possible to fine-tune language models on new tasks with minimal data. The authors of InstructGPT <cit.> argue that it is more effective and truthful than GPT-3 at following user intention. In this context, the term “instruction” is distinct from “description” as used in this paper. In previous research, the term “instruction” has been used interchangeably with “intention” and is closely related to the concept of user intent in the field of IR. For example, it was found that if GPT-3 is prompted to explain the moon landing to a 6-year-old, it outputs the completion of the prompt text, while InstructGPT generates a more accurate and appropriate response that actually explains the moon landing with simple wording <cit.>. This is attributed to their training – GPT-3 predicts the next word, while InstructGPT employs techniques such as reinforcement learning from human feedback for fine-tuning the model to better align with user instructions. Other recent research has focused on fine-tuning language models to follow instructions using academic NLP datasets such as FLAN <cit.> and T0 <cit.>. However, all these instruction-based language models are currently limited in their ability to perform complex, multi-step tasks, as opposed to the high-level task-oriented approach used in this study. Instruction-tuned language models have been effectively applied to various NLP tasks, but have received less attention in the field of IR. This is due to the challenge of casting a retrieval task into the sequence-to-sequence format typically used by these models, as it requires encoding a large corpus of documents. Concurrently with our work, <cit.> proposed a retrieval method that explicitly models a user's search intent by providing a natural language instruction. They concatenated the query with the instruction, encoded it as the query embedding, and then computed the cosine similarity between query and document pairs. <cit.> used InstructGPT to encode a query with its instruction and generated a hypothetical document, which they later used as the query to improve dense retrieval. While we use both of these ideas in our baselines, our approach to defining the task differs significantly. In both of the aforementioned papers, the authors simply concatenated the instruction to the query. However, this approach is limited to handling atomic commands that improve alignment with human intentions, such as “write an answer to this question.” These types of instructions are distinct from high-level overviews of complex tasks that require multiple steps to complete, such as our task. § METHODOLOGY In this section, we explain the problem formulation and a taxonomy of domain attributes that can be used to understand domain descriptions. Such a domain understanding component can produce attribute values for a synthetic corpus construction model that uses a large language model to generate one seed document with these attributes and then performs an iterative retrieval process from a heterogeneous collection such as the Web for collection creation.
The constructed collection will be then used to generate queries and pseudo relevance labels that are aligned with the properties of the target domain, as extracted by our domain understanding component. This pipeline leads to a synthetic training set that can be used to adapt a dense retrieval model to the target domain. §.§ Problem Formalization Let M be a retrieval model that is trained on the source domain D_1, and T be the textual description of the target retrieval domain D_2, where D_2 ≠ D_1. The goal is to adapt the retrieval model M to the target domain D_2 and obtain the retrieval model M' that performs effectively on D_2. Assume that W is a large-scale heterogeneous collection, such as a Web collection, that can be used as an external resource as required. This large-scale collection can be used for synthetic collection construction for any target domain description. §.§ A Taxonomy of Domain Attributes in IR The term “domain” is used quite loosely in NLP and IR and defined in myriad ways <cit.>. It is commonly used to describe a type of corpus that is “coherent”, such as a specific topic or linguistic register <cit.>. However, the concept of domain has evolved in recent years, leading to ongoing research in this area. For example, there is a distinction between “canonical” data (e.g., edited news articles) and “non-canonical” data (e.g., social media), and models trained on one type may not perform well on the other. There is an ongoing debate over what constitutes a “domain” in the field of information retrieval (IR), and whether subdomains exist within a larger domain. This uncertainty makes it difficult to tackle the domain adaptation problem and develop a universal algorithm, as domain shifts are specific to each case and models may not perform robustly when transferred from one case to another. In order to clarify the different stances on the definition of a “domain” we have developed a taxonomy for domains and their attributes in the context of IR. Therefore, we define a domain based on the set of attributes defined in our taxonomy. This taxonomy can be used to develop general-purpose domain adaptation solutions as it enumerates the possible ways in which two domains can be different. We argue that every retrieval task is composed of three variables: query, documents, and relevance notion. We propose that attributes related to these three categories together define a retrieval domain. In other words, for any domain D, we define a set of attributes {a_1, a_2, ⋯, a_n}, where each attribute a_i is either related to the properties of query, document, or relevance. Through careful exploration of many different retrieval tasks, including the ones in the BEIR benchmark <cit.> and the ones organized by TREC and CLEF evaluation campaigns over the last few decades, we compile a taxonomy that includes seven query-level attributes, seven document-level attributes, and one attribute denoting the relevance notion. The attributes, their definition, and examples are presented in Table <ref>. In the interest of space, do not list them here again. We argue that if the value of at least one attribute belonging to any of the three categories changes, a domain shift has occurred. We highlight the asymmetric nature of query and document attributes that presents unique challenges for domain adaptation in IR compared to NLP tasks. Finally, we note this taxonomy can be used to see what attributes differ between domains and that we can leverage those for effective adaptation. 
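To make the taxonomy concrete, the following minimal sketch represents a retrieval domain as a bundle of query-level attributes, document-level attributes, and a relevance notion, and flags a domain shift whenever at least one attribute value differs. The attribute keys and example values are illustrative assumptions; the authoritative list is the one in Table <ref>.

```python
from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class RetrievalDomain:
    """A retrieval domain as a set of attribute values (cf. the taxonomy)."""
    query_attrs: Dict[str, str] = field(default_factory=dict)   # e.g. topic, language, format
    doc_attrs: Dict[str, str] = field(default_factory=dict)     # e.g. topic, source, structure
    relevance_notion: str = "topically relevant"

def shifted_attributes(a: RetrievalDomain, b: RetrievalDomain) -> Set[str]:
    """Return the attributes whose values differ; a non-empty set signals a domain shift."""
    diff = set()
    for group in ("query_attrs", "doc_attrs"):
        xa, xb = getattr(a, group), getattr(b, group)
        diff |= {f"{group}.{k}" for k in set(xa) | set(xb) if xa.get(k) != xb.get(k)}
    if a.relevance_notion != b.relevance_notion:
        diff.add("relevance_notion")
    return diff

# Example with illustrative values: a web QA source domain vs. a financial QA target.
source = RetrievalDomain({"topic": "open web", "format": "question"},
                         {"topic": "open web", "source": "web pages"}, "contains the answer")
target = RetrievalDomain({"topic": "financial", "format": "question"},
                         {"topic": "financial", "source": "StackExchange"}, "contains the answer")
print(shifted_attributes(source, target))   # at least one attribute differs -> domain shift
```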
§.§ Domain Description Understanding As discussed in Section <ref>, clients may be reluctant to provide actual target domain data. However, providing a high-level description of the data is usually feasible. At the time of this research, no dataset that includes descriptions of retrieval tasks was known to us. Concurrently, <cit.> provided instructions for some IR test collections. However, we started this research prior to their work being submitted to arXiv (Dec 2022) and, as noted in Section <ref>, the instructions they use provide more fine-grained information on human intentions, in line with what was referred to as “narratives” in the TREC 2004 Robust Track <cit.>. That being said, in our problem, we need a description of the retrieval task that includes information on the appearance of the corpus and queries, in addition to user intentions, and how relevance is defined for that task. To obtain these descriptions, we gave 15 diverse IR collections from the BEIR benchmark <cit.> to three IR experts (not the authors of this paper) and asked them to explain the retrieval task for each. We asked them to resolve their differences of opinion during a brainstorming session; they shared their explanations and worked together to reach a single description for each collection, which we refer to as T in our formalization. After the descriptions were finalized, we provided the same people with the taxonomy we have defined in Table <ref>, and asked them to annotate the descriptions based on the taxonomy attributes. This annotation results in the gold labels of attribute values based on our taxonomy for each dataset. We provide one dataset description and its annotation in Table <ref> for reference. We argue that a proper understanding of the description has a significant impact on adaptation. If the model understands the value of each attribute in the taxonomy, it knows when a domain shift has occurred and what attributes need to be adapted for the entire model to be adapted. Therefore, our domain description understanding component focuses on predicting the values of attributes defined in our taxonomy. Since the value of the attributes can be open-ended text rather than defined options, the best architectural choice is a text generation model that takes the domain description as input and generates the value of the attributes as output. Therefore, we adopt a state-of-the-art prompt-based text generation model F to perform the task, i.e., ChatGPT. We instruct the model to take the description of the domain and extract the values of the attributes introduced in the taxonomy.[After some rounds of trial and error, we landed on the following instruction I as the best-performing one for our task: “For each defined retrieval task in the Passage, find the values related to the relevance notion (e.g., topically relevant, contains the answer, references of a paper, paraphrase, evidence for the claim, etc.)
as well as the following query and document attributes: query topic (e.g., medical, scientific, financial, mathematical, adult, etc.); query linguistic features (e.g., formal, informal, etc.); query language (e.g., english, french, etc.); query structure (e.g., unstructured, semi-structured, structured, etc.); query modality (e.g., text, image, video, etc.); query format (e.g., keyword query, tail query, question, claim, argument, passage, etc.); document topic (e.g., medical, scientific, financial, mathematical, adult, etc.); document linguistic features (e.g., formal, informal, etc.); document language (e.g., english, french, etc.); document structure (e.g., unstructured, semi-structured, structured, etc.); document modality (e.g., text, image, video, etc.); document format (e.g., passage, long document, question, etc.); document source (e.g., StackExchange, wikipedia, reddit, youtube, twitter, facebook, quora, etc.). If the value of each attribute cannot be inferred, return NA”] In addition to the instruction, we include up to three examples from the collections most similar to the target domain via retrieval augmentation. Let R(T,C') denote a retrieval model (SBERT in our case) that takes the target domain description and a collection of textual descriptions of different domains (C'). The description understanding function F takes the instruction I, the retrieved examples, and the domain description T, and outputs the values of the attributes introduced in the taxonomy. Formally: F(I, T, R(T, C')) = {a'_1, a'_2, ⋯, a'_n} where n=15. Discussion One may argue that the taxonomy is easy to understand and interpret, and that users can therefore directly identify these properties for the target domain themselves, bypassing the need for a Domain Description Understanding component. This argument is valid. In other words, the taxonomy we define in Table <ref> enables users of the system to directly identify the value of each attribute for the target domain. That being said, the Domain Description Understanding component enables users to simply describe their target domain in natural language. Similar to any semantic parsing task, such as text-to-SQL, this component creates a natural language interface for this task. Thus, studying it can shed light on how feasible it is to extract domain attributes from natural language. §.§ Synthetic Target Data Construction As depicted in Figure <ref>, once we identify the domain attributes of our taxonomy for the target domain (i.e., domain description understanding), we propose to build a synthetic training set based on the generated attribute values. This consists of three steps: synthetic document collection construction, synthetic query generation, and pseudo-labeling. In the following we describe each of these steps. Our data construction approach is presented in Algorithm <ref>. §.§.§ Synthetic Document Collection Construction One naive approach to synthesizing the collection is to generate documents one by one using sequence-to-sequence models. In preliminary experiments, we observed that many state-of-the-art and free-to-use sequence-to-sequence models, such as the latest version of Tk-Instruct <cit.>, are not sufficient to generate meaningful documents given our target domain descriptions. Instead, they generate passages containing words from our instructions, rather than generating a document with the provided attributes. It can be argued that with the rise of black-box generative language models like ChatGPT, this issue will be reduced.
However, it is important to note that these models are not free to use. At the time of submitting this paper, ChatGPT was not yet available through an API, so we used the next best available large language model, the latest version of GPT-3 from OpenAI. OpenAI charges customers based on the cumulative number of tokens in the input and output, at a rate of $0.02 per 1K tokens. If we consider an average passage to be 300 tokens, the minimum cost to generate a corpus like MS MARCO (consisting of 8M passages) would be $12,000. This assumes the model only takes the domain description with no example as input and generates one passage in line with the target domain description. It is worth mentioning that our preliminary experiments showed that the model was unable to generate a desired passage even with three examples in the prompt. We were able to generate good quality passages with ChatGPT, but it may be even more expensive once available through the API. Additionally, these models cannot perform a sequence of tasks step by step (e.g., curating a collection then queries, etc.). They may miss some parts of the sequence or do it all at once (generating documents and queries simultaneously), causing the automation of retrieval model training to be difficult. To overcome all these obstacles, we propose an iterative document selection process (i.e., lines 7-14 in Algorithm <ref>). We first generate a document based on the domain attributes we extracted from the target domain description T. We call this generated document a seed document. We find that ChatGPT is the only language model that could successfully generate a related document given our document attributes. We tried T5, Tk-Instruct, and GPT-3 and they could not generate a document with the given attributes. Instead, they generate text using the words in the given instruction, which is not sufficient for effective domain adaptation. We then run an iterative retrieval process using BM25 and a BERT-based cross-encoder reranking model trained on the source domain <cit.>. It retrieves k documents (we empirically observe that k should be set to a small value, often less than 50) in response to the seed document and then adds all the retrieved documents to the seed set. Next, another document from the seed set is selected and another k documents are retrieved. This process repeats until we reach a collection C with a desired synthetic collection size (N). §.§.§ Synthetic Query Generation In line 15 of Algorithm <ref>, we generate k' queries per document in the constructed document collection C. To this end, we train an instruction-based T5 model on MS MARCO for query generation using the MS MARCO query and relevance attributes. It is similar to docT5query <cit.>, but also takes query and relevance properties of the target domain as input. To be precise, we form the following input for the instruction-based T5 model: `Generate a query for the following Passage based on the given Attributes. Passage: ⋯. Attributes: ⋯.' We include the query and relevance attributes in the instruction. Therefore, it learns to generate queries with the given properties. The model is trained with a maximum likelihood objective as follows: -∑_k log P(q_k | q_{i<k}, q_attr, r_attr), where q_k is the kth output query token, q_attr denotes the extracted values of the query attributes in the taxonomy, and r_attr denotes the extracted value of the relevance attribute. We use beam search with a beam size of k'.
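The collection construction loop described above (lines 7-14 of Algorithm <ref>) can be sketched as follows. This is a simplified, hedged illustration: `retrieve_top_k` is a hypothetical wrapper around BM25 retrieval over W followed by cross-encoder re-ranking, and the strategy for picking the next document from the seed set is assumed here to be random selection, which Algorithm <ref> may specify differently.

```python
import random

def build_synthetic_collection(seed_document, retrieve_top_k, n_target=10_000, k=30):
    """Iteratively grow a synthetic collection C from a single seed document.

    `retrieve_top_k(text, k)` is assumed to return k documents from the
    heterogeneous collection W in response to the given text.
    """
    collection = [seed_document]
    frontier = [seed_document]            # documents not yet used as retrieval queries
    seen = {seed_document}
    while len(collection) < n_target and frontier:
        query_doc = frontier.pop(random.randrange(len(frontier)))
        for doc in retrieve_top_k(query_doc, k):
            if doc not in seen:
                seen.add(doc)
                collection.append(doc)
                frontier.append(doc)
    return collection[:n_target]

# Each document in the returned collection would then be fed to the query
# generator (k' queries per document) and the pseudo-labeler described next.
```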
§.§.§ Pseudo Labeling Research on weak supervision by <cit.> showed that we can use existing retrieval models to annotate documents for a given query set and train student models based on the annotated data. More recently, this approach has been found effective in unsupervised domain adaptation <cit.>. We use a cross-encoder re-ranking model based on BERT <cit.> that is trained on MS MARCO (our source domain) as a teacher model and annotate documents through soft labeling: the input includes the query, the relevance notion, and a document, and the output scores are used as labels. Let D_q ⊂ C be a set of documents that should be annotated for query q by the pseudo-labeler. We construct D_q as follows: * D_q includes the document that q was generated from. * D_q includes 25 random documents from the top 100 documents retrieved by BM25.[We empirically observe that taking 25 random samples from the top 100 documents leads to more robust performance compared to using the top 25 documents.] * D_q includes 25 random documents from the top 100 documents retrieved by the dense retriever M_θ. §.§ Dense Retrieval Adaptation Given the constructed training set with pseudo-labels, we use the following listwise loss function for adapting the dense retrieval model M_θ to the target domain. We used Contriever <cit.> (an unsupervised dense retrieval model trained using contrastive learning) that is fine-tuned on MS MARCO as our M_θ. Let D_q ⊂ C be the set of documents annotated for query q ∈ Q through pseudo-labeling. We use the following listwise loss function for each query q: ∑_{d, d' ∈ D_q} 1{y^T_q(d) > y^T_q(d')} |1/π_q(d) - 1/π_q(d')| log(1 + e^{y^S_q(d') - y^S_q(d)}), where π_q(d) denotes the rank of document d in the result list produced by the student dense retrieval model, and y^T_q(d) and y^S_q(d) respectively denote the scores produced by the teacher and the student models for the pair of query q and document d. This knowledge distillation listwise loss function is inspired by LambdaRank <cit.> and is also used by <cit.> for dense retrieval distillation. In addition, we take advantage of the other passages in the batch as in-batch negatives. Although in-batch negatives resemble randomly sampled negatives that can be distinguished easily from other documents, this strategy is efficient since passage representations can be reused within the batch <cit.>. § EXPERIMENTS This section describes our datasets, experimental setup, and results. §.§ Tasks and Data For evaluating our domain adaptation solution, we chose the target collections to be as diverse as possible within the public test collections in the BEIR benchmark <cit.>. Below we provide brief explanations of these collections. Source Domain As the source domain, we focus on passage retrieval provided by the MS MARCO collection <cit.>. Following the standard practice for zero-shot evaluation on the BEIR benchmark, most baseline models have been pre-trained on this dataset, which serves as our source domain. It contains 8.8M passages and an official training set of 532,761 query-passage pairs collected from the Bing search log. Each query often has only one relevant passage, and the relevance labels are binary. Target Retrieval Task 1: Bio-Medical IR Our first target retrieval task focuses on retrieving scientific documents for biomedical queries. We use the collection provided by the TREC Covid Track in 2020 (TREC-COVID) <cit.>, which is an ad-hoc retrieval task based on scientific documents related to the Covid-19 pandemic offered by the CORD-19 corpus <cit.>.
Similar to <cit.>, we use the July 16, 2020 version of the CORD-19 collection as the target corpus, and the final cumulative judgments with query descriptions from the original task as test queries. The test collection consists of 50 test queries and a corpus of 171K documents. Target Retrieval Task 2: Financial Question Answering Our second task studies answer passage retrieval in response to natural language questions in the financial domain. We use FiQA-2018 Task 2 <cit.> (FiQA), which focused on answering questions based on personal opinions. The document collection was created by crawling posts on StackExchange under the Investment topic from 2009-2017, which serves as the corpus with 57K documents. The test set consists of 648 queries. Target Retrieval Task 3: Argument Retrieval This task explores ranking argumentative texts from a collection based on relevance to a given query on various subjects. We use the ArguAna dataset <cit.>, which has passage-level queries. The goal is to retrieve the most suitable counterargument for a given argument. The collection was gathered from online debate portals. There are 1,406 argument queries in the dataset and the corpus size is 8.67K. Target Retrieval Task 4: Duplicate Question Retrieval The aim of duplicate question retrieval is to detect repeated questions asked on community question-answering (CQA) forums. We use the Quora dataset that consists of 522,931 unique questions in the corpus and 10,000 test queries. Target Retrieval Task 5: Fact Checking Fact checking involves verifying a statement against a large pool of evidence. It requires knowledge of the statement and the ability to analyze multiple documents. In a retrieval setting, the query is a claim, and we attempt to retrieve documents that confirm or refute the claim. We use the SciFact collection <cit.> that consists of 300 scientific claims as test queries and 5K paper abstracts as the corpus. Constructing the Heterogeneous Collection W: As explained in Section <ref>, W is a heterogeneous collection of documents from which our model selects documents to synthesize the target retrieval corpus. To create this collection, we ensure that there is no document leakage between the target retrieval tasks and W.[Note that document leakage is not necessarily an issue in this task. In the real world, the Web contains various types of documents that can satisfy the attributes of each target domain (e.g., each BEIR collection). The main challenge is to identify and recover these documents from a large heterogeneous corpus.] We create W by putting together the documents from MS MARCO <cit.>, SciDocs <cit.>, NFCorpus <cit.>, Touche-2020 <cit.>, and CQADupStack <cit.>. This results in a collection with 9M+ documents. §.§ Experimental Setup and Evaluation Metrics We implemented and trained our models using TensorFlow. The network parameters were optimized using Adam <cit.> with linear scheduling and a warmup of 4,000 steps. The learning rate was selected from [1e-6, 1e-5] with a step size of 1e-6. The batch size was set to 128. We set k to 30, N to 10,000, and k' to 5 (see Algorithm <ref>). We use BERT <cit.> with the pre-trained checkpoint made available by Contriever-FT <cit.> as the initialization. Hyper-parameter selection (for both BM25 and neural models) and early stopping were conducted based on the performance in terms of MRR on the MS MARCO validation set. For query generation we use the T5 model from <cit.>.
As the re-ranking teacher model for pseudo labeling, we use a BERT cross-encoder, similar to <cit.>. For domain description understanding, we use three examples in the ChatGPT instruction. Following BEIR <cit.>, we use NDCG@10 and Recall@100 as evaluation metrics. We use a two-tailed paired t-test for identifying statistically significant performance differences using Bonferroni correction with p-value < 0.05. §.§ Results and Discussion We compare our method against the following baselines: * BM25 <cit.>: an effective term matching retrieval method that evaluates and ranks a group of documents based on the presence of query terms regardless of their position in each document. * ANCE <cit.>: a bi-encoder dense retrieval model that constructs hard negatives from an Approximate Nearest Neighbor (ANN) index of the corpus based on the model's representations. Consistent with previous works, we used RoBERTa <cit.> as the base language model that is trained on MS MARCO for 600K steps for our experiments. * SBERT <cit.>: another dense retrieval baseline that uses BERT with Siamese and triplet network architectures to generate sentence embeddings. * Contriever <cit.>: an unsupervised dense retrieval model that learns adaptive representation via contrastive learning. * Contriever-FT <cit.>: the Contriever model that is fine-tuned on the MS MARCO training set. * HyDE <cit.>: it utilizes GPT-3 to generate a hypothetical document. Then it uses Contriever to retrieve from the corpus with the hypothetical document as the query. This work was proposed concurrently with ours. * ANCE - Cond Query: following <cit.>, which is another concurrent work to ours, we concatenate the domain description with the query in ANCE so the query encoder is aware of the domain description. * Contriever-FT - Cond Query: this is similar to the last baseline, but uses Contriever-FT as the dense retrieval model. As a source of reference we compare against the following approaches: (1) Oracle: this is our proposed approach that, instead of document collection construction, uses the target domain collection for query generation; and (2) CE Reranker: this is a BERT-based cross-encoder reranker trained on MS MARCO, which reranks the top 100 documents returned by BM25. Since this is not a dense retrieval model, we only report its results as a point of reference. The results are reported in Table <ref>. We observe that dense retrieval baselines have difficulties surpassing the BM25 performance on the TREC COVID, SciFact, and ArguAna datasets in terms of NDCG@10 in a zero-shot setting. This demonstrates the difficulty of dealing with distribution shift in neural information retrieval. HyDE, which uses GPT-3 to generate hypothetical documents for test queries, performs well in terms of Recall@100 on the SciFact and ArguAna datasets. The proposed approach outperforms all dense retrieval baselines in terms of NDCG@10 in all collections. These improvements are statistically significant in all cases. It also performs better than its counterparts in terms of Recall@100 on FiQA and Quora. Interestingly, our approach is the only dense retrieval model that can beat BM25 on TREC COVID and ArguAna. This demonstrates the effectiveness of our data creation pipeline. The performance gap between the Oracle model and the proposed approach is often less than 10%, confirming the quality of the synthetic corpus our model creates. The Oracle model performs better than the proposed approach in all cases, except for Recall@100 on Quora.
Note that the Oracle model does not necessarily provide upper-bound results; it just uses the target domain collection instead of synthetic collection construction. This result suggests that it is possible to construct a collection that dense retrieval models benefit from for adaptation, even more than the actual target collection. Our model outperforms the cross-encoder reranker model in terms of Recall@100 in all cases, except for TREC COVID. Ablation Study To demonstrate the impact of each design decision we made in our pipeline, we ablate each major component in our model and report the results in Table <ref>. We first exclude the pseudo-labeling component (i.e., assuming that a document used for generating each query is relevant and any other document is non-relevant), and we observe a statistically significant performance drop in nearly all cases. In the second ablation study, we exclude the seed document generation and use the domain instruction itself as the query to retrieve documents from W and construct the collection C. This leads to an even larger performance drop. Our last ablation focuses on converting the iterative collection construction part to a single retrieval run (i.e., retrieving 10,000 documents in response to the seed document). We observe that in this case, some collections are hurt more than others. For example, the performance drop on Quora is more significant than on FiQA and TREC COVID. But generally speaking, the iterative process leads to better performance. Evaluating the Quality of the Synthetic Corpus Construction Approach To provide a deeper look into the quality of the corpus that we construct in our model, we take the union of W and all the target domain collections listed above. We then run our synthetic corpus construction experiment to see the accuracy of the model in retrieving the documents that actually belong to the target corpus. We report the average performance in Figure <ref>. In the left plot, we vary the number of seed documents generated by ChatGPT and we observe that a single seed document is sufficient and including more documents degrades the accuracy of the constructed collection. In the middle plot, we vary the number of retrieved documents per query (i.e., k in Algorithm <ref>) and observe that the model shows relatively stable performance across various values of k; however, the smallest value led to the poorest performance. In the last experiment, we increase the synthetic corpus size from 1,000 to 5,000 and observe that the accuracy of reconstructing documents from the actual target domain decreases. However, this performance decrease is not substantial, and the accuracy is still higher than 48% when selecting 5,000 documents. This is another signal showing that the proposed approach for corpus construction performs effectively. Analyzing the Domain Description Understanding Component As described in Section <ref>, we provided three IR experts (not the authors of this work) with all 15 public collections in the BEIR benchmark, and asked them to come up with a description for each retrieval task associated with each collection in a collaborative session. We later presented them with our taxonomy and asked them to annotate the descriptions accordingly. The input of the description understanding model is the task description, in addition to an arbitrary choice of examples, and the output is expected to be the value of taxonomy attributes.
Since we cast the problem of description understanding into a sequence-to-sequence format, following the literature, we used ROUGE-L <cit.> and Exact Match as our evaluation metrics. ROUGE-L (Recall-Oriented Understudy for Gisting Evaluation) is a commonly used evaluation metric in NLP for summarization tasks, measuring overlap between n-grams in reference summaries and the generated summary. The "L" refers to the longest common subsequence. ROUGE-L scores range from 0 to 1, with 1 being a perfect match. Exact Match (EM) measures the percentage of predictions that exactly match the ground truth, with 1 being a perfect match and 0 no match. Since the task is generative, automatic metrics may not be sufficient, so three annotators manually labeled the outputs of each model, scoring 1 if desirable and 0 if not. Final labels were decided through majority voting. Table <ref> presents the results of ChatGPT for domain description understanding. We made sure that the model is not benefiting from any session data by initiating a new session for each experiment. Each cell displays the average of scores for a particular attribute across 15 collections. The last row reflects the overall performance of each setting based on the average of all attributes. As expected, the highest performance is mostly achieved when the instruction and three examples are given. The reason is that the model receives more examples, and thus has a better chance of encountering similar cases. As Table <ref> illustrates, the results of the manual metric highly correlate with the automatic metrics, except for the query and document modality attributes in the instruction-only setting. We observe that in this setting, modality attributes resulted in 0.00 with the automatic metrics, but they resulted in 1 in manual annotation. After looking into the results, we found that the disparity arises because the ground truth labels the modality feature as uni-modal, multi-modal, etc., but the sequence-to-sequence model labels it differently, e.g., as text. This issue is resolved after seeing one example in the prompt. We also observe that query and document structure attributes result in close-to-zero performance in the instruction-only setting. This may be due to the fact that in our instruction, we only provided the model with examples of values for these attributes. However, these attributes have been implicitly mentioned in the domain descriptions, and some in-domain knowledge is necessary to interpret the structure or modality of the task. Again, the performance would significantly improve after seeing only one example. Note that all datasets within BEIR are unstructured, so the model may repeat the only label it has been given as an example for structure and modality attributes. Further, we observe that relevance notion is one of the hardest attributes to predict. This makes sense because understanding what constitutes relevance usually requires a deep understanding of the task, which these models currently lack. A deep dive into the results showed us that in many cases, the model generalizes the query attributes to the document attributes, especially in cases that are not explicitly described. For example, if the query topic attribute was predicted as "medical," the model may generalize it to the document topic as well. However, we know that IR features are not necessarily symmetric. A medical query could request information from a heterogeneous corpus such as the Web, and the symmetric assumption makes data synthesis unrealistic.
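For reference, the two automatic metrics used above can be computed as in the minimal sketch below (token-level ROUGE-L as an F1 over the longest common subsequence, and exact match after simple normalization). This is an illustrative implementation, not the exact evaluation script used for Table <ref>.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence between token lists a and b."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] else max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]

def rouge_l_f1(prediction, reference):
    pred, ref = prediction.lower().split(), reference.lower().split()
    lcs = lcs_length(pred, ref)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(pred), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

def exact_match(prediction, reference):
    return float(prediction.strip().lower() == reference.strip().lower())

# Example: comparing a predicted attribute value against a gold annotation.
print(rouge_l_f1("informal user questions", "informal questions"))  # partial credit
print(exact_match("english", "English"))                            # 1.0
```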
§ CONCLUSIONS AND FUTURE WORK This paper introduced a new category of domain adaptation methods for neural information retrieval and proposed a pipeline that leverages target domain descriptions to construct a synthetic target collection, generate queries, and produce pseudo-relevance labels. The results of experiments conducted on five diverse target collections demonstrated that our proposed approach outperforms existing dense retrieval baselines in such a domain adaptation scenario. This work holds the potential for practical applications where the target collection and its relevance labels are unavailable, while preserving privacy and complying with legal restrictions. Future work involves incorporating additional domain-specific information, such as data source and language, and evaluating the pipeline's ability to conceptualize more implicit descriptions. Acknowledgements. This work was supported in part by the Center for Intelligent Information Retrieval and in part by a Bloomberg Data Science PhD Fellowship. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.
http://arxiv.org/abs/2307.00734v1
20230703034321
On the choice of training data for machine learning of geostrophic mesoscale turbulence
[ "F. E. Yan", "J. Mak", "Y. Wang" ]
physics.ao-ph
[ "physics.ao-ph", "cs.LG", "physics.flu-dyn" ]
F. E. Yan1, J. Mak1,2, Y. Wang1,2 1Department of Ocean Science, Hong Kong University of Science and Technology 2Center for Ocean Research in Hong Kong and Macau, Hong Kong University of Science and Technology Fei Er Yan feyan@connect.ust.hk Julian Mak julian.c.l.mak@googlemail.com * Investigate dependence of convolutional neural networks on choice of training data for geostrophic turbulence * Eddy force function used as a way to filter out dynamically inert eddy fluxes * Models trained on filtered eddy fluxes at least as accurate but more robust than models trained on the divergence of eddy fluxes `Data' plays a central role in data-driven methods, but is not often the subject of focus in investigations of machine learning algorithms as applied to Earth System Modeling related problems. Here we consider the case of eddy-mean interaction in rotating stratified turbulence in the presence of lateral boundaries, a problem of relevance to ocean modeling, where the eddy fluxes contain dynamically inert rotational components that are expected to contaminate the learning process. An often utilized choice in the literature is to learn from the divergence of the eddy fluxes. Here we provide theoretical arguments and numerical evidence that learning from the eddy fluxes with the rotational component appropriately filtered out results in models with comparable or better skill, but substantially improved robustness. If we simply want a data-driven model to have predictive skill then the choice of data and/or its quality may not be critical, but we argue it is highly desirable and perhaps even necessary if we want to leverage data-driven methods to aid in discovering unknown or hidden physical processes within the data itself. § PLAIN LANGUAGE SUMMARY Data-driven methods and machine learning are increasingly being utilized in various problems relating to Earth System Modeling. While there are many works focusing on the machine learning algorithms or the problems themselves, there have been relatively few investigations into the impact of data choice or quality, given the central role the data plays. We consider here the impact of data choice for a particular problem of eddy-mean interaction of relevance to ocean modeling, and provide theoretical arguments and numerical evidence to suggest that one choice (informed by our theoretical understanding of the underlying problem) is preferable over a more standard choice utilized in the literature. While the choice of data and/or its quality may not be critical if we simply want a data-driven model to `work', we argue it is highly desirable (possibly even a necessity) if we want to go beyond having models that just `work', such as leveraging data-driven methods to help us in discovering unknown or hidden physical processes within the data itself. § INTRODUCTION Data-driven methods and machine learning algorithms are increasingly being utilized in problems relating to Earth system and/or climate modeling, and there is no doubt such methods have a strong potential in greatly enhancing model skill and/or reducing computation cost in various numerical models.
Some examples of usage include dynamical processes in the atmosphere <cit.>e.g.,>BrenowitzBretherton19, YuvalOGorman20, Mooers-et-al21, Connolly-et-al23, Sun-et-al23, climate modeling <cit.>e.g.,>Bescombes-et-al21, SonnewaldLguensat21, sea ice prediction <cit.>e.g.,>Bolibar-et-al20, Andersson-et-al21, identification problems in oceanography <cit.>e.g.,>Jones-et-al19, Thomas-et-al21, Sonnewald-et-al19, Sonnewald-et-al23, and our primary focus here, on ocean mesoscale turbulence <cit.>e.g.,>BoltonZanna19, ZannaBolton20, GuillauminZanna21. We refer the reader to the works of Reichstein-et-al19, Irrgang-et-al21, Sonnewald-et-al21 and CampsValls-et-al23 for a more comprehensive review. One criticism of some data-driven methods and machine learning algorithms is the `black-box' nature of the resulting models. In general, for a problem with input x and output y, a focus of data-driven methods is to find some mapping f such that f(x) = y, where f could be deterministic or probabilistic depending on the algorithm used to obtain f. The lack of interpretability for f in certain instances brings into question several important issues with the use of data-driven methods. The first is robustness and applicability in different regimes: are the models doing the right things for the `right' reasons (or at least not the `wrong' ones)? If for the `wrong' reasons, then it is perfectly plausible that trained up models can behave erratically when taken outside the trained regimes and, given the nonlinear and convoluted nature of the model itself, generate subtly wrong results that might be close to impossible to check. The second relates to further utilities of the methods themselves: is it possible to use such methods to aid process discovery from the data itself? A lack of interpretability would suggest a negative answer to that question. With that in mind, there has been an increasing focus on physically constrained and/or interpretable/explainable models <cit.>e.g.,>ZhangLin18, Brenowitz-et-al20, ZannaBolton20, Beucler-et-al21, Kashinath-et-al21, SonnewaldLguensat21, Yuval-et-al21, Barnes-et-al22, Clare-et-al22b, LopezGomez-et-al22, Guan-et-al23. While the tools and algorithms do exist, this is a fundamentally harder problem, since the training step ultimately becomes a constrained optimization problem. While the algorithms and nature of the resulting model f (e.g. linear vs. nonlinear, generative vs. discriminative, model complexity) are important details, at the very base level we are really dealing with the problem of data regression. We would thus expect data choice and/or data quality to critically affect the training, the performance or the useful information that could be extracted/encoded by the model, but these are issues that have not received much investigation. If we simply want a model that `works' in the sense of producing a `skilled' prediction in whatever metric we think is relevant, then the issue of data quality and/or content may not be critical, since we are simply looking for some optimal fit. If on the other hand we are interested in the harder problem of optimal fit with constraints, such as having a model that works for the `right' reasons (e.g. satisfying physical conservation laws), or using data-driven methods for process discovery (e.g. telling us about the underlying physics of a problem), then one might imagine the choice and quality of data exposed to the model should be important.
Furthermore, certain data may be more accessible for the machine learning algorithms to extract/predict features from (e.g. smoothness and/or spatio-temporal scale of data), which has practical consequences for the optimization procedure at the model training and prediction step. To demonstrate that not all choices of data are equal, we consider in this work the problem of eddy-mean interaction in rotating stratified turbulence in the presence of boundaries, a setting that is particularly relevant to ocean modeling and parameterization of geostrophic mesoscale eddies. The problem relates to the presence of rotational fluxes <cit.>e.g.>MarshallShutts81, FoxKemper-et-al03, Maddison-et-al15, and we provide some theoretical arguments and evidence on why learning from the eddy force function, which is one method to deal with the presence of dynamically inert rotational fluxes, might be preferable to learning from the divergence of the eddy fluxes. We will largely leverage the experimental procedure of BoltonZanna19, albeit with important differences to be detailed. While the present investigation is largely empirical and relies on input of external knowledge that is somewhat specific to the present problem, the present work serves to open a discussion into data choice and/or quality, as well as probing the available information content in data in the general case, possibly in a more systematic and objective fashion than the one performed here. The technical problem statement relating to rotational fluxes and its plausible impact on data quality for data-driven methods are outlined in <ref>. In <ref> we outline our experimental procedure, the numerical model used, and the data-driven method. <ref> summarizes the impact of data choice on the skill of the trained models. <ref> considers the issue of model robustness via investigating the models' skill and their sensitivity to noise in the training data. We close in <ref> and provide outlooks, focusing particularly on further experiments to probe the information content of the data being used in data-driven methods of relevance to the present eddy-mean interaction problem. § ROTATIONAL FLUXES AND THE EDDY FORCE FUNCTION §.§ Formulation For this particular work we consider turbulent motion under the influence of strong rotation and stratification. Specifically, we consider the Quasi-Geostrophic (QG) limit <cit.>e.g.>Vallis-GFD, which is a widely-used and applicable limit for oceanic mesoscale dynamics where the motion is geostrophic at leading order. If we consider the standard Reynolds decomposition with A = \overline{A} + A', \overline{A+B} = \overline{A} + \overline{B}, \overline{A'} = 0, where the overbar denotes a mean (with the projection operator assumed to commute with all relevant derivatives), and a prime denotes a deviation from the mean, the mean QG Potential Vorticity (PV) equation takes the form ∂\overline{q}/∂ t + ∇·( \overline{u}\,\overline{q}) = -∇·\overline{u' q'} + \overline{Q}. Here, t denotes the time, ∇ denotes the horizontal gradient operator, so that the PV q is defined as q = ∇^2ψ + β y + ∂/∂ z f_0/N_0^2 ∂ b/∂ z, where ψ is the streamfunction, f = f_0 + β y is the Coriolis frequency (background value and leading order meridional variation), N_0 is the (static) buoyancy frequency related to the imposed background stratification, b = f_0 ∂ψ / ∂ z is the buoyancy, u = ∇^⊥ψ = (-∂ψ/∂ y, ∂ψ / ∂ x) is the non-divergent geostrophic velocity, and Q represents all forcing and dissipation. An aim in studies of eddy-mean interaction is to understand the inter-dependence of the nonlinear eddy flux terms on the right hand side of Eq. (<ref>) and the mean state.
A particular goal with eddy parameterization is to relate the eddy flux term u' q' with some large-scale mean state, normally as u' q'∼ f(q, …; κ, …), where f is some mapping between mean states (such as q) and associated parameters (such as κ) to the eddy fluxes. Once such a relation exists, we take a divergence, from which we obtain the eddy forcing on the mean. A notable example would be PV diffusion <cit.>e.g.,>Green70, Marshall81, RhinesYoung82, where we directly postulate the form of F as u' q' = -κ∇q ⇒ -∇·u' q' = ∇·(κ∇q). We emphasize the ordering of the operations here: we obtain a functional relation between the mean and eddy fluxes first, then we take a divergence to obtain the eddy forcing (cf. Fickian diffusion closures). §.§ The issue of rotational fluxes The form as given in Eq. (<ref>) is suggestive that data-driven approaches would be useful by either directly regressing/learning for the mapping f, or when a mapping (cf. parameterization) such as Eq. (<ref>) is given, to learn for the parameters such as κ. However, there is a subtlety involved here, arising from the fact that it is the divergence of the eddy fluxes that arises <cit.>and is generic beyond the QG system, where the eddy forcing arises from a divergence of the Eliassen–Palm flux tensor, with the eddy fluxes as the tensor components, e.g.>Young12, MaddisonMarshall13. A two-dimensional vector field such as u' q' can, via a Helmholtz decomposition, be written as u' q' = ∇Ψ̃ + ê_z ×∇Φ̃ + H̃, where ê_z is the unit vector pointing in the vertical, and the terms are respectively a divergent (vanishing under a curl), rotational (vanishing under a divergence), and harmonic component (vanishing under both a curl and divergence). Since the eddy forcing on the mean appears as a divergence, the rotational (and harmonic) eddy fluxes are dynamically inert, and one might expect that the presence of such dynamically inert fluxes is going to be detrimental to the regression/learning by data-driven methods. Similar issues arise, for example, in a diagnostic problem for the PV diffusivity κ, where rotational fluxes are known to severely contaminate the calculation <cit.>e.g. Fig.1>Mak-et-al16b. One way to get around this problem is to perform a Helmholtz decomposition as above and perform learning/regression/diagnoses using only the divergent term ∇Ψ̃. This approach is however complicated by the issue of gauge freedom in the presence of boundaries <cit.>e.g.,>FoxKemper-et-al03, Maddison-et-al15, Mak-et-al16b. The standard Helmholtz decomposition as commonly employed (e.g. in electromagnetism problems) is unique because we have periodic or rapidly decaying boundary conditions. The non-uniqueness of the Helmholtz decomposition in the presence of boundaries arises from the fact that there is generically no inherited natural boundary condition for arbitrary choices of vector fields (although there may be ones that are physically relevant depending on the problem), and that the divergent term ∇Ψ̃ is unique only up to an arbitrary rotational gauge. One possibility might be to utilize the divergence of the eddy flux directly (e.g. ∇·u' q'). This is somewhat the approach taken for example in the works of BoltonZanna19 and ZannaBolton20, who consider applying data-driven methods to learn about sub-grid momentum forcing in an ocean-relevant model.
While they report positive results from data-driven methods in their work, there are some points that are worth revisiting, particularly regarding learning from the divergence of the eddy flux. One issue is the spatial resolution of data itself: the eddy flux data itself is already small-scale, and now we want its divergence, which is an even finer scale quantity, so there could be sensitivity of the data to the numerical model resolution itself. Following on from this point is the issue of robustness. The learning problem here is trying to find a mapping between very small-scale data and large-scale data (e.g., divergence of eddy flux and say some function of the streamfunction), and questions arise whether this leads to sensitivity to the training data, or whether such a choice is unnecessarily taxing on the machine learning algorithms. A final point is more subtle and more speculative, to do with commutativity, i.e. ordering of operations. Eddy parameterizations are usually formulated as in Eq. (<ref>): we learn a f(…) = u' q', from which we take a divergence of the learned f to get the eddy forcing. If we are learning from ∇·u' q', then the ordering is different, because we are really learning for some ∇·u' q' = f̂(…), where we would hope that f̂ = ∇· F. There is however no reason to expect such an equality in general, since the resulting mappings F or F̂ obtained from machine learning algorithms are nonlinear. If we are simply interested in something that just `works', then these aforementioned points may not actually matter. If, on the other hand, we are interested in learning about the underlying physics via data-driven methods, then it is not clear whether the aforementioned properties (or the lack thereof) become fundamental limitations in the applicability of the procedure. §.§ The eddy force function If we instead consider learning from data at the eddy flux level, then we probably want to filter out the rotational component in some way, ideally in a unique fashion. While the statement about the non-uniqueness of the Helmholtz decomposition holds for generic tracer fluxes in the presence of boundaries, it turns out, for the QG system and for the eddy PV flux, there is in fact a natural boundary condition that is inherited from the no-normal flow condition <cit.>. The decomposition u' q' = -∇Ψ^q_ eff + ê_z ×∇Φ^q_ eff + H^q, where Ψ^q_ eff denotes the eddy force function (note the extra minus sign on the gradient term compared to Eq. <ref>), and may be obtained from solving the Poisson equation ∇·u' q' = -∇^2 Ψ^q_ eff subject to homogeneous Dirichlet boundary conditions Ψ^q_ eff = 0. Such an object is uniquely defined (from fixing the gauge freedom via the naturally inherited boundary condition), and Ψ^q_ eff can be proved to be optimal in the H^1_0 sense, i.e. -∇Ψ^q_ eff is a minimizer in L^2, or that the dynamically active part of the eddy flux encoded by divergent part is as `uncontaminated' as possible, at least in a simply connected domain <cit.>see Appendix A of>Maddison-et-al15. Furthermore, via the linearity assumption of the eddy force function and boundary condition inheritance <cit.>, we can define an eddy force function for the components that contribute towards the definition of eddy PV flux: for example, from the definition of PV given in Eq. 
(<ref>), we can decompose u' ζ' = -∇Ψ^ζ_ eff + ê_z ×∇Φ^ζ_ eff + H^ζ, where ζ = ∇^2ψ is the relative vorticity, giving rise to a relative vorticity or momentum eddy force function Ψ^ζ_ eff <cit.>related to the Reynolds stress via the Taylor identity, e.g.>MaddisonMarshall13, computed via an analogous Poisson equation to Eq. (<ref>) also with homogeneous Dirichlet boundary conditions, and similarly for a buoyancy eddy force function Ψ_ eff^b. For concreteness, the discussion will focus on the PV eddy force function Ψ_ eff^q, but we document results from all three contributions in the later sections. The eddy force functions have been previously demonstrated to be a useful quantity for diagnoses problems <cit.>e.g.,>[in diagnosing eddy diffusivities via inverse approaches]Mak-et-al16b, and we might expect that it would be a useful quantity for data-driven methods applied to eddy parameterization of rotating stratified turbulence. To compare with the discussion above, the eddy force function is a larger-scale object, which might lead to weaker sensitivity during the training phase compared to training on ∇·u' q'. The gradient of the eddy force function -∇Ψ^q_ eff uniquely defines the dynamically relevant eddy flux, suggesting that -∇Ψ^q_ eff would serve as a better choice of data compared to training on u' q', since the latter contains dynamically irrelevant data. Additionally, given parameterizations are more naturally formulated as a relation between the eddy fluxes and the mean state (cf. Eq. <ref>), -∇Ψ^q_ eff avoids the possible issue with commutativity mentioned above. § MODEL DETAILS Taking into account the above discussion, we explore here whether the eddy force function serves as a potentially useful object for machine learning of ocean mesoscale turbulence. For a problem y = f(x), the focus here is principally on the skill of the models f, trained on various output data y for the same inputs x, where skill is to be measured by various mismatches between y_ data and y_ predict = f(x_ data). We detail here a set of experiments to test and explore the following hypotheses: * models trained upon the filtered eddy flux -∇Ψ^q_ eff would be more skillful than ones trained upon the full eddy flux u' q', * models trained upon the filtered eddy flux -∇Ψ^q_ eff would possibly be comparable in skill to ones trained upon the divergence of the eddy flux ∇·u' q', but the latter models might be more sensitive to data quality. The experimental approach will largely mirror that of BoltonZanna19. However, one important fundamental difference of our work is the choice of average, which impacts the definition of eddies from Eq. (<ref>). Where BoltonZanna19 take a low-pass spatial filter as the projection operator, here we employ a time-average and has the property that A' = 0 in line with properties of a Reynolds opeartor. Our eddy forcing then is in the more familiar form of a nonlinear eddy flux (e.g. ∇·u' q'), rather than as a difference between the spatially averaged quantities <cit.>e.g., S = ∇·(u q) -∇·(u q), Eq. 7 of>BoltonZanna19. The current definition of the eddy force function Ψ^q_ eff assumes a Reynolds average <cit.>, and while there are likely extensions and relaxation of assumptions possible, for simplicity we do not pursue this avenue and utilize time-averaging. 
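To illustrate how the eddy force function is obtained in practice, the sketch below solves the Poisson problem ∇²Ψ_eff = -∇·u'q' with homogeneous Dirichlet boundary conditions on a regular grid and recovers the filtered flux -∇Ψ_eff by finite differences. This is a simplified stand-in for the finite-element (FEniCS) solve described in the next subsection: the five-point Laplacian, direct sparse solve, and equal grid spacing in both directions are assumptions made for brevity.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def eddy_force_function(div_flux, dx):
    """Solve ∇²Ψ = -∇·(u'q') with Ψ = 0 on the boundary (homogeneous Dirichlet).

    div_flux : 2D array of the eddy flux divergence on a regular grid.
    dx       : grid spacing (assumed equal in both directions).
    Returns Ψ_eff on the full grid (zero on the boundary).
    """
    ny, nx = div_flux.shape
    n_in_y, n_in_x = ny - 2, nx - 2                       # interior points only
    lap1d = lambda n: sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
    laplacian = (sp.kronsum(lap1d(n_in_x), lap1d(n_in_y)) / dx**2).tocsr()
    rhs = -div_flux[1:-1, 1:-1].ravel()                   # row-major flattening
    psi = np.zeros_like(div_flux)
    psi[1:-1, 1:-1] = spla.spsolve(laplacian, rhs).reshape(n_in_y, n_in_x)
    return psi

def filtered_flux(psi, dx):
    """Divergent (dynamically active) eddy flux, -∇Ψ_eff, by centred differences."""
    dpsi_dy, dpsi_dx = np.gradient(psi, dx, dx)
    return -dpsi_dx, -dpsi_dy

# In practice an iterative or finite-element solver would be preferred for a
# 512 x 512 grid; the direct solve here is only to keep the sketch short.
```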
§.§ Numerical ocean model setup The physical setup we consider is essentially the same three-layer QG square double gyre configuration as BoltonZanna19 <cit.>cf.>Berloff05a, Karabasov-et-al09, Marshall-et-al12, Mak-et-al16b, but solved with a pseudo-spectral method instead of using the finite difference CABARET scheme of Karabasov-et-al09. The numerical model () generating the data presented in this work utilizes the parameters detailed in Mak-et-al16b, with the stratification parameters chosen such that the first and second Rossby deformation radii are 32.2 and 18.9 km, with a horizontal grid spacing of Δ x = Δ y = 7.5 km (which is 512 by 512 in horizontal grid points), a horizontal viscosity value of ν = 50m^2 s^-1, and a time-step of Δ t = 30mins. A wind forcing with peak wind stress of τ_0 = 0.8N m^-2 is used <cit.>correcting a typo in Table 1 of>Mak-et-al16b. The model is spun up from rest for 20,000 days, and a further integration period of 5,000 days after this spin up is performed for computing time-averages. The accumulated time-averages of the eddy fluxes are used to compute the eddy force function Ψ_ eff via solving the Poisson equation in Eq. (<ref>) with homogeneous Dirichlet boundary conditions, performed per layer. For this procedure, we leverage the FEniCS software <cit.> following the previous works of Maddison-et-al15 and Mak-et-al16b, making use of the high level abstraction, automatic code generation capabilities and the numerous inbuilt solvers that are particularly suited to elliptic equations we have here. The data from each grid point of the numerical model are the nodal values on a regular structured triangular mesh, with a projection onto a piecewise linear basis (CG1). All derivative operations are performed on the finite element mesh, and the nodal values of the relevant fields are restructured into arrays for feeding into the machine learning algorithms. Fig. <ref> shows some sample output data in the surface layer. The two horizontal components of the time-averaged eddy PV fluxes in panels (b,c) are the datasets returned by the numerical model, which is sampled onto a finite element mesh as a vector object. The resulting object's divergence can then be computed, and the result is given in panel (a). As expected, the divergence of the eddy PV flux has more smaller-scale fluctuations and is less smooth than the eddy PV fluxes. Solving the relevant Poisson equation in FEniCS, the PV eddy force function Ψ_ eff^q is shown in panel (d). From Maddison-et-al15, the gradient of the eddy force function ∇Ψ^q_ eff has a physical interpretation when considered together with the time-mean streamfunction ψ <cit.>not shown, but see>Maddison-et-al15, interpreted as whether eddies are accelerating the mean-flow (if ∇Ψ^q_ eff·∇ψ > 0, interpreted as an input of energy into the mean by eddies) or decelerating the mean flow (if ∇Ψ^q_ eff·∇ψ < 0, interpreted as an extraction of energy from the mean by eddies). Here, the eddy force function can be shown to correspond to the regimes where the eddies are slowing down the mean-flow via baroclinic instability when the Western Boundary Current first separates (the first positive-negative pattern emanating from the western boundary, which is anti-correlated with ∇ψ), while the next dipole pattern (the first negative-positive patterns, which is correlated with ∇ψ) is an eddy forcing of the mean-flow corresponding to an eddy driven regime <cit.>cf.>WatermanJayne11, WatermanHoskins13. 
From this Ψ_ eff^q, the horizontal components of the gradient lead to the eddy PV fluxes with the rotational component removed, which are shown in panels (e,f). While not obvious at first sight, the divergence of the full eddy PV flux (panels b,c) and the divergence of the filtered eddy PV flux (panels e,f) are both equal to ∇·u'q' (panel a) up to numerical solver errors (here at least four orders of magnitude smaller than the data). In this instance, note also that the filtered eddy flux has qualitatively different spatial patterns to the full eddy flux, and that the filtered eddy flux is around an order of magnitude smaller than the full eddy fluxes. The behavior is consistent with observations that the rotational eddy fluxes can be large (e.g., Griesel-et-al09), and suggests we probably do want to filter the dynamically inert component out should we utilize eddy flux data to learn about geostrophic turbulence. §.§ Model training procedure Following BoltonZanna19 we employ Convolutional Neural Networks (CNNs; e.g., Goodfellow-et-al-16) to map between the specified inputs and targets. In line with the intended investigation, the choice of parameters for training the CNNs is kept fixed and chosen as in BoltonZanna19, and the main quantity we vary is the choice of output data. The mappings that are returned as a CNN are denoted: * f_ div^q(…), with output data as the divergence of the eddy PV flux ∇·u'q', * f_ full^q(…), with output target data as the full eddy PV flux u'q', * f_ eff^q(…), with output data as the dynamically active eddy PV flux as defined through a gradient of the PV eddy force function (cf. Eq. <ref>) -∇Ψ^q_ eff. Note that f_ div^q(…) predicts a scalar field, while f_ full/eff^q(…) returns a vector field. A possible choice could be to train a model on the eddy force function itself, and from the trained model's predicted eddy force function compute its Laplacian to obtain the divergence of the eddy flux. As mentioned above, this is an extremely difficult test for model skill since gradient operations amplify mismatches; we comment on related results and observations in the conclusions section. To train up these mappings in the present time-averaged case, we follow the schematic given in Fig. <ref>, partially inspired by the approach of BoltonZanna19. The model domain is partitioned into small overlapping boxes. The input and output data associated with each of these boxes are paired up, and the pairs are each assigned a number and randomly shuffled (i.e. sampling from a uniform probability distribution function) depending on the choice of a random seed, and subsequently assigned to the training set (for training up the model) and the validation set (for tuning the hyperparameters in order to minimize a specified loss function) with an 80:20 ratio. A model is trained up, and the skill of the model is its ability to predict the global field. In the 512 by 512 pixel domain, we take the small boxes to be 40 by 40 pixels, with a stride of six, resulting in a collection of 80^2 = 6400 images of the domain. For statistical significance, an ensemble of 20 such models was trained up, each ensemble member only differing in the choice of the random seed, and the same sets of random seeds are used for the ensembles to be compared against.
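As an illustration of the partitioning just described, the following sketch extracts overlapping 40 by 40 patches with a stride of six and performs a seeded 80:20 shuffle into training and validation sets; the array names and the NumPy-based implementation are assumptions for illustration, not the exact code used here.

import numpy as np

def make_patches(inp, tgt, size=40, stride=6):
    # Cut matching (ny, nx) input and target fields into overlapping square patches.
    xs, ys = [], []
    ny, nx = inp.shape
    for i in range(0, ny - size + 1, stride):
        for j in range(0, nx - size + 1, stride):
            xs.append(inp[i:i + size, j:j + size])
            ys.append(tgt[i:i + size, j:j + size])
    return np.stack(xs), np.stack(ys)

def train_val_split(x, y, seed, frac=0.8):
    # Seeded uniform shuffle of the paired patches, then an 80:20 split.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_train = int(frac * len(x))
    return (x[idx[:n_train]], y[idx[:n_train]]), (x[idx[n_train:]], y[idx[n_train:]])

# For the 512 by 512 domain this yields of the order of 80 x 80 patches per field
# (the exact count depends on how the domain edges are treated); each ensemble
# member simply uses a different seed in train_val_split.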
The CNNs are built using the PyTorch platform <cit.>, where the CNN architecture consists of three hidden convolutional layers with square kernels (of size 8, 4 and 4 respectively), a two-dimensional max pooling layer with square kernel of size 2, and a fully-connected linear activation layer as the output. The CNNs are trained with a batch size of 64, using the Adam optimizer <cit.> with a mean squared error loss function. An early stopping criterion is used to monitor the loss function during the training to avoid over-fitting; for simplicity, we use a constant learning rate of 10^-4 during training. § MODEL SKILL We first evaluate how the predictive skill of the various models depends on the choice of target data. The skill of the models is judged by their ability to reduce mismatches of the divergence of the eddy PV flux, via repeated predictions of smaller patches (here taken with a stride of 2 pixels), with averages taken as necessary. Note that while f_ div^q(…) already predicts the divergence of the eddy PV flux, we will take a divergence of the outcome of f_ full/eff^q(…) to give the predicted divergence of the eddy PV flux. The normalized mismatch between data and prediction will be judged as ϵ^q_L^2(F^q_(·)) = ∇·u'q' - F^q_(·)(…)^2_L^2/∇·u'q'^2_L^2, where F_(·)^q denotes the divergence of the eddy PV flux from the models f^q_(·)(…), and the L^2 norm is defined as g^2_L^2 = ∫_A g^2 dA for some scalar field g. Each ensemble member will make a set of predictions with an associated mismatch, and the associated averages and standard deviations are computed to gauge model skill. We note that the test for skill chosen here is inherently harder and biased against the models trained on the eddy PV fluxes (filtered or otherwise), since an extra divergence operation is required in computing the mismatches. The above choice to compare the divergence of the eddy PV flux was taken noting that we want a quantity that is comparable across the three sets of models, and there is a theoretical issue in comparing quantities at the eddy PV flux level (since that requires integrating the prediction of F^q_ div(…), which is then subject to a choice of boundary condition). One could argue whether it is the L^2 mismatches we are ultimately interested in, since we may for example be interested in the patterns of the forcing, rather than the exact locations of the forcing. As a compromise, we consider the Sobolev semi-norms (e.g., Thiffeault12) given by g^2_Ḣ^p = ∫_A |(-∇^2)^p/2 g|^2 dA = ∑_k^2 + l^2≠ 0 (k^2 + l^2)^p|ĝ_k,l|^2, where ĝ_k,l are the Fourier coefficients of g, (k,l) are the respective wavenumbers, and the link between integral and sum follows from Parseval's theorem (e.g. if p=0 then it is the L^2 norm above when the k=l=0 mode is included). Sobolev semi-norms with negative p will weigh the lower wavenumbers (i.e. the larger-scale patterns) more, and in this instance a lower normalized mismatch ϵ^q_Ḣ^p(F^q_(·)) = ∇·u'q' - F^q_(·)(…)^2_Ḣ^p/∇·u'q'^2_Ḣ^p indicates that the mismatches at the large-scales are smaller. Since we are dealing with finite approximations so that k^2 + l^2 < ∞, we can perform the computation, although the formal definition for the Ḣ^p semi-norms is generally for fields with zero mean on a periodic domain and such that the infinite sum converges. For the work here we will focus on the case of p=-1/2, sometimes referred to as the mix-norm (e.g., Thiffeault12); conclusions below are qualitatively the same if p=-1 or p=-2 were chosen (not shown).
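A sketch of how the normalized Ḣ^p mismatch defined above can be evaluated is given below, using the discrete Fourier transform and Parseval's theorem; treating the fields as periodic for the spectral sum and the NumPy implementation are illustrative assumptions consistent with the caveats noted above.

import numpy as np

def sobolev_mismatch(target, pred, p=-0.5):
    # Normalized Sobolev H^p semi-norm mismatch between two 2-D fields;
    # p = 0 recovers the L^2 measure, p = -1/2 the mix-norm used in the text.
    ny, nx = target.shape
    k = np.fft.fftfreq(nx) * nx                   # integer wavenumbers
    l = np.fft.fftfreq(ny) * ny
    k2 = k[None, :]**2 + l[:, None]**2
    weight = np.zeros_like(k2)
    mask = k2 > 0
    weight[mask] = k2[mask] ** p                  # (k^2 + l^2)^p with the k = l = 0 mode dropped

    diff_hat = np.fft.fft2(target - pred)
    targ_hat = np.fft.fft2(target)
    return np.sum(weight * np.abs(diff_hat)**2) / np.sum(weight * np.abs(targ_hat)**2)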
§.§ Models trained on eddy PV fluxes We first focus on models trained up on the data based on the eddy PV flux u'q' with the time-mean streamfunction ψ as the input. Fig. <ref> shows the predicted divergence of the eddy PV flux F_ div/full/eff^q(ψ) as an output from one of the model ensemble members. Compared to the target given in Fig. <ref>(a), the predictions are more smooth with fewer small-scale features, arising from a combination of the fact that CNNs were used, and that our prediction step leads to some averaging of the overlapping regions. Visually, the predictions F_ div^q(ψ) and F_ eff^q(ψ) are almost indistinguishable (the latter having a slightly stronger signal downstream of the Western Boundary Current). On the other hand, the prediction F_ full^q(ψ) shows more fluctuation features than the other two cases. The larger amount of small-scale features in F_ full^q(ψ) likely arises because the model is predicting the eddy PV flux first, before taking a numerical divergence of the data, so any small fluctuations that arise from the prediction are amplified by the divergence operation. In that regard, the fact that the prediction F_ eff^q(ψ) is so similar to F_ div^q(ψ) is rather remarkable. Fig. <ref> shows the more quantitative measure of computing various mismatches in the L^2 norm and the Ḣ^-1/2 semi-norm given in Eqs. (<ref>) and (<ref>) respectively. The results show that the models trained upon the filtered eddy PV flux -∇Ψ_ eff^q outperform the models trained upon the full eddy PV flux u'q', and have a comparable or even better performance compared to the models trained up on the divergence of the eddy PV flux ∇·u'q'. The differences in skill are visually obvious between the models trained on the full eddy flux u'q' and the filtered eddy flux -∇Ψ_ eff^q. The difference between the models trained from the filtered eddy flux -∇Ψ_ eff^q and the divergence of the eddy flux ∇·u'q', while notable in the Ḣ^-1/2 measure, is too close to call in the L^2 measure (e.g. we do not have p<0.05 using the Student's t-test <cit.> under the null hypothesis that the means of F_ div^q(ψ) and F_ eff^q(ψ) are the same). The results here lend support to our expectation that the presence of rotational fluxes contaminates and degrades the accuracy of a trained model, and that the eddy force function provides a viable alternative for use in machine learning approaches that addresses the problem of dynamically inert rotational fluxes, leading to at least comparable performance from a skill point of view (and some evidence to suggest it might be better, although that is dependent on the choice of metric). The observation that F_ eff^q(ψ) is comparable to F_ div^q(ψ) is all the more remarkable when we note that a test based on the models' ability to reproduce the divergence of the eddy flux is intrinsically harder and biased against models trained on -∇Ψ_ eff^q, since an additional divergence operation that is expected to amplify errors is required to produce F_ eff^q(ψ). §.§ Other choices of eddy fluxes and inputs By the linearity assumption in deriving the eddy force function and the definition of PV, analogous eddy force functions for momentum and buoyancy may be defined by a similar decomposition but using the eddy relative vorticity flux u'ζ' (related to the Reynolds stress via the Taylor identity) and u'b' (related to the form stress). Following the notation outlined above, Fig.
<ref> shows the target data ∇·u'ζ' and ∇·u'b', and the analogous predictions of the divergence of the fluxes denoted by F_ div/full/eff^ζ/b(ψ). As for the models trained on the eddy PV flux data shown in Fig. <ref>, the predictions are smoother than the diagnosed target data, which is particularly noticeable for the prediction of the divergence of the eddy relative vorticity flux in Fig. <ref>(b,c,d). For the eddy buoyancy case, the diagnosed target data is already relatively smooth. We note that, visually, F^b_ full(ψ) in Fig. <ref>(g) seems to possess extra features particularly in the downstream region, while F^b_ eff(ψ) and F^b_ div(ψ) in Fig. <ref>(f,h) seem to capture the patterns in the target data well, with some visual hints that the prediction from F^b_ div(ψ) has slightly sharper features. For a more quantitative measure, we show in Fig. <ref> the L^2 and Ḣ^-1/2 mismatches in F^q/ζ/b_ div / full / eff(ψ / q / ζ), totaling the 3^3 = 27 possible combinations. The conclusions over all these possible choices are largely the same as those drawn before, with minor differences. The models trained up on the filtered eddy fluxes outperform those trained upon the full eddy fluxes (except for the case of eddy relative vorticity fluxes), and are comparable or better than models trained on the divergence of the flux (except in the case of the eddy buoyancy fluxes). Noting that eddy PV fluxes have contributions from the eddy buoyancy as well as the eddy relative vorticity fluxes, it is curious that models trained on the filtered eddy fluxes appear to perform worse than models trained on the divergence of the flux for the eddy buoyancy flux (bottom row of Fig. <ref>), but have reasonable performance in the eddy relative vorticity flux case (middle row of Fig. <ref>), such that, together, the resulting skill in the eddy PV flux (top row of Fig. <ref>) still remains comparable (and is possibly slightly better in the Ḣ^-1/2 semi-norm, indicating better matching in terms of large-scale patterns). One possible explanation for the degradation in performance for eddy buoyancy fluxes is that ∇·u'b' is already relatively smooth and larger-scale (Fig. <ref>e), which might be favorable for direct use as training data. On the other hand, the eddy relative vorticity fluxes are inherently smaller-scale (Fig. <ref>a), and the presence of small-scale fluctuations might be unfavorable for direct use as training data, but does not affect models trained on the filtered fluxes as such, since the training data is by definition smoother. The performance of models based on the full eddy relative vorticity fluxes is somewhat surprising, but may be to do with the rotational fluxes being relatively less dominant there: examining the decomposition into divergent and rotational parts via the eddy force function (cf. Fig. <ref>b,c,e,f, not shown), it is found that the divergent component is smaller by about a factor of 2 in the eddy relative vorticity flux, but by a factor of 10 in the eddy buoyancy and PV flux. The results seem to suggest that the main benefits of filtering dynamically inert rotational fluxes would be for the eddy buoyancy and PV fluxes. For completeness, we show in Fig. <ref> the analogous eddy force functions associated with the predictions from the trained models from one of the ensemble members (although the observations detailed here are robust upon examining the outputs from other members); note the appropriate mismatches would be closely related to the Ḣ^-2 semi-norm as defined in Eq.
(<ref>), but with a difference in the boundary conditions. The predictions from models trained on the filtered eddy fluxes (panels d,h,l) have patterns that are largely aligned with the diagnosed eddy force functions from the data (panels a,e,i), up to minor discrepancies (e.g. downstream patterns in panel d compared to panel a, and panel l compared to panel i). The predictions from models trained on the full eddy fluxes (panels c,g,k) show similar patterns although with somewhat more mismatches, particularly in the PV and buoyancy eddy force functions. By contrast, the predictions from the divergence of the eddy fluxes (panels b,f,j) show large-scale disagreements in all three variables, the mismatches being visually the gravest in the PV and buoyancy variables. Given that the eddy force function encodes the dynamically active eddy fluxes, and has an interpretation that ∇Ψ_ eff·∇ψ encodes the sign of energy exchange between the mean and eddy component <cit.>, the finding here suggests the predictions from models trained on the divergence of the eddy fluxes are very likely representing erroneous energy transfers, particularly for processes associated with eddy buoyancy fluxes. § MODEL ROBUSTNESS The above observations of model skill and its sensitivity to small-scale fluctuations bring into question the issue of robustness, particularly for the models trained on the divergence of the eddy fluxes. To explore the sensitivity of skill to noise in the data, we consider a set of experiments where we add noise η(x,y) to the data at the training stage, and judge the models' performance by their ability to predict the target data without noise. To make sure we are comparing models in a consistent manner, we add an appropriately scaled Gaussian distributed noise η(x,y) to the eddy fluxes (u'q', u'ζ', u'b'), from which we compute the divergence of the eddy flux as well as the eddy force function from the noisy data, and train up the models using the procedure outlined above. In that sense the whole set of models is exposed to the same choice of noise, since 1 unit of noise at the divergence level is not necessarily the same as 1 unit of noise at the streamfunction level. The noise level here is measured in units of the standard deviation of the eddy flux data. The hypothesis is that the models trained on the filtered eddy fluxes are more robust than those trained on the divergence of the eddy fluxes, and able to maintain model skill with increased levels of noise. A note to make here is that the stochastic noise η(x,y) in this regard is formally non-differentiable in space, so that the divergence operation on it is not well-defined. In terms of numerical implementation, however, the random numbers sampled from the appropriately scaled Gaussian distribution are the nodal values of the finite element mesh used in FEniCS, and there is a projection onto a linear basis, so that a derivative operation on the projected η(x,y) is allowed within FEniCS, though the operation may be numerically sensitive. An approach we considered is filtering the noise field. We consider solving for some η̃(x,y) satisfying (1 - L^2∇^2)^2η̃ = η with no-flux boundary conditions, and it is the resulting η̃(x,y) that is added to the training data. The resulting η̃ is by construction differentiable at least once, so that a divergence is well-defined.
For the operator (1 - L^2∇^2)^2, the associated Green's function has a characteristic length-scale L that can be interpreted as a filtering length-scale, where the radial spectral power density decreases significantly beyond L (closely related to the Matérn auto-covariance; e.g., Whittle63, Lindgren-et-al11). Note that `noise level' here refers to the magnitude of η(x,y), and that max|η̃(x,y)| < max|η(x,y)| by construction. The L^2 and Ḣ^-1/2 mismatches of F^q/ζ/b_ div / full / eff(ψ) to the data as a function of noise level for the ensemble of models are shown in Fig. <ref>, and consistently we find that the models trained up on the eddy force function outperform the models trained upon the divergence of the eddy flux. The former show a relative insensitivity to noise level, while the latter show a rapid degradation in skill with noise level. It would seem that the use of eddy force function data alleviates the sensitivity to small fluctuations in the data, at least in the present measure and approach. The reduced sensitivity to noise might have been anticipated, since the eddy force function is a result of an elliptic solve of a Poisson equation, where the noisy data is acted upon by an inverse Laplacian operator that leads to substantial smoothing. We would however argue that the relative insensitivity to noise is somewhat surprising, since there is no guarantee that the presence of even reduced fluctuations at the streamfunction level would stay small after spatial derivative operations, given that we are using the divergence of the eddy flux as the target for the measure of skill. While one could also argue that the present robustness test is inherently a hard test for models trained upon the divergence of the eddy flux, we argue the conclusions are robust regardless of whether the noise is added at the flux, divergence of flux or streamfunction level. In fact, the use of the divergence of a flux as training data is likely the cause of the sensitivity to noise: an inherently small-scale field is sensitive to the presence of noise in data, so is likely going to lead to issues with robustness. The conclusions in the above are qualitatively robust for different choices of the filtering length-scale L: with reduced L, the degradation of skill in models trained on the divergence of the eddy fluxes is more rapid with noise level, but the skill of models trained on the filtered eddy fluxes is still relatively insensitive to noise level, and these models remain consistently more skillful than models trained on the divergence of the eddy fluxes. The conclusions are also robust for different choices of inputs (ζ and q), and with sample calculations employing other choices of smoothing, coarse-graining (e.g., Aluie19) or filtering (e.g., Grooms-et-al21) of the noise field η(x,y). § CONCLUSIONS AND OUTLOOKS Data-driven methods are increasingly being employed in problems of Earth System Modeling, and there is no doubt that such methods provide a powerful tool that can in principle be leveraged to not only improve our modeling efforts, but also deepen our underlying understanding of the problems. Most works in the literature thus far have focused on demonstrating the efficacy of the machine-learning methods and algorithms. Here we take a complementary line of investigation in considering the choice and quality of the data itself being fed into the algorithms, for a case where we have some theoretical understanding to inform our choice of data.
While one could argue this is not entirely necessary if we just want something that `works' in the relevant metric(s) for the problem, we argue it is incredibly useful, if not necessary, if we want to leverage data-driven methods to learn about the underlying physical problems, and/or to go beyond `black-box' models. Furthermore, the choice of data can in principle improve the training and/or the performance of the data-driven models themselves, so there is a need for such an investigation into data quality and information content. For this work we focused on the problem of eddy-mean interaction in rotating stratified turbulence in the presence of boundaries, relevant to the modeling and parameterization of ocean dynamics. In such systems it is known that the large-scale mean affects and is affected by the small-scale eddy fluxes, and while we might want to leverage data-driven methods to learn about the relationship between the mean and the eddy fluxes, it is known that in the presence of boundaries the eddy feedback onto the mean is invariant up to a rotational gauge (e.g., MarshallShutts81; FoxKemper-et-al03; Eden-et-al07). In practice the dynamically inert component can be quite large (e.g., Griesel-et-al09; see also Fig. <ref> here), and its presence might be expected to contaminate diagnoses and/or the performance of data-driven models. One possible way around this is to train models based on its divergence (e.g., BoltonZanna19; ZannaBolton20). Here we propose that data with the dynamically inert eddy fluxes filtered out could be used instead. The approach outlined here, we argued, may have the advantage that the resulting field is inherently larger-scale, which would help with model training and sensitivity, and is theoretically more appropriate to use if we want to learn about the underlying physics of the problem, because we do not expect the operations to commute (i.e. given the nonlinearity, learning from the divergence is not guaranteed to be the same as taking the divergence of the learned result). The experimental approach here largely follows that of BoltonZanna19, where we diagnose the various data from a quasi-geostrophic double gyre model to train the model, and compare the model's performance in its prediction. For filtering the eddy flux we employ the eddy force function (e.g., MarshallPillar11; Maddison-et-al15; Mak-et-al16b), which in the present simply connected quasi-geostrophic system is provably optimal in the L^2 norm (and thus unique; see the Appendix of Maddison-et-al15). We made the choice here to measure a model's skill by its ability to reproduce the divergence of the eddy fluxes, over an ensemble of models with 20 members and over a variety of input choices. The findings here are that the models trained on the eddy force function are (a) more skillful than those trained on the full eddy flux (except for the relative vorticity eddy fluxes), (b) at least comparable in skill to models trained on the divergence of the eddy fluxes (except for the buoyancy eddy fluxes), and on occasion better, especially in the Ḣ^-1/2 semi-norm compared to the L^2 norm, where the former weights the matching of the large-scale patterns of the resulting predictions more heavily, and (c) more robust, in that the models are less sensitive to noise in the training data. The first finding is perhaps not unexpected. The latter two findings, we argue, are not entirely obvious, given the divergence operations acting at various steps.
For example, sample calculations where a model is trained on the eddy force function directly (and then taking a Laplacian to obtain a prediction of the divergence of the eddy flux) lead to larger mismatches, which we attribute to the fact that any mismatches in the predicted eddy force function are significantly amplified by the two derivative operations. With that in mind, the fact that the models reported here, trained on the flux filtered by leveraging the eddy force function, have comparable or better skill and superior robustness is a non-trivial result. Exceptions to the above conclusions are that models trained on the divergence of the eddy buoyancy flux are more skillful (bottom row of Fig. <ref>), and models trained on the eddy relative vorticity flux appear comparable whether the rotational component is filtered out or not (middle row of Fig. <ref>). The former might be justified in that the eddy buoyancy flux is already relatively smooth and somewhat larger-scale, so that training on its divergence is not such an issue; however, we also note that the buoyancy eddy force function associated with the predictions of the model trained on the divergence of the eddy buoyancy flux seems to perform the worst (bottom of Fig. <ref>), implying erroneous predictions of eddy energy pathways. The latter behavior is possibly to do with the observation that the dynamically inert rotational component is comparable to the dynamically active divergent component in the eddy relative vorticity flux (as opposed to the two components differing by a factor of ten in the eddy PV and buoyancy flux; see Fig. <ref>b,c,e,f for the eddy PV flux), so the effect of filtering is somewhat marginal. One saving grace is that, in the quasi-geostrophic system, the potential vorticity (with contributions from relative vorticity and buoyancy) is the master variable, and that while models trained up on the relative vorticity or buoyancy fluxes may perform better for those variables separately, the models trained up on the eddy force function have skill and robustness in the master variable. We note that the conclusions reported here appear to be robust even if we use data with only some of the rotational component filtered out in sample calculations (e.g., solving for ψ̃ in Eq. <ref> with no normal flux boundary conditions, not shown), although we lose a little bit of skill and the physical interpretation associated with the eddy force function. One thing we caution against here is drawing a one-to-one comparison of the present work with that of BoltonZanna19 and ZannaBolton20. While it is true those works utilize a similar model, experimental procedure and data, the main theoretical difference is that the choice of average is different: their work utilizes a spatial average, and the eddy flux data there is defined as the difference between the filtered divergence and the divergence of the filtered field (if making an assumption of the zero divergence condition on the resulting velocities). Here we utilize a time average, which is in line with the definition of the eddy force function in Maddison-et-al15, which requires a Reynolds average. While we have not attempted a similar investigation in the case of spatial averaging, it is not implausible that there is an analogous object to the eddy force function when a spatial average is employed, or that a simple Helmholtz-type decomposition could yield the desired filtering of the dynamically inert rotational component, but this is beyond the scope of the present investigation.
Because of the choice of time average, we have limited data in time, and one could wonder whether our conclusions are simply to do with the limited data availability. This is unlikely to be the case: we also carried out an analogous investigation with rolling time averages as well as ensemble averages (not shown), and the conclusions drawn from those results are essentially identical to those here. This is perhaps not surprising noting that the rolling time averages for a long enough window and the ensemble averages show no strong deviations from each other, but we note this is likely only true for a sufficiently simple system with no strong evidence of internal modes of variability, such as the one employed here. The main intention of the present work is to demonstrate that not all data choices are equal when fed to data-driven methods, and it is not always advisable to throw all the available data at the machine and trust that the machine will figure out what to do with it (although one could argue that might reduce the inherent biases). For the case of rotating stratified turbulence, the eddy force function is potentially a useful quantity if we aim to leverage data-driven methods for model skill or for learning about the underlying physics of the problem, given the various theoretical expectations highlighted in this work. Other choices may be possible: in a periodic domain often used in rotating turbulence studies (e.g., Frezat-et-al22; Ross-et-al23), a standard Helmholtz decomposition could be used to solve for the divergent component, although the eddy force function could still be used for physical interpretation. We note that while skill in reproducing the eddy forcing is one target, we have not examined here the ability of the model to reproduce the mean state, and the present procedure might be termed an `offline' approach. Learning `online' (e.g., Frezat-et-al22) may be more appropriate for parameterization purposes to improve on the mean response, and it would be of interest to see whether filtering of the eddy flux as discussed here would confer any benefits to model learning. The present work also highlights questions relating to the information content of data. While quantifying absolute data information content is likely quite difficult, it should at least be possible to compute a relative measure, even if empirically. Preliminary investigation indicates that as the amount of data exposed to the machine learning algorithm is reduced, the accuracy of models trained upon the full eddy flux or the divergence of the eddy flux degrades much faster than that of models trained upon the eddy force function. One might ask an analogous question of the input data. The work of BoltonZanna19 suggests for example that training with data from regions with higher eddy kinetic energy leads to better model performance in terms of accuracy, suggestive of higher information content in said regions. Within the present experimental framework, instead of training using all the data and performing a random sampling of the sub-regions considered in this work, we could consider not using all the data, and performing training based on a biased sampling that favors regions with higher eddy energy content, with the hypothesis that the latter case leads to models with higher accuracy from a statistical point of view.
Further, we could investigate the case of multiple inputs, where we hypothesize that eddy energy and a mean state variable as inputs might lead to improved performance compared to, say, two mean state variables: in the current quasi-geostrophic setting, the mean state variables are functionally related to each other, possibly leading to redundant information, while the eddy energy might be dependent on the mean state, but captures eddy statistics instead and provides complementary information. This investigation is ongoing and will be reported elsewhere in due course. § DATA AVAILABILITY STATEMENT This work utilizes FEniCS <cit.>, which is available as a Python package. The source code for the model (from James Maddison), sample model data and the scripts used for generating the plots in this article from the processed data are available through <http://dx.doi.org/10.5281/zenodo.8072817>. This research was funded by both RGC General Research Fund 16304021 and the Center for Ocean Research in Hong Kong and Macau, a joint research center between the Qingdao National Laboratory for Marine Science and Technology and Hong Kong University of Science and Technology. We thank James Maddison and Liiyung Yeow for various scientific and technical comments in relation to the present investigation, and the former for providing the code for use in the present work.
http://arxiv.org/abs/2307.02127v1
20230705090656
Leveraging Denoised Abstract Meaning Representation for Grammatical Error Correction
[ "Hejing Cao", "Dongyan Zhao" ]
cs.CL
[ "cs.CL" ]
Grammatical Error Correction (GEC) is the task of correcting errorful sentences into grammatically correct, semantically consistent, and coherent sentences. Popular GEC models either use large-scale synthetic corpora or use a large number of human-designed rules. The former is costly to train, while the latter requires quite a lot of human expertise. In recent years, AMR, a semantic representation framework, has been widely used in many natural language tasks due to its completeness and flexibility. A non-negligible concern is that AMRs of grammatically incorrect sentences may not be exactly reliable. In this paper, we propose AMR-GEC, a seq-to-seq model that incorporates denoised AMR as additional knowledge. Specifically, we design a semantic aggregated GEC model and explore denoising methods to make the AMRs more reliable. Experiments on the BEA-2019 shared task and the CoNLL-2014 shared task have shown that AMR-GEC performs comparably to a set of strong baselines trained with a large amount of synthetic data. Compared with the T5 model with synthetic data, AMR-GEC can reduce the training time by 32% while the inference time is comparable. To the best of our knowledge, we are the first to incorporate AMR for grammatical error correction. § INTRODUCTION Nowadays, the high performance of grammatical error correction models mainly depends on data augmentation <cit.>. According to the type of additional information, grammatical error correction models can be divided into data-enhanced models and knowledge-enhanced models. Data-enhanced models require millions of synthetic examples, which are obtained by back-translation or by directly adding noise. Training on these synthetic datasets is very time-consuming, which is unacceptable in some application scenarios. Knowledge-enhanced models artificially design a large number of grammatical rule templates and add the templates as external knowledge to the GEC model. This external knowledge is language-dependent and requires the intervention of human grammar experts. Abstract Meaning Representation (AMR) is a type of rooted, labeled graph which contains semantic structures with fine-grained node and edge types. AMR breaks through the limitations of the traditional syntax tree structure and supports reentrancy. Figure <ref> shows the AMR graph of the sentence "I don't want to go to school on Sunday.". In AMR, :arg0 is typically the agent, :arg1 is typically the patient, and other arguments do not have standard definitions and may vary with the verb being annotated. Negative meaning is denoted as "-". Special keywords such as entity types, quantities and logical conjunctions are supported by AMR. AMR provides a simple representation of a natural language sentence, and it is suitable for GEC as extra knowledge. A non-negligible concern is that AMRs of errorful sentences may not be exactly reliable. If these AMRs with errors are directly introduced into the GEC model as additional information, they may confuse the model. We use a pre-trained AMR parser to predict the AMRs of erroneous sentences and corrected sentences separately on the BEA-19 development set. If the two AMRs are completely consistent, we assume that the AMR of the errorful sentence is reliable. After statistical analysis, we found that about half of the graphs are reliable. We designed a denoising semantic aggregated grammatical error correction model.
Specifically, we added a graph aggregation encoder based on a sequence-to-sequence model. The graph encoder aims to update the representation of the sequence encoder using the AMR semantic structure. Besides, we designed two mask strategies to reduce the model's dependence on the graph information. We designed these mask strategies by granularity: node/edge level mask and subgraph level mask. Experiments have proved that the denoising semantic aggregated grammatical error correction model significantly improves the error correction accuracy. § RELATED WORKS Data-enhanced GEC models. Lots of works have found their way to incorporating additional data into the GEC model. <cit.> uses a pre-trained masked language model in grammatical error correction by using the output of BERT as additional features in the GEC model. <cit.> and <cit.> explore methods of how to generate and use synthetic data and make use of Gigaword to construct hundreds of millions of parallel sentence pairs. Some works <cit.> give a strong baseline by finetuning BART <cit.> or T5 <cit.> on a GEC corpus. <cit.> casts GEC as a text editing task. <cit.> and <cit.> propose a copy-augmented architecture for the GEC task by copying the unchanged words and spans. Knowledge-enhanced GEC models. <cit.> use dependency trees as syntactic knowledge to guide the GEC model. <cit.> adds part-of-speech features and semantic class features to enhance the GEC model. <cit.> design thousands of custom token-level transformations to map input tokens to target corrections. <cit.> proposes a multi-stage error correction model based on the previous model. Applications of AMR. <cit.> and <cit.> incorporate AMR in neural machine translation. <cit.> makes use of AMR by abstracting the propositional content of an utterance in dialogue. <cit.> constructs a dynamic semantic graph employing AMR to cope with multi-hop QA problems. § MODEL We add a graph encoder to a Transformer-based model to aggregate denoised semantic information. The architecture of AMR-GEC is shown in Figure <ref>. §.§ Semantic Aggregated Encoder The Transformer is an attention-based encoder-decoder model, where the encoder encodes the input sentence into a context vector, and the decoder converts the context vector into an output sentence. Formally, we denote the tokens of the sentence as T_n={t_1,t_2,...,t_n}. A vanilla encoder-decoder model works as follows: h_1,h_2,...,h_n = Enc(t_1,t_2,...,t_n) y_1,y_2,...,y_m = Dec(h_1,h_2,...,h_n) We then designed a semantic graph encoder based on a graph attention network to incorporate semantic graph information. To preserve the information of the sequence encoder, we use a residual connection to combine the outputs of the two encoders. ŷ_1,ŷ_2,...,ŷ_m = GNN(h_1,h_2,...,h_n) y'_i = y_i ⊕ŷ_i,  i=1,2,...,m §.§ Denoising Function Masked Language Modeling (MLM) is a classic pre-training method. The task of MLM is to mask some tokens with a special token and train the model to recover them. This allows the model to handle both the left and right context of the masked token. MLM can be divided into five types: single word masking, phrase masking, random span masking, entity masking, and whole word masking. Referring to <cit.>, we apply the mask strategy to AMR. We use two ways to add masks: node/edge level masks and sub-graph level masks. A node/edge level mask maps the nodes/edges in the AMR graph using a noise function to generate a graph with noise. A sub-graph level mask randomly removes subgraphs and replaces them with a mask label.
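As a concrete illustration of the semantic aggregated encoder described above, a minimal sketch is given below. It uses torch and torch_geometric; the class and variable names, the number of graph layers, the use of GCNConv (the ablation later also considers GAT and DeepGCN), and the LayerNorm around the residual connection are our own illustrative assumptions rather than the authors' exact implementation, which combines the graph encoder with a T5 sequence encoder.

import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class SemanticAggregatedEncoder(nn.Module):
    def __init__(self, d_model, num_layers=2):
        super().__init__()
        self.convs = nn.ModuleList([GCNConv(d_model, d_model) for _ in range(num_layers)])
        self.norm = nn.LayerNorm(d_model)

    def forward(self, hidden, edge_index):
        # hidden: (seq_len, d_model) states h_1..h_n from the sequence encoder,
        # used as node features of the sequence-AMR graph given by edge_index.
        x = hidden
        for conv in self.convs:
            x = torch.relu(conv(x, edge_index))
        # Residual connection preserving the sequence-encoder information (y'_i = y_i + y_hat_i).
        return self.norm(hidden + x)

The combined representations would then replace the plain encoder states that are fed to the decoder.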
§.§ Sequence-AMR Graph Construction In this section, we give details of the graph encoder module. To preserve sequence information, we design a graph that fuses the sequence and the AMR. We first use the alignment tool JAMR to get the mapping from AMR nodes to sequence tokens. We then connect adjacent tokens through the special labels forward-label and backward-label, respectively, and map the edges of the AMR onto the sequence-AMR graph. § EXPERIMENTS §.§ Dataset CoNLL-2014. The CoNLL-2014 shared task test set contains 1,312 English sentences with error annotations by 2 expert annotators. Models are evaluated with the M2 scorer <cit.>, which computes a span-based F_0.5-score. BEA-2019. The BEA-2019 test set consists of 4477 sentences, and the outputs are scored via the ERRANT toolkit <cit.>. The released data are collected from the Write & Improve and LOCNESS datasets. §.§ Baseline Model Following <cit.>, we use T5 as the baseline model for GEC. §.§ AMR Parsing and Alignment We adopt SPRING <cit.> as our AMR parsing model. SPRING performs nearly state-of-the-art AMR parsing by linearizing the AMR into a sequence and converting the text-to-AMR task into a seq-to-seq task. It obtained 84.5 Smatch F1 points on the AMR 2.0 dataset. We use JAMR <cit.> to align the AMRs to the sentences. JAMR is an alignment-based AMR parsing model that finds a maximum spanning, connected subgraph as an optimization problem. We use the alignment for graph information aggregation. §.§ Others Our models were trained on a single GPU (GeForce GTX 1080), and our implementation was based on publicly available code[<https://github.com/huggingface/transformers>]. We set the batch_size to 6 and the learning_rate to 2e-5. We use pytorch_geometric[<https://github.com/pyg-team/pytorch_geometric>] to implement the semantic aggregated encoder. § RESULTS AND ANALYSIS §.§ Results Table <ref> shows the results on the BEA-test and CoNLL-2014 datasets. 1) Compared with the models without synthetic data, the single AMR-GEC model is 2.8 points and 1.8 points higher on BEA-19 and CoNLL-14, respectively. Ensemble models give similar results. 2) Compared with models using synthetic data, AMR-GEC gives comparable or even higher F-scores, except for GECToR <cit.>, which uses both synthetic data and human knowledge. For example, our single model achieves 68.4 on BEA-19, higher than the models by <cit.>, <cit.>, and <cit.>. This shows that semantic graphs, as additional knowledge for GEC, have a comparative advantage over synthetic data. Our ensemble model does not show significant improvements over the single model, probably because more optimal ensemble strategies are needed: averaging generation probabilities <cit.>, ensembling edits <cit.>, etc. §.§ Advantages of AMR We compared the most common error types in BEA-test (except for OTHER) between T5-GEC and AMR-GEC. As shown in Table <ref>, the F-scores of PUNCT and PREP in AMR-GEC are 4-6 points higher than those of T5-GEC. AMR drops prepositions, tense, and punctuation to obtain the simple base meaning, and exactly these error types are among the most common errors in GEC scenarios. With such errors ignored in the AMR, sentences generated from the AMR are more likely to be correct. Besides, graphs are good at handling long-distance dependencies in sentences. The pain point of the sequence model is that it is difficult for it to attend to long-distance dependent information. In AMR, associated concept nodes are explicitly connected with edges, making it easier for the model to focus on long-distance information.
§ ABLATION STUDY §.§ Graph Neural Networks Ablation Results Graph neural networks have been proven effective in dealing with unstructured data problems. However, few studies have analyzed the effect of encoding AMR with different GNNs on natural language generation tasks. To study the differences among graph neural networks in encoding AMR, we carry out a set of experiments. We select GCN, GAT, and DeepGCN as the graph encoders and conduct experiments on the BEA-2019 dataset while keeping the number of model parameters the same. We do not use the denoising method in this ablation study. Table <ref> shows the results on BEA-test with different graph encoders. We can draw these conclusions: 1) Even if the AMRs of the errorful sentences are not reliable, they still benefit GEC. Compared with T5-GEC, AMR-GCN and AMR-GAT are about 0.2 and 0.4 points higher, respectively. This shows that the model makes use of the semantic information and connection structure of the reliable AMRs. 2) AMR-GCN gives the best performance among the three models. When picking a graph encoder, the GCN model is sufficient to encode the semantic structure information of AMR. It is worth noting that GAT and DeepGCN have high recall values and low precision. In the grammatical error correction task, precision measures the quality of the proposed corrections. Generally speaking, precision is more important than recall. In the grammatical error correction task, most of the errors are local errors, and the semantic information required for grammatical error correction in AMR can be captured without a deeper graph convolution model. §.§ Denoising method ablation study Table <ref> shows the results on BEA-test with the node/edge and subgraph denoising methods. The node/edge level denoising strategy and the subgraph level denoising strategy improve the results by 1.57 and 1.03 points, respectively. The node/edge level mask strategy performs better because the subgraph mask may remove too much information. § CONCLUSION In this paper, we propose a denoising semantic aggregated grammatical error correction model, AMR-GEC, leveraging AMR as external knowledge for GEC. We believe it gives a strong baseline for incorporating AMR in GEC. § LIMITATIONS In this paper, we leverage AMR as external knowledge for the GEC model and achieve a high F-score with a single model. However, we do not use R2L reranking, model ensembling, or other methods to combine single models and compare them with state-of-the-art ensemble models. Our aim is to provide a strong baseline for incorporating AMR in GEC, so it is easy to generalize AMR-GEC to ensemble models. § ETHICS STATEMENT The training corpora, including Lang-8 and NUCLE, and the BEA-2019 and CoNLL-2014 test data used for evaluating our framework are publicly available and do not pose privacy issues. The algorithm that we propose does not introduce ethical or social bias. § ACKNOWLEDGEMENTS We would like to thank the anonymous reviewers for their constructive comments. We would like to express appreciation to Yansong Feng for his insightful suggestions on the algorithm framework. This work was supported by the National Key Research and Development Program of China (No. 2020AAA0106600).
http://arxiv.org/abs/2307.01055v1
20230703143516
Effects of spin-orbit coupling on gravitational waveforms from a triaxial non-aligned neutron star in a binary system
[ "Wen-Fan Feng", "Tan Liu", "Jie-Wen Chen", "Yan Wang", "Soumya D. Mohanty" ]
gr-qc
[ "gr-qc", "astro-ph.HE" ]
MOE Key Laboratory of Fundamental Physical Quantities Measurements, Hubei Key Laboratory of Gravitation and Quantum Physics, PGMF, Department of Astronomy and School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China lewton@mail.ustc.edu.cn MOE Key Laboratory of Fundamental Physical Quantities Measurements, Hubei Key Laboratory of Gravitation and Quantum Physics, PGMF, Department of Astronomy and School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China School of Physics, Hubei University, Wuhan 430062, China MOE Key Laboratory of Fundamental Physical Quantities Measurements, Hubei Key Laboratory of Gravitation and Quantum Physics, PGMF, Department of Astronomy and School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China National Time Service Center, Chinese Academy of Sciences, Xi’an 710600, China Key Laboratory of Time and Frequency Primary Standards, Chinese Academy of Sciences, Xi’an 710600, China ywang12@hust.edu.cn MOE Key Laboratory of Fundamental Physical Quantities Measurements, Hubei Key Laboratory of Gravitation and Quantum Physics, PGMF, Department of Astronomy and School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China Department of Physics and Astronomy, University of Texas Rio Grande Valley, Brownsville, Texas 78520, USA Department of Physics, IIT Hyderabad, Kandai, Telangana-502284, India Spinning neutron stars (NSs) will emit continuous gravitational waves (GWs) that carry a wealth of information about the compact object. If such a signal is detected, it will provide us with new insight into the physical properties of the matter under extreme conditions. According to binary population synthesis simulations, future space-based GW detectors, such as LISA and TianQin, can potentially detect some double NSs in tight binaries with orbital periods shorter than 10 minutes. Targeted searches for continuous GWs from the spinning NS in such a binary system identified by LISA/TianQin will be possible with the proposed next-generation ground-based GW observatories, such as Cosmic Explorer and Einstein Telescope. Searching for continuous GWs from such a tight binary system requires highly accurate waveform templates that account for the interaction of the NS with its companion. In this spirit, we derive analytic approximate GWs emitted by a triaxial non-aligned NS in a binary system in which the effects of spin-orbit coupling have been incorporated. The difference with the widely used waveform for the isolated NS is estimated and the parameter estimation accuracy of the signals using Cosmic Explorer is calculated. For a typical tight double NS system with a 6 min orbital period, the angular frequency correction of the spinning NS in this binary due to spin precession is ∼ 10^-6  Hz, which is in the same order of magnitude as the angular frequency of orbital precession. The fitting factor between the waveforms with and without spin precession will drop to less than 0.97 after a few days (∼ 10^5  s). We find that spin-orbit coupling has the potential to improve the accuracy of parameter estimation, especially for the binary inclination angle and spin precession cone opening angle, by up to 3 orders of magnitude. Effects of spin-orbit coupling on gravitational waveforms from a triaxial non-aligned neutron star in a binary system Soumya D. 
Mohanty August 1, 2023 § INTRODUCTION Rapidly spinning neutron stars (NSs) are promising sources of a long-lasting form of gravitational waves (GWs), namely continuous waves (CWs) <cit.>. The detection of these potential CWs will allow us to solve the mysteries of NS physics, such as the NS's equation of state, deformability, and magnetic field <cit.>. There are two types of simplified waveforms that are commonly used in current searches for CWs emitted by NSs with advanced LIGO <cit.> and advanced Virgo <cit.>. One is the mass quadrupole mode with a frequency at twice the rotation frequency of the pulsar, which comes from a triaxial rigid body rotating about one of its principal axes with assumed principal moments of inertia I_1<I_2<I_3 (hereafter referred to as the triaxial aligned waveforms, e.g., <cit.>); the other is the mode with frequencies at both once and twice the rotation frequency, which comes from an axisymmetric freely precessing rigid body with assumed I_1=I_2 ≠ I_3 (hereafter referred to as the biaxial waveforms, e.g., <cit.>). Similar two-frequency mode searches are also performed for a superfluid and fluid star model <cit.>. So far, no credible detection has been reported in these searches <cit.>. The more general waveforms that come from a freely precessing triaxial rigid body (hereafter referred to as the triaxial non-aligned waveforms) were first calculated by Zimmermann <cit.>. The dominant waveform components are obtained by expanding the quadrupole moment formula in terms of small parameters, such as the wobble angle, oblateness, and non-axisymmetry parameters. These waveforms have been extended to include higher than first-order expansion terms of the wobble angle and non-axisymmetry <cit.> in order to extract more physical information. In addition to the waveform modeling of isolated NSs discussed above, there are also considerations for an NS located in a binary system, since electromagnetic observations show that nearly half of the known pulsars within the most sensitive band of the ground-based GW detectors belong to binary systems <cit.>. Some search schemes have been proposed for this type of CWs <cit.>. However, the waveform model used in these searches is obtained by simply incorporating the Doppler frequency modulation into the phase of the triaxial aligned waveforms emitted by the isolated NS. For future space-borne GW detectors, the detectability and parameter estimation accuracy of double NS systems that will merge within the next 10 Myr have been studied for LISA <cit.> and TianQin <cit.>. Based on the merger rate density (920  Gpc^-3 yr^-1) inferred from GWTC-1 <cit.>, about 300 double NS systems are expected to be detected in the mHz band during the 4-year observation period, including binaries with orbital periods shorter than 10 minutes. Proposed next-generation ground-based GW observatories, such as Cosmic Explorer <cit.> and Einstein Telescope <cit.>, are expected to operate concurrently with LISA and TianQin in the 2030s. Searching for CWs from the spinning NS in such a tight system identified by LISA and/or TianQin requires consideration of the interaction of the rapidly spinning NS with its companion. In this paper, we incorporate the effects of spin-orbit coupling into the GWs emitted by the spinning NS in a circular-orbit binary and extend the triaxial aligned NS to a general triaxial non-aligned NS.
Other effects, such as magnetic dipole field <cit.>, tidal interaction <cit.>, and radiation reaction <cit.> are neglected in the current work. Spin-orbit coupling causes spin precession and orbital precession around the total angular momentum <cit.>. Similar to the treatment in previous works, such as that of LIGO <cit.>, the GWs emitted by the spinning NS in a binary are obtained by simply incorporating the Doppler frequency modulation (including the effects of orbital precession) into the phase of the triaxial non-aligned waveforms of the NS with spin precession. In contrast to the isolated case, the spin angular momentum of the NS is no longer conserved in the binary. Instead, it will be precessed due to the spin-orbit coupling. We analytically solve the spin precession equation for the NS using the perturbation method to obtain the spin angular frequency evolution and calculate the waveforms based on the quadrupole moment formula. Next, the waveforms are expanded into some simple components in the small parameter case for the subsequent analysis of CW detection. Finally, using these easy-to-use waveform components, we investigate the impact of spin-orbit coupling on the parameter estimation accuracy of the spinning NS. Calculations along these lines yield the following results: (i) The waveforms of the NS undergoing spin precession will deviate from the isolated ones after a few days (∼ 10^5  s) when the fitting factor between the two waveforms drops to less than 0.97. (ii) Spin-orbit coupling has the potential to improve the parameter estimation accuracy, specifically for the binary inclination cosι and spin precession cone opening angle θ_S, by up to 3 orders of magnitude. The rest of this paper is organized as follows. In Sec. <ref>, we briefly review the mathematical formalism for GWs from an isolated spinning NS, which will be used for subsequent calculations for NS in a binary system. Analytical approximations for the GWs from a spinning NS in a binary system, taking into account spin-orbit coupling effects, are given in Sec. <ref>. The comparison of results derived using waveforms with and without spin precession is given in Sec. <ref>. The parameter estimation accuracy of the waveforms with and without spin-orbit coupling using Cosmic Explorer <cit.> are given in Sec. <ref>. Our conclusions are discussed in Sec. <ref>. Some details of our calculation have been relegated to the appendix in order to keep the main ideas of the paper as clear as possible. § GRAVITATIONAL WAVEFORMS FROM ISOLATED NS Since the waveforms emitted by spinning NS undergoing spin-orbit coupling are based on the waveforms emitted by the isolated NS, we will first discuss the case for the isolated NS. Following the conventions of Landau and Lifshitz <cit.> and Zimmermann <cit.>, in Fig. <ref>, the inertial coordinate system is denoted as (X, Y, Z) with basis vectors (e_x,e_y,e_z) and e_z along the body's angular momentum, and the body coordinate system (x_1,x_2,x_3) with basis vectors (e_1,e_2,e_3) parallel to the eigenvectors of the body's moment of inertia tensor and satisfying I_3>I_2 ≥ I_1. The origins of the two systems are placed at the center of mass of the NS. The Euler angles (θ,ϕ,ψ) describe the orientation of the body coordinate system with respect to the inertial coordinate system. We use the Latin subscripts (e.g., x,y,z) for components evaluated in the inertial coordinate system, and the Greek ones (e.g., μ,ν) in the body coordinate system. 
The metric perturbation under the transverse-traceless gauge can be written in terms of two GW polarizations, h_j k^TT=h_+(ê_+)_j k+h_×(ê_×)_j k, with the polarization tensors defined as ê_+≡v̂⊗v̂-ŵ⊗ŵ , ê_×≡v̂⊗ŵ+ŵ⊗v̂ , where v̂ and ŵ are the transverse basis vectors perpendicular to the wave's propagation direction, and ⊗ denotes the tensor product. Without loss of generality, we assume that the observer is located in the Y-Z plane with colatitude i from the Z axis and distance D=|D|. In this configuration, v̂≡ê_ycos i-ê_zsin i , ŵ≡ -ê_x , and the two GW polarizations can be written as <cit.> h_+ = -G/c^4 D [(R_yμcos i - R_zμsin i) × (R_yνcos i - R_zνsin i)-R_xμ R_xν] A_μν , h_× = 2 G/c^4 D (R_yμcos i - R_zμsin i) R_xν A_μν , where the (1, 1) and (1, 2) components of A_μν are A_11 = 2(Δ_2Ω_2^2-Δ_3Ω_3^2) , A_12 = (Δ_1-Δ_2) Ω_1Ω_2+Δ_3Ω̇_3 , and the rest of the components can be obtained by symmetry and cyclic index permutation with Δ_1≡ I_2-I_3, Δ_2≡ I_3-I_1, Δ_3≡ I_1-I_2. Here, (Ω_1,Ω_2,Ω_3) denote the angular frequency of NS in the body coordinate system. The rotation matrix that transforms from the body coordinate system to the inertial coordinate system (e.g., R_x2 denotes the entry in row x=1 and column μ=2) is R= ( [ cosψcosϕ-cosθsinψsinϕ -cosθcosψsinϕ-sinψcosϕ sinθsinϕ; cosθsinψcosϕ +cosψsinϕ cosθcosψcosϕ -sinψsinϕ -sinθcosϕ; sinθsinψ sinθcosψ cosθ; ]) . According to Euler’s equations of free rotation of a rigid body and the initial conditions (Ω_1(0) = a, Ω_2(0) =0, Ω_3(0) = b), the angular frequency in the body coordinate system are <cit.> Ω_1 = a cn(τ, m) , Ω_2 = a [I_1(I_3-I_1)/I_2(I_3-I_2)]^1 / 2sn(τ, m) , Ω_3 = b dn(τ, m) , where cn, sn and dn are Jacobian elliptic functions <cit.> with the parameters τ = b t[(I_3-I_2)(I_3-I_1)/I_1 I_2]^1 / 2 , m = (I_2-I_1) I_1 a^2/(I_3-I_2) I_3 b^2 . The Euler angles can be expressed in terms of Jacobian elliptic functions and the fourth theta functions <cit.> (ϑ_4 and its derivative ϑ_4^'): cosθ = I_3 b/Sdn (τ, m) , tanψ = [I_1(I_3-I_2)/I_2(I_3-I_1)]^1 / 2cn(τ, m)/sn(τ, m) , ϕ = ϕ_1+ϕ_2 , with S being the magnitude of the spin angular momentum of the NS and ϕ_1,2 given by <cit.> exp[2 iϕ_1(t)]=ϑ_4(2 π t/T+iπα, q )/ϑ_4(2 π t/T-iπα , q ) , ϕ_2=2 π t/T^'=[S/I_1+2 πi/Tϑ_4^'(iπα, q)/ϑ_4(iπα, q)]t . Here, T is the period of the angular frequency in the body coordinate system, T =4 K(m)/b[I_1 I_2/(I_3-I_2)(I_3-I_1)]^1 / 2 . α satisfies sn[2 iα K(m)] =i I_3 b/(I_1 a). i is the imaginary unit. q =exp [-π K(1-m)/K(m)], where K(m) is the complete elliptic integral of the first kind <cit.>. Since cosϕ_2 has a period T^' which is generally not commensurate with T, the motion of the NS is usually nonperiodic. When the NS becomes axisymmetric, T^'→ 2π I_1/S <cit.>. § GRAVITATIONAL WAVEFORMS FROM SPINNING NS IN A BINARY Following the treatment in previous works, such as <cit.>, the GWs emitted by the spinning NS in a binary can be obtained by incorporating the Doppler frequency modulation (modulated by orbital precession) into the phase of the triaxial non-aligned waveforms of the NS with spin precession. First, we calculate the GWs emitted by a spinning NS undergoing spin precession. Consider a binary system consisting of a spinning NS with spin angular momentum S and a nonspinning NS (or a slowly spinning NS of which the spin effects can be ignored). This is consistent with the standard evolution scenario of the double NS formed in an isolated system, in which one of the NS is a rapidly spinning millisecond pulsar and the other is a normal pulsar <cit.>. 
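Before turning to the binary case, it is useful to note that the free-precession angular velocity Ω_i of the isolated NS, which serves as the zeroth-order solution in the calculation below, can be evaluated directly with standard library implementations of the Jacobian elliptic functions. The following Python sketch is purely illustrative: the moments of inertia and the initial angular velocities a and b are placeholder values chosen only to satisfy I_3 > I_2 ≥ I_1, not parameters taken from this work.

import numpy as np
from scipy.special import ellipj

# Illustrative (assumed) principal moments of inertia and initial angular
# velocities; placeholders only, not values adopted in this paper.
I1, I2, I3 = 0.990e38, 0.991e38, 1.000e38   # kg m^2, with I3 > I2 >= I1
a, b = 10.0, 628.3                          # Omega_1(0), Omega_3(0) in rad/s

# Argument tau(t) and parameter m of the Jacobian elliptic functions
m = (I2 - I1) * I1 * a**2 / ((I3 - I2) * I3 * b**2)
tau_rate = b * np.sqrt((I3 - I2) * (I3 - I1) / (I1 * I2))

t = np.linspace(0.0, 10.0, 1000)            # seconds
sn, cn, dn, _ = ellipj(tau_rate * t, m)     # scipy uses the parameter m

Omega1 = a * cn
Omega2 = a * np.sqrt(I1 * (I3 - I1) / (I2 * (I3 - I2))) * sn
Omega3 = b * dn

The complete elliptic integral K(m) needed for the period T is likewise available as scipy.special.ellipk, so the incommensurability of T and T' can be checked numerically if desired.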
For a binary system in which only one of the bodies has spin, the precession equations <cit.> for the spin of the body and the orbit of the binary show that to a reasonable approximation the total angular momentum J maintains its direction, S keeps its magnitude constant and precesses around J with d S/d t= Ω_ pre×S , where Ω_ pre = G/2 c^2 r^3(1+3 M/m_1) J is the angular frequency of spin precession and orbital precession, M=m_1+m_2 is the total mass of the binary, and r is the orbital separation. Although the decreasing r and the magnitude of J due to the radiation reaction cause the magnitude Ω_ pre of Ω_ pre to vary with time, for a typical double NS system with a merger time ∼ O(10^4 yr) and a half-year observation time (see Sec. <ref>), the relative variation of Ω_ pre is ≲ 10^-5, so we can assume that Ω_ pre remains approximately constant in the case considered. As shown in Fig. <ref>, (X_J, Y_J, Z_J) is the coordinate system with the origin placed at the center of mass (O_1) of the spinning NS and Z_J axis parallel to J (hereafter referred to as the J-aligned coordinate system), in which the distant observer is assumed to be in the Y_J-Z_J plane with the inclination ι and the position vector D. The opening angle of S precession cone is θ_S. The coordinate system (X_S, Y_S, Z_S) constructed with Z_S axis aligned with S is referred to as the S-aligned coordinate system. Without loss of generality, we assume that at initial time, the X_S axis coincides with the X_J axis, and S is in the Y_J-Z_J plane. The evolution of S over time is represented by S(t) with the precession angle α = Ω_ pre t measured in the X_J-Y_J plane. To simplify the calculation of waveforms, we use a similar convention for two polarization tensors (cf. Eq. (<ref>) and Eq. (<ref>)) as in the calculation of the isolated NS in Sec. <ref>, so that the waveforms in Eqs. (<ref>) also apply to the triaxial non-aligned NS in a binary, but with (i, R, A) in Eqs. (<ref>) replaced by (ι, ℛ, 𝒜), which are the quantities calculated in a coordinate system at rest with respect to the center of mass of the spinning NS, i.e., the J-aligned coordinate system shown in Fig <ref>. i →ι , R →ℛ , A →𝒜 . Explicit waveforms are usually expressed in a series expansion of some small parameters <cit.>. To facilitate the following calculation, we define the spinning NS's free precession angular frequency and rotation angular frequency Ω_ p≡2π/T , Ω_ r≡2π/T^'-2π/T , and three parameters that characterize NS's properties ϵ≡I_3-I_1/I_3 , κ≡1/16I_3/I_1I_2-I_1/I_3-I_2 , γ≡aI_1/bI_3 , where ϵ is called the oblateness (or poloidal ellipticity <cit.>) of the NS, κ describes the (I_2-I_1) with respect to the axisymmetric non-sphericity (I_3-I_2), while γ is called the wobble angle. Their characteristic values are discussed in <cit.>. For the small quantities above, the expansions of the sines and cosines of the Euler angles (cf. Eqs. (<ref>)) in S-aligned coordinate system up to terms of O( κ^n1γ^n2) (n1+n2 ≤ 2) are <cit.> cosθ = 1-γ ^2/2 , sinθ = γ +8 γκsin ^2(t Ω_ p) , cosϕ = cos [t(Ω_ r+Ω_ p)] , sinϕ = sin [t(Ω_ r+Ω_ p)] , cosψ = sin(t Ω_ p) + 8 κsin(t Ω_ p) cos ^2(t Ω_ p) +8 κ ^2 (3 sin(3 t Ω_ p)-13 sin(t Ω_ p)) cos ^2(t Ω_ p) , sinψ = cos(t Ω_ p) -8 κsin^2(t Ω_ p) cos(t Ω_ p) + 96 κ ^2 sin ^4(t Ω_ p) cos(t Ω_ p) . We first calculate ℛ which represents the rotation matrix from the body coordinate system of the spinning NS to J-aligned coordinate system. 
It can be obtained by the following rotation transformations: first, from the body coordinate system to the S-aligned coordinate system by R (cf. Eq. (<ref>)), and then from the S-aligned coordinate system to the J-aligned coordinate system by 𝒯_S→ J. Thus, ℛ = 𝒯_S→ J· R , where 𝒯_S→ J = ( [ cos (Ω_ pre t) -sin (Ω_ pre t) 0; sin (Ω_ pre t) cos (Ω_ pre t) 0; 0 0 1; ]) ( [ 1 0 0; 0 cosθ_S sinθ_S; 0 -sinθ_S cosθ_S; ]) . After inserting Eqs. (<ref>) into Eq. (<ref>), ℛ can be expanded as Eqs. (<ref>) given in Appendix <ref>. Next, we use ℛ to calculate 𝒜. Since in the body coordinate system S=S_μe_μ=I_1 ω_1 e_1 + I_2 ω_2 e_2 + I_3 ω_3 e_3 and Ω_ pre = Ω_ pree_z = Ω_ preℛ_zμe_μ, then Eq. (<ref>) can be expressed as follows dS_μ/dte_μ + S_μω_νe_ν×e_μ = Ω_ preℛ_zμ S_νe_μ×e_ν , with components dω_1/d t = Δ_1/I_1ω_2ω_3 + K_1/I_1 , dω_2/d t = Δ_2/I_2ω_3ω_1 + K_2/I_2 , dω_3/d t = Δ_3/I_3ω_1ω_2 + K_3/I_3 . Here, spin-orbit coupling terms are K_1/I_1 ≡Ω_ pre/I_1( ℛ_z2I_3ω_3-ℛ_z3I_2ω_2 ) , K_2/I_2 ≡Ω_ pre/I_2( ℛ_z3I_1ω_1-ℛ_z1I_3ω_3 ) , K_3/I_3 ≡Ω_ pre/I_3( ℛ_z1I_2ω_2-ℛ_z2I_1ω_1 ) . We expect the difference between the angular frequency of the spinning NS in an isolated case and that in a binary system to be small (see Fig. <ref>), since the spin-orbit coupling is a 1.5-order post-Newtonian effect. It is accurate enough to replace ω_i by Ω_i in Eqs. (<ref>), if only for the leading order of K_i/I_i (i=1,2,3). Although the general solution of the above precession Eqs. (<ref>) can be obtained numerically, the analytic solution is more favorable for GW detection and parameter estimation because it can be incorporated directly into search algorithms and is more manageable and efficient in data analysis. In this sense, we use the perturbation method to solve Eqs. (<ref>) analytically by assuming ω_i = Ω_i + δΩ_i (i=1,2,3) . Inserting the above expression into Eqs. (<ref>) yields the linearized evolution equations ([ d δΩ_1/d t; [2mm] d δΩ_2/d t; [2mm] d δΩ_3/d t ])=([ 0 Δ_1 Ω_3-Ω_ preR_z3I_2/I_1 Δ_1 Ω_2+Ω_ preR_z2I_3/I_1; [2mm] Δ_2 Ω_3+Ω_ preR_z3I_1/I_2 0 Δ_2 Ω_1-Ω_ preR_z1I_3/I_2; [2mm] Δ_3 Ω_2-Ω_ preR_z2I_1/I_3 Δ_3 Ω_1+Ω_ preR_z1I_2/I_3 0 ])([ δΩ_1; [1mm] δΩ_2; [1mm] δΩ_3 ])+([ K_1/I_1; [2mm] K_2/I_2; [2mm] K_3/I_3 ]) . According to Appendix <ref>, δΩ_3 ≪δΩ_1,2, so the contribution of δΩ_3 can be ignored when solving for δΩ_1,2. With the coefficient matrix in Eq. (<ref>) to O(ϵ, γ, κ), we find that d δΩ_1/d t = b ϵ (-1+16κ) δΩ_2 + b Ω_ presinθ_ Ssin(tΩ_ r) , d δΩ_2/d t = b ϵδΩ_1 + b Ω_ presinθ_ Scos(tΩ_ r) . Setting the initial conditions (δΩ_1(0),δΩ_2(0),δΩ_3(0)) = (0,0,0), the integration of Eqs. (<ref>) yields δΩ_1 ≃Ω_ presinθ_ S[cos (t bϵ√(1-16κ))-cos (t Ω_ r)] , δΩ_2 ≃Ω_ presinθ_ S[sin (t bϵ√(1-16κ))+sin (t Ω_ r)] . Using Eqs. (<ref>), the integration of the third row in Eq. (<ref>) leads to δΩ_3 ≃γΩ_ presinθ_S (cos (t(Ω _ p+Ω_ r))-1) . The error of this analytic approximation is shown and discussed in Appendix <ref>. Now, 𝒜 can be calculated by replacing Ω_i with ω_i in Eqs. (<ref>), keeping to O( κ^n1γ^n2Ω_ pre^n3) (n1+n2+n3 ≤ 2) and ignoring O(Ω_ pre^2). The final expression of 𝒜 is given in Eqs. (<ref>) of Appendix <ref>. 
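The accuracy of this approximate solution is quantified in the appendix; as a quick independent check, one can also integrate the linearized precession equations above numerically over a short window and compare with the closed forms. The Python sketch below does this for the fiducial parameter values adopted later in the text; it is a consistency check only, not part of the derivation.

import numpy as np
from scipy.integrate import solve_ivp

# Fiducial values used later in the text (b ~ spin rate of a 10 ms pulsar).
b, eps, kappa = 2.0 * np.pi / 0.01, 3.6e-6, 1.75e-4
Omega_r = b                          # adequate to leading order in eps
Omega_pre = 7.5e-7                   # rad/s
thS = 5.0 * np.pi / 12.0             # opening angle of the S precession cone
C = b * Omega_pre * np.sin(thS)

def rhs(t, d):
    d1, d2 = d
    return [b * eps * (-1.0 + 16.0 * kappa) * d2 + C * np.sin(Omega_r * t),
            b * eps * d1 + C * np.cos(Omega_r * t)]

t_eval = np.linspace(0.0, 5.0, 2000)             # a few-second window
sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0], t_eval=t_eval,
                rtol=1e-9, atol=1e-14, max_step=1e-3)

Lam = b * eps * np.sqrt(1.0 - 16.0 * kappa)
amp = Omega_pre * np.sin(thS)
d1_ana = amp * (np.cos(Lam * t_eval) - np.cos(Omega_r * t_eval))
d2_ana = amp * (np.sin(Lam * t_eval) + np.sin(Omega_r * t_eval))

# Residuals should sit well below the ~1e-6 rad/s amplitude of delta Omega.
print(np.max(np.abs(sol.y[0] - d1_ana)), np.max(np.abs(sol.y[1] - d2_ana)))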
With ℛ and 𝒜, the waveforms emitted by the spinning NS undergoing spin precession can be expressed in terms of the series expansion as follows h_+ = h_+^(1) + h_+^(2) + h_+^(3) + ⋯ h_× = h_×^(1) + h_×^(2) + h_×^(3) + ⋯ where h_+^(1) = Gb^2I_3ϵγ/4c^4D(cos(t(Ω _p + Ω _r))(sin (2θ _S)( 6sin ^2ι - (3 + cos (2ι ))cos (2tΩ _pre))+ 4cos (2θ _S)cos (tΩ _pre)sin (2ι )) + 2( - 2cosθ _Ssin (2ι )sin (tΩ _pre)+ (3 + cos (2ι ))sinθ _Ssin (2tΩ _pre))sin (t(Ω _p + Ω _r))) , h_ × ^(1) = Gb^2I_3ϵγ/c^4D(cos (t(Ω _p + Ω _r))(2sinιcos (2θ _S)sin (tΩ _pre) - cosιsin(2θ _S)sin (2tΩ _pre)) + 2(sinιcosθ _Scos (tΩ _pre)- cosιcos (2tΩ _pre)sinθ _S)sin (t(Ω _p + Ω _r))) , h_ + ^(2) = - 16Gb^2I_3ϵκ/c^4D((3 + cos (2ι ))(cos^4 ( θ _S/2)cos (2t(Ω _pre + Ω _r))+ cos (2t(Ω _pre - Ω _r))sin ^4( θ _S/2)) + cos (2tΩ _r)(3sin ^2θ _Ssin ^2ι + cos (tΩ _pre)sin(2θ _S)sin (2ι ))- 2sinθ _Ssin (2ι )sin (tΩ _pre)sin (2tΩ _r)) , h_ × ^(2) = 16Gb^2I_3ϵκ/c^4D(cos (2tΩ _r)( - 2sinιsin (2θ _S)sin (tΩ _pre)- cosι (3 + cos (2θ _S))sin (2tΩ _pre)) - 4(cosιcosθ _Scos (2tΩ _pre)+ sinιcos (tΩ _pre)sinθ _S)sin (2tΩ _r)) , h_ + ^(3) = Gb^2I_3ϵγ^2/c^4D((3 + cos (2ι ))(cos^4 ( θ _S/2)cos (2t(Ω _pre + Ω _r+Ω _p))+ cos (2t(Ω _pre - Ω _r-Ω _p))sin ^4( θ _S/2)) + cos (2t(Ω _r+Ω _p))(3sin ^2θ _Ssin ^2ι + cos (tΩ _pre)sin(2θ _S)sin (2ι ))- 2sinθ _Ssin (2ι )sin (tΩ _pre)sin (2t(Ω _r+Ω _p))) , h_× ^(3) = Gb^2I_3ϵγ^2/c^4D(cos (2t(Ω _p+Ω _r))(2sin (2θ _S)sinιsin (tΩ _pre)+ (3 + cos (2θ _S)) cosιsin (2tΩ _pre)) + 4(cosθ _Scos (2tΩ _pre)cosι +cos (tΩ _pre) sinθ_Ssinι) sin (2t(Ω _p+Ω _r))) . Here, h_+,×^(1) are of order O(γ), h_+,×^(2) are of order O(κ), h_+,×^(3) are of order O(γ^2). Note that only the waveform components up to terms of O(γ), O(κ) and O(γ^2) are shown here, according to the discussion of the characteristic value of γ and κ in <cit.>. The O(Ω_ pre) components (see Appendix <ref>), and higher-order O(γκ), and O(κ^2) components can be ignored for the parameter values we adopt in the following analysis. Then, the waveforms of the spinning NS in a binary system with spin-orbit coupling effects considered are completed by incorporating the Doppler frequency modulation of this NS around the binary barycenter (BB) into the phases of waveforms Eqs. (<ref>), which is done by the second term on the right-hand side of Eq. (<ref>) in Sec. <ref>. We leave this for further discussion in Sec. <ref>, where this Doppler modulation and the Doppler modulation due to the motion of the GW detector around the solar system barycenter (SSB) are combined, as in <cit.>. § COMPARISON WITH WAVEFORMS FROM ISOLATED NS As a limiting case, when the spinning NS is isolated and spin S is along the z-axis of the coordinate system, i.e., Ω_ pre=θ_S=0, the waveforms in Eqs. (<ref>) will reduce to those given in <cit.>: h_+^(1) = G b^2 I_3 ϵγsin (2 ι ) cos (t (Ω_ p+Ω_ r))/c^4 D , h_×^(1) = 2 G b^2 I_3 ϵγsinιsin (t (Ω_ p+Ω_ r))/c^4 D , h_+^(2) = -32 G b^2 I_3 ϵκ(1+cos^2ι) cos (2 tΩ_ r)/c^4 D , h_×^(2) = -64 G b^2 I_3 ϵκcosιsin (2 tΩ_ r)/c^4 D , h_+^(3) = 2 G b^2 I_3 ϵγ^2 (1+cos^2ι) cos (2t (Ω_ p+Ω_ r))/c^4 D , h_×^(3) = 4 G b^2 I_3 ϵγ^2 cosιsin (2t (Ω_ p+Ω_ r))/c^4 D . For small equatorial ellipticity ε≡ |I_1-I_2|/I_3 ≃ 16 ϵκ and γ = 0, h_+,×^(1)=h_+,×^(3)=0 and h_+,×^(2) will reduce to the triaxial aligned waveforms (e.g., Eq. (4.223) of <cit.>). If κ=0 and γ 0, h_+,×^(2)=0, h_+,×^(1) and h_+,×^(3) will reduce to the biaxial waveforms (cf. Eq. (1) of <cit.>). 
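To gauge the relative importance of the three families of components, one can evaluate their amplitude prefactors while ignoring the O(1) angular factors in the brackets. The short Python sketch below uses the fiducial NS parameters introduced just below; it indicates that the O(γ) component dominates while the O(κ) and O(γ^2) components are comparable, consistent with κ ∼ O(γ^2).

import numpy as np

G, c, kpc = 6.674e-11, 2.998e8, 3.086e19

# Fiducial values introduced in the next paragraphs of the text.
I3, eps, kappa, gamma = 2.0e38, 3.6e-6, 1.75e-4, 5.0e-2
b = 2.0 * np.pi / 0.01          # spin rate of a 10 ms pulsar [rad/s]
D = 1.0 * kpc

common = G * b**2 * I3 * eps / (c**4 * D)
amp1 = common * gamma           # scale of h^(1), order gamma
amp2 = 16.0 * common * kappa    # scale of h^(2), order kappa
amp3 = common * gamma**2        # scale of h^(3), order gamma^2
print(amp1, amp2, amp3)         # roughly 4e-27, 2e-28, 2e-28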
The values of the parameters used in the following analysis are intended to make the signal amplitude as large as possible under current observational and theoretical constraints. The NSs, measured by pulsar timing <cit.> or GW observation <cit.>, can have masses up to about two solar masses. The widely accepted range of the moment of inertia for NSs resides in 1-3 × 10^38  kg m^2 (see <cit.> and references therein). The parameters that characterize the properties of the spinning NS are in accordance with the discussion in <cit.>, in which ϵ≪κ≪γ_ max and κ∼ O(γ^2). Recent observations from Advanced LIGO and Advanced Virgo constrain two recycled pulsars (PSR J0437-4715 and PSR J0711-6830) to have equatorial ellipticities (ε≡ |I_1-I_2|/I_3 ≃ 16 ϵκ) of less than 10^-8 <cit.>. An ellipticity of 10^-8 is also a typical value used in searching for CWs from small-ellipticity sources <cit.>. Population syntheses of Galactic disk double NS systems detectable by LISA <cit.> and TianQin <cit.> suggest that the existence of double NSs with orbital periods as low as 6 minutes. We assume that the orbital period under consideration is 6 minutes, which maximizes the strength of the orbital precession. Based on the current observations of 22 double NSs <cit.>, we select 10  ms as the typical spin period of the rapidly spinning NS <cit.>, and 1  kpc as the typical distance since about half of the known double NSs are located near that value <cit.>. As an example, we assume the component masses of a double NS system m_1 = m_2 = 2.0 M_⊙, the orbital period P_ b=6  min, and the opening angle of S precession cone is θ_ S=5π/12. The spinning NS's characteristic parameters are I_3=2.0× 10^38  kg m^2, ϵ=3.6× 10^-6, κ=1.75× 10^-4, γ=5.0× 10^-2 (equatorial ellipticity ε≡ |I_1-I_2|/I_3 = 1.0× 10^-8). From Eq. (<ref>) and Eq. (<ref>), one can obtain Ω_ r=628.32  Hz, Ω_ p=2.26× 10^-3  Hz, and Ω_ pre=7.50× 10^-7  Hz. During a free precession period (T=2π/Ω_ p=2785  s), Fig. <ref> shows the approximate solution for δΩ_i in Eqs. (<ref>) and (<ref>), which can reach up to 1.45× 10^-6  Hz for δΩ_1,2 and 3.62× 10^-8  Hz for δΩ_3. Note that the jagged profiles in this figure are due to a reduced sampling rate over a long spin precession period, as in the following figures. Fig. <ref> shows the different waveform components with spin precession incorporated during a spin precession period (T_ pre=2π/Ω_ pre=8.382 × 10^6  s). The values of the parameters used here are the same as in Fig. <ref>, and we set the spin period of the NS P_ s=10  ms, the inclination angle of J with respect to the line of sight ι=π/4, and the distance to the observer D=1  kpc. The modulated amplitude profiles for a binary depend on ι, θ_S, and Ω_ pre, which can be several times larger or smaller than the isolated case for different waveform components. The amplitude modulations shown here are only due to the NS's spin precession caused by spin-orbit coupling, the Doppler modulation will be included in Sec. <ref> (see the discussion at the end of Sec. <ref>). According to Eqs. (<ref>), the GW angular frequency components of an isolated NS are Ω_+,×^ iso = Ω_ r+Ω_ p, 2Ω_ r, and 2(Ω_ r+Ω_ p) for both + and × polarizations. Spin-orbit coupling in the binary can split these frequencies in the following way Ω_+^ iso ⟶Ω_+^ iso± n Ω_ pre (n=0,1,2) , Ω_×^ iso ⟶Ω_×^ iso± n Ω_ pre (n=1,2) . 
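The derived quantities quoted above can be approximately recovered with a few lines of Python. The sketch below is an order-of-magnitude consistency check only: it uses the leading-order relations Ω_r ≈ b = 2π/P_s and Ω_p ≈ bϵ√(1-16κ) (the slow frequency appearing in the closed-form δΩ_i, valid to leading order in the small parameters), approximates J by the Newtonian orbital angular momentum when evaluating Ω_pre, and takes the peak size of δΩ_1,2 from the closed-form expressions.

import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

# Fiducial system adopted in this section.
m1 = m2 = 2.0 * Msun
Pb, Ps = 6.0 * 60.0, 0.01                            # orbital and spin periods [s]
eps, kappa, thS = 3.6e-6, 1.75e-4, 5.0 * np.pi / 12.0

M, mu = m1 + m2, m1 * m2 / (m1 + m2)
r = (G * M * (Pb / (2.0 * np.pi))**2)**(1.0 / 3.0)   # Kepler's third law
J = mu * np.sqrt(G * M * r)                          # J ~ Newtonian orbital L
Omega_pre = G / (2.0 * c**2 * r**3) * (1.0 + 3.0 * M / m1) * J

b = 2.0 * np.pi / Ps
Omega_p = b * eps * np.sqrt(1.0 - 16.0 * kappa)

print("Omega_r   ~", b)                              # ~628.3
print("Omega_p   ~", Omega_p)                        # ~2.26e-3
print("Omega_pre ~", Omega_pre)                      # ~7.5e-7
print("T, T_pre  ~", 2 * np.pi / Omega_p, 2 * np.pi / Omega_pre)  # ~2.8e3 s, ~8.4e6 s
print("max |dOmega_1,2| ~", 2.0 * Omega_pre * np.sin(thS))        # ~1.45e-6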
Similar to the analysis of the GW spectrum of isolated systems <cit.>, the spectral analysis of the above frequency components can be used to infer the orbital period of the binary and the characteristic parameters of the NS. In order to quantitatively measure the degree of matching between these two types of waveforms, one can calculate the fitting factor (FF) <cit.> between the genuine GW waveforms generated by the spinning NS in a binary (denoted as h_ b) and the ones by an isolated NS (denoted as h_ i), given the latter has been used in the matched filtering of CW data analysis FF≡max _λ(h_ b, h_ i)/√((h_ b, h_ b)(h_ i, h_ i)) , where λ is the set of the parameters that characterize the waveforms. For a quasi-monochromatic signal, the inner product (h_ b, h_ i) can be simplified as (h_ b, h_ i) ≡∫_0^T_ obs h_ b(t) h_ i(t) dt , where T_ obs is the observation time. Fig. <ref> shows the FFs of different waveform components in h_ b and h_ i as a function of T_ obs. As we can see, the two waveforms only match well (FF>0.97) within roughly a few days (∼ O(10^5  s)) and then start to diverge rapidly. § DETECTING GW FROM SPINNING NS IN A BINARY §.§ Signal model in detector coordinate system The GW strain signal from a spinning triaxial non-aligned NS in a binary can be written as a sum of different waveform components h_n(t) (n=1,2,3) as follows <cit.> h(t) = ∑_n=1^3 h_n(t) = ∑_n=1^3 (F_+(t) H_+^(n)(t) + F_×(t) H_×^(n)(t) ) , where H_+,×^(n)(t) are the Doppler-modulated waveforms (cf. Eqs. (<ref>) below), and F_+,×(t) are the antenna pattern functions of GW detector F_+(t) =sinζ[a(t) cos 2 ψ_ p+b(t) sin 2 ψ_ p] , F_×(t) =sinζ[b(t) cos 2 ψ_ p-a(t) sin 2 ψ_ p] . Here, a(t) = 1/16sin 2 γ_ o(3-cos 2 λ)(3-cos 2 δ) ×cos[2(α-ϕ_ r-Ω_ Er t)] +3/4sin 2 γ_ ocos ^2 λcos ^2 δ -1/4cos 2 γ_ osinλ(3-cos 2 δ) sin[2(α-ϕ_ r-Ω_ Er t)] +1/4sin 2 γ_ osin 2 λsin 2 δcos[α-ϕ_ r-Ω_ Er t] -1/2cos 2 γ_ ocosλsin 2 δsin[α-ϕ_ r-Ω_ Er t] , b(t) = cos 2 γ_ osinλsinδcos[2(α-ϕ_ r-Ω_ Er t)] +1/4sin 2 γ_ o(3-cos 2 λ) sinδsin[2(α-ϕ_ r-Ω_ Er t)] +cos 2 γ_ ocosλcosδcos[α-ϕ_ r-Ω_ Er t] +1/2sin 2 γ_ osin 2 λcosδsin[α-ϕ_ r-Ω_ Er t] , where γ_ o characterizes the orientation of the detector with respect to the local geographical directions, ζ denotes the angle between the interferometer arms, λ is the geographical latitude of the detector's site, (α,δ) are the right ascension and declination of the source, ψ_ p is the GW polarization angle, Ω_ Er is the rotational angular frequency of the Earth, and ϕ_ r is the initial phase of the Earth's diurnal motion. Below we will consider how to incorporate Doppler modulation into the phases of the waveforms from the spinning NS undergoing spin precession (cf. Eqs. (<ref>)). Fig. <ref> shows the binary coordinate system (X_b, Y_b, Z_b) with Z_b axis aligned with the binary's total angular momentum J and the motion of spinning NS within it. (X_L, Y_L, Z_L) is the coordinate system with Z_L axis aligned with the binary's orbital angular momentum L (hereafter referred to as L-aligned coordinate system), in which X_L-Y_L represents the orbital plane. The origins of these two frames are both placed at the BB (O). The opening angle of L precession cone is θ_L. Suppose that at initial time, S and L are both in the Y_b-Z_b plane and the spinning NS sits on the X_b axis. After a period of time t, the orbital plane precessed by ϕ_ N, the spinning NS's position vector and the orbital longitude are r_1 and ψ_1, and its spin is S(t). (The Y-axes not drawn in Fig. <ref> are determined by the right-hand rule.) 
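Before completing the Doppler modulation in the next step, we note that a discretized version of the fitting factor defined above is simple to compute for sampled waveforms. The Python sketch below is a simplified illustration only: the two signals are placeholder quasi-monochromatic series rather than the waveforms of this paper, and the maximization is carried out over a constant phase offset instead of the full parameter set λ.

import numpy as np

def overlap(a, b, dt):
    # Discrete approximation of (a, b) = int_0^T a(t) b(t) dt.
    return np.sum(a * b) * dt

def fitting_factor(h_b, template, dt, phases):
    # Maximize the normalized overlap over a constant phase offset only.
    norm_b = np.sqrt(overlap(h_b, h_b, dt))
    best = -np.inf
    for phi in phases:
        h_i = template(phi)
        ff = overlap(h_b, h_i, dt) / (norm_b * np.sqrt(overlap(h_i, h_i, dt)))
        best = max(best, ff)
    return best

# Placeholder signals: a slowly phase-modulated "binary" signal versus an
# unmodulated "isolated" template at the same carrier frequency.
dt = 1e-3
t = np.arange(0.0, 2.0e3, dt)
f0 = 100.0
dphi = 0.3 * np.sin(2.0 * np.pi * t / 1.0e3)
h_b = np.cos(2.0 * np.pi * f0 * t + dphi)
iso_template = lambda phi: np.cos(2.0 * np.pi * f0 * t + phi)

print(fitting_factor(h_b, iso_template, dt, np.linspace(0.0, 2.0 * np.pi, 60)))

A full calculation would replace the placeholder series with the sampled waveforms and extend the maximization to all of λ.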
In the L-aligned coordinate system, the position vector of the spinning NS r_1L = (r_1 cos(ω_ bt),r_1 sin(ω_ bt),0). By two rotation transformations (first rotates by θ_L clockwise about the X_L axis, and then rotates by ϕ_N clockwise about the Z_b axis), the position vector r_1 in the binary coordinate system can be given by r_1 = r_1 ( [ cosψ _1 cosϕ _N - cosθ _L sinψ _1 sinϕ _N; cosθ _L sinψ _1 cosϕ _N + cosψ _1 sinϕ _N; sinθ _L sinψ _1; ]) with ψ _1=ω_ bt and ϕ _N=Ω_ pret. In the detector coordinate system, the Doppler shift of the GW frequency f_0 from the spinning NS in a binary system can be expressed as the combination of detector Doppler shift around the SSB and source Doppler shift around the BB Δ f_D = f_0 ( n· dr_ d/ dt/c+ n_ b· dr_1/ dt/c) , with <cit.> n·r_ d = R_E[cosλcosδcos (α-ϕ_r-Ω_ Er t) +sinλsinδ] +R_ES[cosαcosδcos (ϕ_ o+Ω_ Eo t) +(cosε_ esinαcosδ+sinε_ esinδ) sin (ϕ_ o+Ω_ Eo t) ] being the projection of the detector's position vector r_ d along the spinning NS's line of sight n in SSB coordinate system, where R_ E and R_ ES are the mean radius of the Earth and the mean distance from the Earth’s center to the SSB, Ω_ Eo is the mean orbital angular frequency of the Earth, ϕ_ o is the initial phase of the Earth's annual motion, and ε_ e is the ecliptic obliquity. - n_ b=(0,sinι,cosι) is the SSB's location in the binary coordinate system, orbital radius r_1=G^1/3m_2/(ω_ bM)^2/3 and angular frequency ω_ b = 2π /P_ b for a circular orbit. If there is no orbital precession, i.e., Ω_ pre=0, then the Doppler shift in Eq. (<ref>) reduces to the simple case as Eq. (6) in <cit.>. Since the waveform components in Eqs. (<ref>) can be decomposed into a series of sine and cosine functions in which the frequencies are linear combinations of Ω_ r, Ω_ p, and Ω_ pre, the Doppler-modulated waveforms H_+,×^(n)(t) in Eq. (<ref>) can be obtained by H_+^(n)(t) = h_+^(n)(Ω_l→Ω_l + Δ f_D) , H_×^(n)(t) = h_×^(n)(Ω_l→Ω_l + Δ f_D) , with l∈{ r, p, pre}. The binary is assumed to follow an invariant circular orbit when calculating the Doppler shift of the spinning NS. In fact, the orbit is constantly shrinking due to gravitational radiation. According to the orbital velocity evolution equation (d(v/c)/dt ∝ (v/c)^9, cf. <cit.>), its relative variation is ≲ 10^-5 for a half-year observation, resulting in a relative variation for the Doppler shift of ≲ 10^-5. Thus, we can ignore the reaction of gravitational radiation on the orbit for the observation time under consideration. §.§ Effects of spin-orbit coupling on parameter estimation In addition to spin precession, spin-orbit coupling also causes orbital plane precession, which is expected to introduce additional information into the Doppler-modulated waveforms H_+,×^(n)(t) (cf. Eqs. (<ref>)). We use the Fisher information matrix (FIM) to obtain a quantitative assessment of the parameter estimation accuracy for GW detection (e.g., see <cit.>). For GW signal h(t) (cf. Eq. (<ref>)) with parameter set λ, FIM is defined as Γ^ij≡( ∂ h/∂λ_i,∂ h/∂λ_j) . For a monochromatic signal of frequency f, the noise-weighted inner product (a, b) ≃2/S_n(f)∫_0^T_ obs a(t) b(t) dt <cit.>, where S_n(f) is the power spectral density of the instrumental noise at frequency f, and T_ obs is the observation time. The optimal signal-to-noise ratio (SNR) for signal detection is defined as SNR≡ (h,h)^1/2. The root-mean-square (RMS) error of parameter λ_i is estimated as Δλ_i = √(Σ_ii), where the covariance matrix Σ is the inverse of the FIM, i.e., Σ = Γ^-1. 
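The Fisher-matrix machinery above is straightforward to prototype with numerical derivatives. The Python sketch below is generic and illustrative: it uses a toy monochromatic model with parameters (ln A, f, φ) rather than the full waveform of this section, builds Γ_ij = (∂h/∂λ_i, ∂h/∂λ_j) by central finite differences with the white-noise form of the inner product, and reads the RMS errors off the inverse. The assumed flat noise level is only loosely inspired by the Cosmic Explorer sensitivity quoted below.

import numpy as np

def inner(a, b, dt, Sn):
    # Noise-weighted inner product (a, b) ~ (2 / S_n) * int a b dt.
    return 2.0 / Sn * np.sum(a * b) * dt

def fisher_matrix(model, params, t, dt, Sn, rel_step=1e-7):
    # Gamma_ij = (dh/dlam_i, dh/dlam_j) via central finite differences.
    p = np.asarray(params, dtype=float)
    derivs = []
    for i in range(len(p)):
        h = rel_step * abs(p[i]) if p[i] != 0.0 else rel_step
        pp, pm = p.copy(), p.copy()
        pp[i] += h
        pm[i] -= h
        derivs.append((model(pp, t) - model(pm, t)) / (2.0 * h))
    n = len(p)
    gamma = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            gamma[i, j] = inner(derivs[i], derivs[j], dt, Sn)
    return gamma

# Toy model: log-amplitude, frequency, and phase of a monochromatic signal.
def model(p, t):
    lnA, f, phi = p
    return np.exp(lnA) * np.cos(2.0 * np.pi * f * t + phi)

dt = 1e-3
t = np.arange(0.0, 1.0e3, dt)
Sn = (1.4e-25) ** 2                       # assumed flat PSD level
gamma = fisher_matrix(model, [np.log(1e-26), 100.0, 0.5], t, dt, Sn)
rms = np.sqrt(np.diag(np.linalg.inv(gamma)))
print(rms)                                # RMS errors Delta lambda_i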
Assuming λ=(lnh_10,lnh_20,lnh_30,α,sinδ,lnP_ b,cosι,θ_S,ψ_ p,lnΩ_ r,lnΩ_ p), in which the amplitudes for the different waveform components are defined as h_10=2Gb^2I_3ϵγ/c^4D, h_20=64Gb^2I_3ϵκ/c^4D, h_30=4Gb^2I_3ϵγ^2/c^4D . Similar to the sky localization error defined in <cit.>, we define the corresponding one for a source located at (α,δ), ΔΩ = 2 π√(Σ_ααΣ_sinδsinδ -Σ_αsinδ^2). In the following analysis, since the sensitivity of Einstein Telescope <cit.> in the frequency band of interest is not as good as that of Cosmic Explorer, we use Cosmic Explorer, which consists of two facilities (one 40 km on a side and one 20 km on a side), each with a single L-shaped detector <cit.>. Various angular parameters are taken as ζ=π/2, λ=0.764, γ_ o=1.5, ϕ_ r=ϕ_ o=0, α=1.209, δ=1.475, ψ_ p=1.0. Assuming a 40 km arm length with a low-frequency optimized sensitivity is used, the amplitude spectral density of the instrumental noise √(S_n(f_0)) = 1.36× 10^-25  Hz^-1/2 and √(S_n(2f_0)) = 1.66× 10^-25  Hz^-1/2 for f_0=100  Hz, the observation time T_ obs=2T_ pre=1.676 × 10^7  s. After the FIM calculation, the RMS errors of the estimated parameters for a typical spinning NS in a binary (the parameters used here are the same as in Fig. <ref> and Fig. <ref>) are shown in Fig. <ref> as red downward-pointing triangles. The results without spin-orbit coupling are represented by blue open squares for comparison. For the first three Doppler-modulated signal components h_n(t) (n=1,2,3) in Eq. (<ref>), their SNRs are 104, 8.5, and 7.6, respectively. In comparison, the corresponding SNRs are 74.3, 12.6, and 11.3 for the signals without spin-orbit coupling. The spin and orbital precession modulated signal h_1(t) increases its SNR by 40% compared with the triaxial aligned case, while the other two signals h_2(t) and h_3(t) both decrease by 33%. The fractional estimation errors for three amplitudes are inversely proportional to their SNRs. As one can see, the precession improves the sky localization by a factor of two and slightly improves the estimation of the orbital period. The most significant improvement comes from the estimation of the angles cosι and θ_S, both of which are improved by about 3 orders of magnitude. This is because these two angles are encoded in the amplitudes of the waveforms (cf. Eqs. (<ref>)), and they modulate the profiles of the waveforms in Fig. <ref>. § CONCLUSIONS In this work, we calculate the gravitational waveforms of a triaxial non-aligned NS in a compact binary system in which the effects of spin-orbit coupling have been incorporated. Then, we compare our waveforms with the ones commonly used in current CWs searches. Finally, we evaluated the parameter estimation accuracy for the signal detected by the proposed next-generation GW detector using the Fisher information matrix method. For a tight double NS system with a 6-min orbital period, by solving the precession equation with the perturbation method, we find that spin precession-induced correction to the spin angular frequencies of NS is in the same order of magnitude as the angular frequency of orbital precession. The fitting factor between the waveforms with and without spin precession will drop to less than 0.97 after a few days. The analytic waveforms show that spin-orbit coupling introduces additional modulation information that will help in improving the accuracy of parameter estimation in CW detection. 
The double NS system (consisting of a rapidly spinning NS and a nonspinning NS) considered in this work can be seen as a dual-line GW source, in which the orbital motion of the binary will emit low-frequency GWs in the mHz band in addition to the high-frequency GWs from the spinning NS. This dual-line GW source is of astrophysical interest: for example, the ratio of the strain amplitudes of the low- and high-frequency GWs can be used to constrain the NS's moment of inertia and ellipticity <cit.>, or combined with the angular momentum loss of the NS <cit.>. Since the angular frequency of the orbital precession contains information about the orbital period and mass of the binary, we can use it to infer binary parameters by combining information from the emitted GWs of the dual-line sources, and such studies are currently under our investigation. Y.W. gratefully acknowledges support from the National Key Research and Development Program of China (No. 2022YFC2205201 and No. 2020YFC2201400), the National Natural Science Foundation of China (NSFC) under Grant No. 11973024, the Major Science and Technology Program of Xinjiang Uygur Autonomous Region (No. 2022A03013-4), and the Guangdong Major Project of Basic and Applied Basic Research (Grant No. 2019B030302001). T. L. is supported by NSFC Grant No. 12003008 and the China Postdoctoral Science Foundation Grant No. 2020M682393. J.-W. C. acknowledges the support from the China Postdoctoral Science Foundation under Grant No. 2021M691146. S.D.M. is supported by U.S. National Science Foundation (NSF) grant PHY-2207935. § EXPRESSIONS FOR 𝒜 AND ℛ The symmetric matrix 𝒜 used in calculating the waveforms (cf. Eqs. (<ref>)) can be explicitly expressed as follows 𝒜_11 = 2 b I_3 ϵ (16 b (1-16 κ) κ + γ sin(t Ω_p)(b γ sin(t Ω_p) + 2 Ω_pre sinθ_S sin(t Ω_r))) , 𝒜_22 = 2 b I_3 ϵ (16 b κ (16 κ - 1) + γ cos(t Ω_p)(b γ cos(t Ω_p) + 4 Ω_pre sinθ_S sin^2(t Ω_r/2))) , 𝒜_33 = -2 b γ I_3 ϵ (b γ + 4 Ω_pre sinθ_S sin(t Ω_r/2) sin(t(2Ω_p + Ω_r)/2)) , 𝒜_12 = -b γ I_3 ϵ (b γ sin(2 t Ω_p) + 2 Ω_pre sinθ_S (sin(t Ω_p) - sin(t(Ω_p - Ω_r)))) , 𝒜_23 = b I_3 ϵ (b γ (16 κ + 1) sin(t Ω_p) + 4 κ Ω_pre sinθ_S (cos(t Ω_r) sin(2 t Ω_p) + 8 sin(t Ω_r))) , 𝒜_31 = b I_3 ϵ (b γ (1 - 32 κ) cos(t Ω_p) + Ω_pre sinθ_S + 4 κ Ω_pre sinθ_S (-8 + 8 cos(t Ω_r) + sin(2 t Ω_p) sin(t Ω_r))) . The transformation matrix ℛ used in calculating the waveforms (cf. Eqs. 
(<ref>)) can be explicitly expressed as follows ℛ_x1 = sin (tΩ _p)(cos (tΩ _pre)cos (t(Ω _p + Ω _r)) - cosθ _Ssin (tΩ _pre)sin (t(Ω _p + Ω _r)))+ 1/2cos (tΩ _p) × (cos(tΩ _pre)(16κ (cos(tΩ _r) + κ (3cos (t(2Ω _p - Ω _r))- 8cos (tΩ _r) + cos (t(2Ω _p + Ω _r))))sin(tΩ _p) + (γ ^2 - 2)sin (t(Ω _p + Ω _r)))+ sin (tΩ _pre)( - 2γsinθ _S + cosθ _S((γ ^2 - 2)cos (t(Ω _p + Ω _r)) + 16κsin (tΩ _p)(3κsin (t(2Ω _p - Ω _r)) + (8κ - 1)sin(tΩ _r)- κsin (t(2Ω _p + Ω _r)))))) , ℛ_x2 = 1/2sin (tΩ _p)(((γ ^2 - 2)cosθ _Scos (t(Ω _p + Ω _r))- 2γ (1 + 8κ )sinθ _S)sin (tΩ _pre) + (γ ^2 - 2)cos(tΩ _pre)sin (t(Ω _p + Ω _r)))- cos (tΩ _p)(cosθ _Ssin (tΩ _pre)(8κ (cos (tΩ _r) + κ (3cos (t(2Ω _p - Ω _r)) - 8cos (tΩ _r) + cos (t(2Ω _p + Ω _r))))sin(tΩ _p) - sin (t(Ω _p + Ω _r))) + cos (tΩ _pre)(cos (t(Ω _p + Ω _r)) + 8κsin(tΩ _p)(sin(tΩ _r)+ κ ( - 3sin (t(2Ω _p - Ω _r)) - 8sin(tΩ _r) + sin (t(2Ω _p + Ω _r)))))) , ℛ_x3 = γcosθ _S(1 + 4κ - 4κcos (2tΩ _p))cos (t(Ω _p + Ω _r))sin (tΩ _pre) + 1/2(γ ^2 - 2)sinθ _Ssin (tΩ _pre) + γcos (tΩ _pre)(1 + 8κsin ^2(tΩ _p))sin(t(Ω _p + Ω _r)) , R_y1 = 1/2(2(cos (t(Ω _p + Ω _r))sin(tΩ _p) + 4κ (cos (tΩ _r) + κ (3cos (t(2Ω _p - Ω _r))- 8cos (tΩ _r) + cos (t(2Ω _p + Ω _r))))sin(2tΩ _p))sin (tΩ _pre)+ cos (tΩ _p)(2γcos (tΩ _pre)sinθ _S + (γ ^2 - 2)sin (tΩ _pre)sin (t(Ω _p + Ω _r))) + cosθ _Scos (tΩ _p)( - (γ ^2 - 2)cos (tΩ _p)cos (t(Ω _p + Ω _r))+ 2sin (tΩ _p)sin (t(Ω _p + Ω _r)) + 8κsin(2tΩ _p)(sin(tΩ _r) + κ ( - 3sin (t(2Ω _p - Ω _r)) - 8sin(tΩ _r) + sin (t(2Ω _p + Ω _r)))))) , ℛ_y2 = 1/2(2γ (1 + 8κ )cos(tΩ _pre)sinθ _Ssin (tΩ _p) + cosθ _Scos (tΩ _pre)( - (γ ^2 - 2)sin (tΩ _p)cos (t(Ω _p + Ω _r)) + 8κ (cos (tΩ _r) + κ (3cos (t(2Ω _p - Ω _r)) - 8cos (tΩ _r) + cos (t(2Ω _p + Ω _r))))sin (2tΩ _p) - 2cos (tΩ _p)sin (t(Ω _p + Ω _r))) + sin (tΩ _pre)( - 2cos(tΩ _p)cos(t(Ω _p + Ω _r)) + (γ ^2 - 2)sin (tΩ _p)sin (t(Ω _p + Ω _r))+ 8κsin (2tΩ _p)(3κsin (t(2Ω _p - Ω _r)) + (8κ - 1)sin (tΩ _r) - κsin (t(2Ω _p + Ω _r))))) , ℛ_y3 = γcosθ _S( - 1 - 4κ + 4κcos (2tΩ _p))cos (t(Ω _p + Ω _r))cos (tΩ _pre) - 1/2(γ ^2 - 2)sinθ _Scos (tΩ _pre) + γsin (tΩ _pre)(1 + 8κsin ^2(tΩ _p))sin(t(Ω _p + Ω _r)) , ℛ_z1 = γcosθ _Scos (tΩ _p) + 1/2sinθ _S((γ ^2 - 2)cos (t(Ω _p + Ω _r))cos (tΩ _p) - 2sin(tΩ _p)sin(t(Ω _p + Ω _r)) + 8κsin (2tΩ _p)(3κsin(t(2Ω _p - Ω _r))+ (8κ - 1)sin(tΩ _r) - κsin(t(2Ω _p + Ω _r)))) , ℛ_z2 = γ (1 + 8κ )cosθ _Ssin (tΩ _p) + 1/2sinθ _S((γ ^2 - 2)cos (t(Ω _p + Ω _r))sin (tΩ _p)+ 2cos (tΩ _p)sin(t(Ω _p + Ω _r)) - 8κ (cos (tΩ _r) + κ (3cos (t(2Ω _p - Ω _r)) - 8cos (tΩ _r) + cos (t(2Ω _p + Ω _r))))sin(2tΩ _p) ) , ℛ_z3 = 1/2(2 - γ ^2)cosθ _S + γcos (t(Ω _p + Ω _r))sinθ _S(1 + 8κsin^2(tΩ _p)) . § EXPRESSIONS FOR WAVEFORM COMPONENTS The waveform components to the order O(Ω_ pre) can be expressed as follows h_ + ^( pre) = GbI_3ϵΩ _presinθ _S/4c^4D(cos(tΩ _r)(sin (2θ _S)(-(3 + cos (2ι ))cos(2tΩ _pre)+6 sin ^2ι) + 4cos (2θ _S)cos (tΩ _pre)sin(2ι)) + 2(-2cosθ _Ssin(2ι )sin(tΩ _pre) + (3 + cos (2ι))sinθ _Ssin (2tΩ _pre))sin (tΩ _r)) , h_× ^(pre) = GbI_3ϵΩ _presinθ _S/c^4D(cos (tΩ _r)(2cos (2θ _S)sinιsin (tΩ _pre)- cosιsin (2θ _S)sin (2tΩ _pre)) + 2( - cosιsinθ _Scos (2tΩ _pre) + cosθ _Scos (tΩ _pre)sinι )sin (tΩ _r)) . Since h_ +,× ^( pre)/h_ +,× ^(1)∼Ω_ presinθ_S/(b γ),  h_ +,× ^( pre)/h_ +,× ^(2)∼Ω_ presinθ_S/(16 b κ), for typical parameters used as in Fig. <ref>, then h_ +,× ^( pre)∼ 0.01 Ω_ preh_ +,× ^(1)∼Ω_ preh_ +,× ^(2). Thus, these h_ +,× ^( pre) components are also negligible compared to h_+,×^(1) and h_+,×^(2). 
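These order-of-magnitude estimates are easy to check numerically. The short Python computation below evaluates the two ratios quoted above for the fiducial parameters used in the text and confirms that both are far below unity.

import numpy as np

# Fiducial parameters used in the text.
b, gamma, kappa = 2.0 * np.pi / 0.01, 5.0e-2, 1.75e-4
Omega_pre, thS = 7.5e-7, 5.0 * np.pi / 12.0

ratio1 = Omega_pre * np.sin(thS) / (b * gamma)         # h^(pre) / h^(1)
ratio2 = Omega_pre * np.sin(thS) / (16.0 * b * kappa)  # h^(pre) / h^(2)
print(ratio1, ratio2)   # both << 1, so the O(Omega_pre) terms can be dropped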
§ THE RESIDUAL OF TWO SOLUTIONS To confirm the fidelity of the approximate calculation, we use Euler angles to accurately calculate the angular frequency. Although this method can give the exact solution ω_i^E, it is too complicated to give a simple analytical waveform like Eqs. (<ref>). They can be calculated as (an overdot represents d/dt) <cit.> ω_1^E = ϕ̇_ bsinθ_ bsinψ_ b + θ̇_ bcosψ_ b , ω_2^E = ϕ̇_ bsinθ_ bcosψ_ b-θ̇_ bsinψ_ b , ω_3^E = ϕ̇_ bcosθ_ b + ψ̇_ b , with the Euler angles derived from the primitive (no approximation) rotation matrix (cf. Eq. (<ref>)), θ_ b = arccos(ℛ_z3) , ϕ_ b = arctan(-ℛ_x3/ℛ_y3) , ψ_ b = arctan(ℛ_z1/ℛ_z2) . The absolute errors between the approximate angular frequency ω_i and the exact angular frequency ω_i^E are shown in Fig. <ref>. During two orbital precession periods, the relative deviation of the analytic approximation from the exact solution is ≲ 10^-9 for ω_1,2 and ≲ 10^-13 for ω_3. Therefore, the solution of the angular frequency is accurate enough for the calculation of the waveforms. § EFFECTS OF ORBITAL PRECESSION ON DOPPLER SHIFT The Doppler shift correction is defined as δ (ΔΩ_n) = ΔΩ_n - ΔΩ_n(Ω_ pre=0) with n={ r, p, pre}. It measures the effects of spin-orbit coupling on the phase of GWs of the spinning NS in a binary. As seen in Fig. <ref>, the deviation of the GW frequency can reach ∼ 0.5% for f_0 and ∼ 1% for 2f_0 if we do not consider the orbital plane precession.
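To see how large the orbital-plane precession correction to the binary Doppler shift can become over a sizable fraction of a precession cycle, one can evaluate the line-of-sight velocity of the spinning NS directly from the position vector r_1 given in the main text, with and without Ω_pre. The Python sketch below is illustrative only: θ_L is an arbitrary assumed value, the velocity is obtained by numerical differentiation, and the line-of-sight unit vector follows the convention of the main text only up to an overall sign.

import numpy as np

def r1_vec(t, r1, omega_b, theta_L, Omega_pre):
    # Position of the spinning NS in the binary (J-aligned) frame.
    psi1, phiN = omega_b * t, Omega_pre * t
    x = r1 * (np.cos(psi1) * np.cos(phiN)
              - np.cos(theta_L) * np.sin(psi1) * np.sin(phiN))
    y = r1 * (np.cos(theta_L) * np.sin(psi1) * np.cos(phiN)
              + np.cos(psi1) * np.sin(phiN))
    z = r1 * np.sin(theta_L) * np.sin(psi1)
    return np.array([x, y, z])

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
m1 = m2 = 2.0 * Msun
M, Pb = m1 + m2, 360.0
omega_b = 2.0 * np.pi / Pb
r1 = G**(1.0 / 3.0) * m2 / (omega_b * M)**(2.0 / 3.0)   # circular orbit

Omega_pre, theta_L, iota, f0 = 7.5e-7, 0.1, np.pi / 4.0, 100.0
n_b = np.array([0.0, np.sin(iota), np.cos(iota)])

t = np.arange(0.0, 8.0e5, 1.0)                  # ~10% of a precession period
v_pre = np.gradient(r1_vec(t, r1, omega_b, theta_L, Omega_pre), t, axis=1)
v_fix = np.gradient(r1_vec(t, r1, omega_b, theta_L, 0.0), t, axis=1)

df_pre = f0 * (n_b @ v_pre) / c                 # binary Doppler shift, precessing
df_fix = f0 * (n_b @ v_fix) / c                 # same orbit, no precession
print(np.max(np.abs(df_pre - df_fix)), np.max(np.abs(df_fix)))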
http://arxiv.org/abs/2307.02624v1
20230705195430
Inferring microbial interactions with their environment from genomic and metagenomic data
[ "James D. Brunner", "Laverne A. Gallegos-Graves", "Marie E. Kroeger" ]
q-bio.QM
[ "q-bio.QM" ]
Inferring microbial interactions with their environment from genomic and metagenomic data James D. Brunner1,2*, Laverne A. Gallegos-Graves1, Marie E. Kroeger1¤, 1 Biosciences Division, Los Alamos National Laboratory, Los Alamos, NM, USA 2 Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM, USA ¤Current Affiliation: In-Pipe Technology, Wood Dale, IL, USA * jdbrunner@lanl.gov § ABSTRACT Microbial communities assemble through a complex set of interactions between microbes and their environment, and the resulting metabolic impact on the host ecosystem can be profound. Microbial activity is known to impact human health, plant growth, water quality, and soil carbon storage, which has led to the development of many approaches and products meant to manipulate the microbiome. In order to understand, predict, and improve microbial community engineering, genome-scale modeling techniques have been developed to translate genomic data into inferred microbial dynamics. However, these techniques rely heavily on simulation to draw conclusions, which may vary with unknown parameters or initial conditions, rather than on more robust qualitative analysis. To better understand microbial community dynamics using genome-scale modeling, we provide a tool to investigate the network of interactions between microbes and environmental metabolites over time. Using our previously developed algorithm for simulating microbial communities from genome-scale metabolic models (GSMs), we infer the set of microbe-metabolite interactions within a microbial community in a particular environment. Because these interactions depend on the available environmental metabolites, we refer to the networks that we infer as metabolically contextualized, and so name our tool MetConSIN: Metabolically Contextualized Species Interaction Networks. § AUTHOR SUMMARY We present a method for analysis of community dynamic flux balance analysis by constructing an interaction network between microbes and metabolites in a microbial community. To do so, we reformulate community-wide dynamic flux balance analysis as a sequence of ordinary differential equations, which can in turn be interpreted as networks. We then provide the sequence of interaction networks, which depend on and dynamically alter the available metabolite pool, as well as the time-averaged network over the course of simulated growth on a finite-resource medium. § INTRODUCTION Microorganisms have profound impacts on ecosystems ranging from the human gut to forest soils to plant root systems. In humans, recent advances in technology have created a plethora of works describing the differences in microbial community composition and function between diseased patients and healthy controls <cit.>, clearly demonstrating that microbial communities play an important role in human health. Likewise, environmental microbial communities have been found to affect biogeochemical cycling in soil <cit.>, leading to changes in plant decomposition and soil carbon sequestration that affect the amount of greenhouse gases in the atmosphere. Even in plants, rhizosphere microbial communities affect growth and resilience <cit.> as well as response to drought <cit.>. To understand and predict the effects of microbial communities on their environment, we must first understand how these communities assemble and interact. 
Biotic interactions between microbes are a driving force in community assembly, with both positive and negative interactions between microorganisms creating different community compositions. Moreover, the ability for non-resident microorganisms to invade the community is also largely controlled by biotic interactions<cit.>, which makes it critical to understand these interactions to accurately predict treatment success for microbiome manipulation . It is well established that community structure is important in determining the impact of the microbiome on its host environment. For example, disease-free asymptomatic individuals will have pathogenic bacteria in their microbiome<cit.>, suggesting that community structure and microbial interactions affect the host-microbe relationship. The advent of modern sequencing and metabolic pathway analysis has led to an effort to organize this data into useful models of microbes and microbial communities. These models, which represent mathematically the internal network of chemical reactions within a cell's metabolism are called genome-scale metabolic models (GSMs)<cit.>. GSMs and the constraint-based reconstruction and analysis (COBRA) methods that make use of them have shown growing promise in predicting and explaining the structure and function of microbial communities <cit.>. However, the complexity of these models means that analysis is often based only on simulation, and is very sensitive to parameters and other assumptions. For example, many modern community metabolic modeling methods seek to predict co-culture growth or biomass at chemostatic equilibrium using artificial community-wide constraints <cit.>. On the other hand, dynamic methods that use predictions about growth rate and metabolite consumption to construct a dynamical system suffer from dependence on unknown metabolic parameters and initial conditions, as well as heavy computational cost <cit.>. In fact, most tools for community modeling only provide predictions of species growth rates and metabolite consumption, without providing an understanding of the fundamental interactions that lead to these predictions <cit.>. Some qualitative insight into the systems is possible using simulated knock-out experiments <cit.> or simplifying the system <cit.>. However, new methods for qualitative analysis of community metabolic models are needed. An interaction network provides an interpretable object that can be used to characterize a microbial community in more depth than composition alone <cit.>, and suggest keystone taxa and other functional properties of the community <cit.>. These advantages, and the apparent importance of microbial interactions, have led to the use of network inference and analysis for understanding important phenomena including disease treatment <cit.> and human impact on the climate <cit.>. The most commonly used method for network inference involves computing the propensity for microbes to appear together in a sample, most commonly defined by co-occurrence frequency, correlation, or covariance <cit.>. More sophisticated methods for inferring associations between microbes include the use of regression-based and probabilistic models <cit.> or fitting to time-longitudinal data <cit.>. 
Additionally, some modern methods have sought to combine mechanistic hypotheses with statistical network building using machine learning approaches by incorporating “background knowledge" of known microbial interactions <cit.> or using simple microbial characteristics along with a set of known interactions<cit.>. GSMs and COBRA modeling provide an attractive avenue for a “bottom up" approach to network building from underlying metabolic mechanism <cit.>. This can be done using simulated knock-out experiments <cit.>, but this approach suffers from a focus on direct microbe-microbe interactions, which lead to models that lack the complexity of full metabolite mediated networks <cit.>. While differences in networks across meta-groups may provide insight, these networks in general provide few avenues for prediction and design. Patterns in network structure cannot be directly related to function without further study, and networks built in this way cannot account for dynamically changing interactions across perturbations in the environment. In this manuscript, we present a method for inferring interactions between microbes and metabolites within a microbial community by leveraging genome-scale metabolic models (GSMs). This method requires only some method of constructing GSMs as well as an estimate of the metabolic environment of the community. GSMs can be built as long as a genome can be assigned to each member of the community, using automated construction methods such as CarveME <cit.> or modelSEED <cit.>. Assigning genomes to community members in a sample can be done with genomic or metagenomic data, or if that data is not available, a less accurate assessment can be done by matching amplicon sequence data with previously characterized genomes. Our method is based on Flux balance analysis (FBA), which allows us to infer microbial growth and exchange of metabolites with the environment. These can be combined into a dynamical system, called dynamic flux balance analysis (DFBA) which in turn can be represented as a sequence of networks. Simulation of dynamic flux balance analysis requires the solution to a linear optimization problem at each time-step. These solutions can be found without repeated optimization by using a basis for an initial solution, which allows us to find new solutions as the problem constraints change simply by solving a linear system of equations. This means that we can reformulate the dynamical system as an ordinary differential equation (ODE) that has solutions that match the solution to the DFBA problem for some time interval. Finally, this ODE system can be naturally interpreted as a network of interactions between microbes and metabolites, and also provides a network of interactions between the metabolites that is mediated by the microbial metabolisms of the community. <Ref> provides a graphical summary of the method. § BACKGROUND §.§ Dynamic flux balance analysis Advances in genetic sequencing have led to the construction of genome scale models (GSMs) of the metabolic pathways of microbial cells, and to methods to analyze and draw insight from such large scale models <cit.>. Constraint based reconstruction and analysis (COBRA) is used to model steady state fluxes ψ_i through a microorganism's internal metabolic reactions under physically relevant constraints <cit.>. 
Flux balance analysis (FBA) is a COBRA method that optimizes some combination of internal reaction fluxes which correspond to increased cellular biomass, subject to the constraint that the cell's internal metabolism is at equilibrium. Precisely, flux balance analysis assumes that cell growth and metabolic flux can be determined by solving the following linear program <cit.>: {[ max(ψ̱·γ̱); Γ^†ψ̱ = 0; c̱^1(y̱) ≤Γ^* ψ̱≤c̱^2(y̱); ḏ^1 ≤ψ̱≤ḏ^2 ]} where the matrices Γ^*,Γ^† together represent the stoichiometry of the cell's metabolism, the vector ψ̱ represents the flux through the cell's internal reactions, the objective vector γ̱ encodes the cell's objective, exchange constraints c̱^1,c̱^2 are determined in part by available external metabolites and internal constraints d^1,d^2 are known. Exchange rates v_j of metabolite j between the cell and its environment are in turn determined by internal flux according to v̱ = Γ^*ψ̱. For convenience, we define a vector c̱ = (c̱_1,ḏ_1,c̱_2,ḏ_2,0̱) to be the vector of all of the problems constraints. Solutions to FBA provide a rate of increase of biomass which can be interpreted as a growth rate for a cell. Furthermore, FBA solutions allow us to compute the vector v̱, which represents metabolite exchange between the cell and an external metabolite pool. By assuming that constraints on nutrient exchange reactions within the metabolic network are functions of the available external metabolites, the coupled system of microbe and environment can be modeled. For a community of microbes x̱ = (x_1,...,x_p) in an environment defined by the concentration of nutrients y̱ = (y_1,...,y_m) this model has the form <cit.>: dx_i /dt= x_i (γ̱_i·ψ̱_i) dy_j/dt = -∑_i=1^p x_i (Γ^*_i ψ̱_i)_j with ψ̱_i determined separately for each organism according to a linear program of the form <ref>. This system is referred to as dynamic flux balance analysis (DFBA). Note that this is a metabolite mediated model of the community, meaning that the coupling of the growth of the separate microbes is due to the shared pool of metabolites y̱. §.§ Piece-wise smooth representation Simulation of the dynamical system given by <ref> can be accomplished by leveraging the fundamental theorem of linear programming<cit.>, which states that if <ref> has an optimal solution, then it has an optimal solution that can be represented as the solution to an invertible system of linear equations<cit.>. This means that there is some invertible matrix B and index set such that ψ̱=B^-1c̱ is an optimal solution to the linear program (where for ease of notation we substitute c̱ = c̱_). The key observation allowing efficient forward simulation of <ref> is that as the constraints c̱^1(y̱),c̱^2(y̱) vary, the matrix B does not change. In other words, there is some time interval such that we can replace the linear program <ref> with the linear system of equations B_iψ̱_i=c̱_i(y̱) for some time-interval, where c̱_i is a subset of the bound functions c̱_i. At the end of this time interval, the solution to <ref> stops obeying the problem constraints, and new B_i must be chosen. Putting this together, we can define a sequence of time intervals [t_0,t_1),[t_1,t_2),...,[t_n-1,T) such that solutions to the system defined by dynamic FBA for a community (<ref>) are solutions to the system of ODEs dx_i /dt= x_i (γ̱_i· (B^k_i)^-1c̱^k_i(y̱)) dy̱/dt = -∑_i=1^p x_i Γ^*_i (B^k_i)^-1c̱^k_i(y̱). on the interval [t_k,t_k+1) for some invertible matrices B^k_i. 
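As a concrete illustration of the linear program that must be solved at each time step, the following Python sketch sets up a deliberately tiny FBA-style problem with scipy.optimize.linprog. The stoichiometry, bounds, and objective are invented for illustration and are not a real genome-scale model; only the structure (maximize γ·ψ subject to Γ†ψ = 0 and flux bounds, with the uptake bound playing the role of c²(y)) mirrors the program above.

import numpy as np
from scipy.optimize import linprog

# Invented toy network: one internal metabolite A and three reactions.
# R1 imports A from the environment, R2 converts A into biomass, R3 secretes A.
# Internal steady state (Gamma^dagger psi = 0) requires psi1 - psi2 - psi3 = 0.
Gamma_dag = np.array([[1.0, -1.0, -1.0]])     # rows: internal metabolites
gamma_obj = np.array([0.0, 1.0, 0.0])         # biomass flux is the objective

uptake_bound = 5.0                            # plays the role of c^2(y)
bounds = [(0.0, uptake_bound),                # R1: limited by the environment
          (0.0, 10.0),                        # R2: constant internal bound
          (0.0, 10.0)]                        # R3: constant internal bound

# linprog minimizes, so negate the objective to maximize gamma . psi.
res = linprog(c=-gamma_obj, A_eq=Gamma_dag, b_eq=[0.0], bounds=bounds,
              method="highs")
psi = res.x
print("growth rate gamma.psi =", gamma_obj @ psi)   # uptake-limited: 5.0
print("fluxes psi =", psi)

The optimal basis of such a program is what supplies the invertible matrix B in the piecewise representation above; in a dynamic FBA step, the resulting ψ would update the biomass and the external metabolite pool before the bounds are recomputed.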
The challenge of efficient forward simulation of DFBA is then in finding the matrices B_i^k, which may be non-unique. In previous work, we presented a method for choosing the set of B_i^k that allow forward simulation so that t_k+1 > t_k, and created a python packaged called SurfinFBA for simulation. Furthermore, we have recently improved this method to increase the length of the time intervals [t_k,t_k+1) (see appendix). This improvement is packaged with the MetConSIN package, which includes SurfinFBA. § METHODS §.§ MetConSIN network construction The dynamical system defined by DFBA (<ref>) can be simulated but is difficult to interpret and analyze, especially when accounting for uncertainty in initial conditions and bound functions c̱^1_i(y̱),c̱^2_i(y̱). However, <ref> suggest that the system can be interpreted as a network of interactions between microbes and metabolites on the time interval [t_k,t_k+1) with only mild assumptions on c̱^1_i(y̱),c̱^2_i(y̱). DFBA therefore implies a sequence of interaction networks representing the dynamics of a microbial community. Furthermore, for a fixed metabolic environment (i.e. fixed y̱) DFBA provides an interaction network representing a snapshot of community metabolic activity. Without loss of generality, we may construct <ref> so that the forward direction (i.e. positive flux) of each of the first m reactions transports one of the m environmental metabolites into the cell. Then we assume that c̱^2(y̱) has the form c̱^2_i(y̱) = (c^2_i1(y_1),...,c^2_im(y_m)) with non-decreasing c_ji^2(y_j), and that c̱^1_i(y̱) = c̱^1_i is constant. In plain language, we assume that the fluxes of reactions that transport environmental metabolites into the cell are bounded by the availability of the corresponding environmental metabolites, and the other reactions have constant bounds. Under this assumption, for time-interval k, <ref> can be rearranged into the form dx_i/dt =C_i^kx_i + ∑_j=1^m a_ij^k x_i c^2_ij(y_j) = x_i (C_i^k + ∑_j=1^m a_ij^k c^2_ij(y_j)) where C_i^k is a constant that we refer to as intrinsic growth, and the a_ij^k are combinations of entries in γ_i and (B^k_i)^-1. Likewise, <ref> can be rearranged into the form dy_l/dt = -∑_i=1^p(D_il^kx_i + ∑_j=1^m b_ijl^kx_i c^2_ij(y_j)) where D_il^k is a constant, and the b_ijl^k are entries of the matrix Γ_i^*(B_i^k)^-1. We may now interpret these ODEs as networks of interactions term by term. <Ref> can be interpreted as growth of a microbial population proportional to the population biomass, with growth rate modified by the environmental metabolites y_j. This is similar to a generalized Lotka-Volterra model <cit.>, and can be naturally represented by a set of network edges pointing from a metabolite to the microbe with weights a_ij^k. The terms in <ref> are slightly more complicated to interpret. The terms D_il^kx_i represent some effect of the microbe i on the available biomass of j over the time interval [t_k,t_k+1) which only changes with the biomass x_i of microbe i over this interval. This effect is the result of growth pathways that do not depend on metabolite availability, and may be 0. Additionally, when j=l, the term b^k_illx_i c^2_il(y_l) can be interpreted as pairwise interactions between microbe i and metabolite l, e.g. consumption of a carbon source. These two sets of terms can be represented by a set of network edges pointing from the microbe to the metabolite, with weights D_il^k + b_ill^k. The remaining terms represent interactions that involve two metabolites and one microbe. 
In the formalism of interaction network theory (see <cit.>), these remaining terms can be interpreted as reactions of the form X_i + Y_j → X_i + Y_j + Y_l if b_ijl < 0 (and so the interaction increases the available biomass of metabolite l), meaning that microbe i and metabolite j interact to form metabolite l. This means, e.g., that metabolite l is created as a byproduct of the metabolism of metabolite j by microbe i. In this case, we again represent the interaction as a network edge from microbe i to metabolite l, but now annotate the edge with the information that this interaction is mediated by metabolite j. Finally, we may do the same if b_ijl > 0, although we note now that this represents a non-autocatalytic interaction, meaning that the available biomass of metabolite l is reduced independent of the current available biomass. While this seems counter-intuitive, it arises when metabolite l is consumed in some metabolic pathway but is not the rate-limiting external metabolite for that pathway. In fact, when enough of the biomass is consumed so that metabolite l becomes rate-limiting, the system will transition to the next interval [t_k+1,t_k+2) and the ODEs <ref> will change. This transition ensures that the non-negative orthant is forward invariant for the DFBA system, meaning the system will not reach a non-physical state of negative biomass. The mapping from the ODEs to a microbe-metabolite network is summarized in <ref>. §.§ Metabolite interaction network construction In addition to the microbe-metabolite interaction network described above, the last set of interactions suggests that a second network can be formed which includes only the metabolites. In the microbe-metabolite network, the terms b_ijl^k x_i c^2_ij(y_j) in <ref> describe how microbe i affects the available biomass of metabolite l as mediated by metabolite j. We may instead interpret this as metabolite j affecting the available biomass of metabolite l through reactions carried out by microbe i. This interpretation suggests a network of edges Y_j → Y_l labeled by the microbe whose metabolism contributes the edge. §.§ Microbe interaction network construction While dynamic FBA can be written as a series of microbe-metabolite interaction networks, researchers are often interested in the emergent interactions between microbes themselves. MetConSIN provides a simple heuristic for inferring these interactions based on the competitive and cross-feeding interactions of the microbe-metabolite network. The heuristic is as follows: to determine the effect of microbe X_i on the growth of microbe X_j, we find the set of all paths of length two of the form X_i → Y_l → X_j with weights in the microbe-metabolite network w_il and w_lj. If w_il < 0, meaning that X_i consumes or otherwise depletes Y_l, while w_lj > 0, e.g., Y_l is a limiting resource for the growth of X_j, then this pair of interactions can be interpreted similarly to competition between X_i and X_j, although this competition does not need to be symmetric. Conversely, if w_il > 0 and w_lj > 0, then the presence of X_i will increase the growth of X_j through cross-feeding. We therefore take as the composite edge weight w̃_ij of the emergent interaction X_i → X_j the sum over all such paths of the products of the weights of the edges in the path: w̃_ij = ∑_l:X_i → Y_l → X_j w_il w_lj. 
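A minimal sketch of this construction in Python is shown below: a hypothetical microbe-metabolite network is stored as a directed graph, and the composite microbe-microbe weights w̃_ij are obtained by summing over the length-two paths. All edge weights are invented for illustration and are not MetConSIN output; negative microbe→metabolite weights denote depletion, as in the heuristic above.

import networkx as nx

microbes, metabolites = ["X1", "X2"], ["Y1", "Y2"]

# Hypothetical weights for one time interval [t_k, t_{k+1}); values invented.
met_to_mic = {("Y1", "X1"): 0.8, ("Y1", "X2"): 0.6, ("Y2", "X2"): 0.3}
mic_to_met = {("X1", "Y1"): -1.0, ("X2", "Y1"): -0.5,   # depletion (< 0)
              ("X1", "Y2"): 0.4}                        # production (> 0)

G = nx.DiGraph()
G.add_nodes_from(microbes, kind="microbe")
G.add_nodes_from(metabolites, kind="metabolite")
for (j, i), w in met_to_mic.items():
    G.add_edge(j, i, weight=w)
for (i, l), w in mic_to_met.items():
    G.add_edge(i, l, weight=w)

# Composite weights: sum over all length-two paths X_i -> Y_l -> X_j.
W_tilde = {}
for i in microbes:
    for j in microbes:
        if i == j:
            continue
        W_tilde[(i, j)] = sum(G[i][l]["weight"] * G[l][j]["weight"]
                              for l in metabolites
                              if G.has_edge(i, l) and G.has_edge(l, j))

print(W_tilde)   # negative entries suggest competition, positive cross-feeding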
§.§ Sequencing of Soil Isolates §.§.§ Bacterial Isolation Ten bacterial isolates were originally isolated using serial dilution from soils collected in Utah<cit.> (38.67485 N, 109.4163 W, 1310 elevation), New Mexico (35.4255167 N, 106.6498 W, 5405 elevation), and Colorado (37.23081667 N, 107.8599667 W, 6484 elevation). These bacterial isolates were grown on either Caulobacter medium, 1/10 Tryptic soy Medium (TSB), or Nutrient medium at 30oC for 24-72 hours depending on the strain. Single colonies were then transferred in their respective growth medium and grown for 24-72 hours at 30oC while shaking at 250rpm. Bacterial biomass was harvested from overnight cultures by centrifugation and High Molecular Weight (HMW) DNA extractions were completed using the Qiagen MagAttract HMW DNA Kit (Qiagen, Hilden, Germany) following manufacturer's protocol. Two of the bacterial isolates required an additional clean-up after DNA extraction which was completed using the Qiagen PowerClean Pro Clean-up Kit (Qiagen, Hilden, Germany) following manufacturer's protocol. §.§.§ Library Preparation DNA library preparation and sequencing was completed at the LANL Genomics Facility as described in detail below. DNA initial quantification was done using Qubit High sensitivity ds DNA kit (Invitrogen). DNA integrity was assessed on Tape Station using gDNA Screen tape (Agilent Technology). DNA purity ratios were determined on NanoDrop 1 spectrophotometer (ThermoScientific). 1ug of genomic DNA for each sample was sheared using g-Tubes (Covaris, USA). Shearing parameters were chosen according to the DNA integrity of a particular isolate. All but two samples were sheared the following way: shear 2 min at 7,000 rpm, flip the tube, and shear 2 min at 7,000 rpm. More fragmented samples were sheared using the following parameters: shear 2 min at 3,500 rpm, flip the tube, and shear 2 min at 3,500 rpm.Sheared DNA was collected and purified using AMPure PB beads (Pacific Bioscience, USA) as per PacBio protocol. The quality and quantity of the purified DNA were assessed using the TapeStation and Qubit as described above. SMRT bell templates were constructed according to the PacBio protocol using Express Template Prep Kit 2.0. First, DNA underwent damage repair, end repair and A-tailing. It was followed by the barcoded overhang adapter ligation and purification with 0.45X volume of AMPure PB beads (Pacific Bioscience, USA). The barcoded samples were pooled in the equimolar amount according to the volumes provided in the PacBio Microbial Multiplexing Calculator. The pooled SMRT bell library was quantified using the Qubit DNA HS kit (Invitrogen) and the average fragment size was determined on Bioanalyzer using the DNA HS kit (Agilent). The conditioned sequencing primer v. 4 was annealed and Sequel II DNA Polymerase 2.0 was bound to the SMRT bell library. The template/ DNA polymerase complex was diluted and purified with 1.2X volume of AMPure PB beads (Pacific Bioscience, USA). The complex was sequenced on PacBio Sequel II instrument, using 1 SMRT cell 8M and Sequencing chemistry 2.0, 30 hour movies were recorded. §.§.§ Sequencing The raw PacBio reads were converted to PacBio HiFi reads using the “CCS with Demultiplexing” option in SMRTLink 11.0.0.146107. This resulted in a total of 1,532,731 HiFi reads for a total yield of 8.3 Gbp. The median read quality was Q37 with a mean read length of 5,409 bp. The reads were assembled using Flye v.2.9-b1768. Putative number of plasmids were estimated by looking at the files output by Flye. 
This file indicates if a contig is circular and/or a repeat. Contigs that were indicated as circular but not a repeat, as well as under 500 Kbp were assumed to be plasmids. Contigs that had the same attributes but were over 500 Kbp were assumed to be a complete bacterial chromosome. Then, the assemblies were annotated using Prokka v.1.14.6. The taxonomy of the genomes were derived using gtdbtk v.1.5.0. §.§.§ Genome-Scale Model Reconstruction & MetConSIN Simulation Genome-scale models for the 10 bacterial genomes were created using modelSEED<cit.> within the KBase computational platform <cit.>. The models were gap-filled with a complete media. The resulting models were used to test the MetConSIN simulation method, with models labeled according to the genome ID of the corresponding bacterial genomes. For the clarity of the network figures, we label the nodes corresponding to each model with the unique 1- or 2-digit integer that appears in the genome ID. <Ref> lists the IDs, classification, and node labels for the 10 models, and the supplemental file S2 Table contains details of the sequencing results. § RESULTS & DISCUSSION MetConSIN provides analysis of the dynamic flux balance analysis (DFBA) system by inferring a set of interaction networks from that system. To demonstrate this utility, we simulated the growth of 10 taxa isolated from soil using DFBA, and used MetConSIN to construct the series of interaction networks that the community behaved according to over the course of the simulation. <Ref> shows the simulated growth of genome-scale models of all 10 taxa in the simulated community on a finite media in an aerobic environment, all of which reached stationary phase when glucose was depleted. The community grew through a set of 3 distinct time-intervals, each with a corresponding species-metabolite and metabolite-metabolite network. These networks, as well as the time-weighted variance between them are shown in <ref> and <ref>. The two major transitions in the simulation both involved a series of basis-changes, meaning that one or more microbes altered their connectivity in the network. The first transition occurred when the model of genome bc1012 altered its connectivity three times in rapid succession. In the second transition, models of genomes bc1010, bc1002, bc1015, bc1003, bc1001, bc1009, and bc1012 all altered their network connectivity, with bc1015 and bc1001 doing so twice. We display which microbe changed its network connectivity at each transition with the style and color of the dashed lines in <ref> that indicate the time-points at which the transitions occurred. The two sets of networks provide a mechanistic explanation of the microbial growth and metabolic activity of the community. These networks tell us which microbes are consuming and producing environmental metabolites, as well as how the environmental metabolites effect cell growth. For any time interval, an edge from a metabolite to a microbe has a non-zero edge weight if and only if the simulated growth rate of the microbe is a function of the concentration of the metabolite during that interval. Inspection of the network reveals that only a few such edges exist, even though many metabolites are depleted by microbes. This is because only a subset of the constraints of flux balance analysis determine the growth rate, as indicated by the basic index set that is used to solve DFBA. In other words, only rate-limiting metabolites appear as source nodes in the network. 
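As a small illustration of the last point, the sketch below reads off the rate-limiting metabolites for each microbe as the non-zero metabolite-to-microbe edges of one interval's network. The edge-record format and the numbers are assumptions made for this example, not MetConSIN's actual output schema.

# Hypothetical edge list for a single time interval: (source, target, weight).
edges = [
    ("D-Glucose", "bc1012", 0.42),     # rate-limiting: growth depends on glucose
    ("O2",        "bc1012", 0.10),
    ("bc1012",    "Fumarate", -0.31),  # consumed, but not a growth determinant
    ("bc1012",    "Succinate", 0.27),  # produced as a byproduct
    ("D-Glucose", "bc1001", 0.35),
]

def rate_limiting(edges, microbes):
    """Map each microbe to the metabolites its growth rate depends on."""
    limiting = {m: [] for m in microbes}
    for src, dst, w in edges:
        if dst in limiting and w != 0.0:
            limiting[dst].append(src)
    return limiting

print(rate_limiting(edges, microbes={"bc1012", "bc1001"}))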
Inspecting the network can reveal interesting time-dependent interactions. For example, we notice in <ref> that for t∈ [0.24,0.42), community metabolism of D-Glucose causes consumption of Fumarate and production of Succinate. This interaction is very strong during this interval, which lies between two time-points at which the model of genome bc1012 changes its network connections, so we might guess that this interaction is mediated by that model. MetConSIN provides edge data for each edge in the network, including in the case of the metabolite-metabolite networks which microbe mediated the interaction. Inspection of this output reveals that, indeed, the model for genome bc1012 mediates the interactions between D-Glucose, Fumarate, and Succinate. MetConSIN's analysis provides an avenue for using dynamic FBA to infer how microbes interact and how these interactions vary with community composition and over time. For example, we can infer from MetConSIN that the ten taxa whose genomes we isolated from soil behave antagonistically due to competition for resources, as seen in <ref> (a) and (b). MetConSIN's microbe-microbe interactions are based on a simple heuristic meant to identify competition and cross-feeding. This works well if an interaction between two microbes is based on a single metabolite, but simply summing the interactions is likely not the best approximation. In future work, we plan to define a more rigorous simplification of the metabolite-mediated system as a direct microbe interaction system and characterize the error of this simplification. Our ten-member community showed only negative interactions in part because the genome-scale models that we used include the core metabolism of each taxa, making competition easy to identify, but do not include many details on the production of secondary metabolites. Secondary metabolites are compounds produced by bacteria that do not have a direct role in cell growth, but can have a profound impact on community organization <cit.>. Genome-scale modeling often focuses on the core metabolism and growth of an organism, meaning that these metabolites are often missing. This omission is a major challenge for any method that seeks to use GSMs to study microbial ecology. For MetConSIN to incorporate interactions mediated by secondary metabolites, the GSMs used must already include pathways that produce these metabolites. Furthermore, FBA constraints must be carefully chosen so that models do not simply ignore secondary metabolites in favor of immediate growth. As genome-scale models improve to include secondary metabolite production, MetConSIN can likewise be improved to infer interactions from secondary metbaolites. We observe antagonistic interactions in all of the subsets of the community that we simulated in isolation, but the strength of the competition may vary in with different community composition. Indeed, <ref> (c) shows that the implied relationships emerging from competition for resources are not the same in a five-model subset of the community as when these five models are simulated as part of the larger community of 10 models. The species-metabolite and metabolite-metabolite networks provided by MetConSIN offer mechanistic insight into the metabolic activity of microbial communities, including identification of how metabolic connections change with community composition. 
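The kind of edge-metadata query described above (which microbe mediates a metabolite-metabolite edge, and in which time interval) can be sketched as follows. The record structure here is hypothetical and chosen only to illustrate the idea; MetConSIN's actual output files may organize this information differently.

# Hypothetical metabolite-metabolite edges annotated with mediator and interval.
met_met_edges = [
    {"source": "D-Glucose", "target": "Succinate", "mediator": "bc1012",
     "interval": (0.24, 0.42), "weight": 0.8},
    {"source": "D-Glucose", "target": "Fumarate", "mediator": "bc1012",
     "interval": (0.24, 0.42), "weight": -0.6},
    {"source": "D-Glucose", "target": "Acetate", "mediator": "bc1001",
     "interval": (0.42, 0.61), "weight": 0.3},
]

def edges_mediated_by(edges, microbe, t):
    """Return edges mediated by `microbe` in the network active at time `t`."""
    return [e for e in edges
            if e["mediator"] == microbe and e["interval"][0] <= t < e["interval"][1]]

for e in edges_mediated_by(met_met_edges, "bc1012", t=0.30):
    print(f'{e["source"]} -> {e["target"]} (w = {e["weight"]:+.2f})')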
In <ref>, we investigate how the strengths of the various connections one model, bc1001, had in networks produced by MetConSIN for various communities involving bc1001. For example, when grown in simulated coculture with bc1016 and bc1009, bc1001 tended to form weaker network connections than when grown in other combinations. Interestingly, when bc1001 was grown in simulated coculture with bc1015 and bc1009, it formed stronger connections compared to when simulated with bc1016 and bc1009, even though switching bc1015 for bc1016 had little effect in other combinations. These connection differences are a possible mechanistic explanation for differential metabolic activity between communities, and suggest a course of further investigation into the metabolic impact of the various combinations of modeled organisms. DFBA provides a model for the population dynamics of microbial communities by leveraging genomic data. This means that dense time-longitudinal data is not required for simulation. Despite this important advantage, the usefulness of the DFBA model is limited. This is because a thorough qualitative analysis of the resulting dynamical simulation is often impractical due to the system's complexity. MetConSIN achieves an important step forward in analyzing DFBA simulations by organizing the complexity of DFBA into a sequence of interaction networks, which are more familiar and readily understood. This tool therefore gives researchers the power to infer important characteristics of the dynamic metabolic activity of a microbial community from genomic data. MetConSIN depends on dynamic flux balance analysis and the genome-scale metabolic models that define that system. While this does mean that MetConSIN is essentially limited by the quality of the GSMs used, it also means that MetConSIN provides a method by which to assess the quality of these models. With high-quality GSMs, MetConSIN provides the ability to create qualitative predictions about community metabolic activity which can be used to generate testable hypotheses. MetConSIN can created testable hypotheses about the (1) resource competition and (2) community assembly dynamics in our synthetic communities. Furthermore, with MetConSIN, the accuracy of these hypotheses can be used to judge the usefulness of and ways to improve the underlying GSMs. Ultimately, MetConSIN provides a rigorous interpretation of DFBA that emerges directly from the dynamics of the system. This tool is an important step in increasing the utility of genomic data and COBRA methods in the study of microbial communities and their impact on their environment. § ACKNOWLEDGEMENTS The authors would like to acknowledge the technical assistance of Thomas C. Biondi in this work. This work was supported by the U.S. Department of Energy Biological System Science Division, through a Science Focus Area Grant (2019SFAF255). § SUPPORTING INFORMATION *S1 Code. MetConSIN repository. MetConSIN is available on github at <https://github.com/lanl/metconsin>. *S2 Table. Details of soil isolate sequencing experiments. File name . *S3 Algorithm Details. Technical details of SurfinFBA. File name .
http://arxiv.org/abs/2307.03197v1
20230704003712
Analyzing the vulnerabilities in SplitFed Learning: Assessing the robustness against Data Poisoning Attacks
[ "Aysha Thahsin Zahir Ismail", "Raj Mani Shukla" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Analyzing the vulnerabilities in SplitFed Learning: Assessing the robustness against Data Poisoning Attacks Aysha Thahsin Zahir Ismail and Raj Mani Shukla Computing and Information Science, Anglia Ruskin University, Cambridge, UK {az303, raj.shukla}@aru.ac.uk ======================================================================================================================================================================================== Distributed Collaborative Machine Learning (DCML) is a potential alternative to address the privacy concerns associated with centralized machine learning. Split Learning (SL) and Federated Learning (FL) are two effective learning approaches in DCML. Recently there has been increased interest in the hybrid of FL and SL known as SplitFed Learning (SFL). This research is the earliest attempt to study, analyze and present the impact of data poisoning attacks in SFL. We propose three novel attack strategies for SFL, namely untargeted, targeted and distance-based attacks. All the attack strategies aim to degrade the performance of the DCML-based classifier. We test the proposed attack strategies for two different case studies on Electrocardiogram signal classification and automatic handwritten digit recognition. A series of attack experiments were conducted by varying the percentage of malicious clients and the choice of the model split layer between the clients and the server. The results of the comprehensive analysis of attack strategies clearly convey that untargeted and distance-based poisoning attacks have a greater impact on corrupting the classifier outcomes than targeted attacks in SFL. Federated Learning, SplitFed Learning, Data poisoning § INTRODUCTION Artificial Intelligence (AI) and Machine Learning (ML) are being deployed by a wide range of organizations worldwide, from governments and massive tech companies to small internet retailers. 83% of the tech industry utilizes AI-powered technologies for developing applications <cit.>. With significant improvements in productivity and performance, ML can impart efficiency in several domains such as product recommendation, biomedical image classification, computer vision, and natural language processing <cit.>. Most ML applications employ supervised ML models <cit.>. The performance of ML models in actual application scenarios depends on the quality of training data. To achieve improved model performance and accuracy, machine learning systems require a huge amount of high-quality training samples, which might be split among various groups <cit.>. In addition, it is often difficult to obtain labelled training samples <cit.>. Further, gathering all the training samples on a centralized server raises several privacy concerns, especially in the presence of sensitive information. Several privacy-governing regulations such as the General Data Protection Regulation (GDPR) must be complied with while aggregating private data into a central server. Distributed Collaborative Machine Learning (DCML) is a potential alternative that enables multiple participants to collaboratively train a shared global model while keeping their training data local. This technique allows the participants to share updates to the global model without exposing their local training data <cit.>. Federated learning <cit.> and Split learning <cit.> are DCML approaches that resolve the privacy issues in centralized ML.
In Federated Learning (FL), multiple clients train an entire machine learning model with their local training samples and further, the locally trained models of all clients are aggregated to obtain a global model at the server. Though FL prevents sharing of local data, it is not viable when clients have limited resources for computing large ML models. In addition, both the server and clients can access local and global models affecting the privacy of clients training data and model parameters of the server. Additionally, communication delays, the presence of heterogeneous systems in distributed learning, and data dynamism are other challenges experienced by FL with multiple clients. Split learning (SL) was introduced to overcome these issues such that resource constraints and model privacy by splitting the ML model between the client and the server. SL ensures that the client and server will have access to a portion of their split of the whole ML model <cit.>. However, SL is not ideal in the presence of many clients as it can train only one client at an instance which eventually idles other clients and leads to longer training time <cit.>. SplitFed Learning (SFL) is an advanced DCML paradigm that resolves the issues caused by FL and SL. SFL has a hybrid architecture where the model is split as in SL which overcomes limited client resources followed by parallel computation as in FL to mitigate the training overhead that occurs during the presence of a large number of clients <cit.>. Various studies have evaluated the security of FL. FL is prone to model poisoning attacks that manipulate gradients to minimize accuracy <cit.>; <cit.>. Inference attacks, which attempt to recreate private data from the client or server, have a substantial impact on the security of SL <cit.>. However, there exists minimal research analyzing the susceptibility of SFL against adversarial attacks where training data is spread among multiple clients. This work examines how a malicious client can initiate data poisoning attacks in the SFL system. Data poisoning attacks attempt to manipulate training data that eventually influences the learning output of the trained model. Data poisoning attacks broadly take the form of clean label poisoning and dirty label poisoning where the former injects tampered data into the train set and the latter manipulates the training labels such as label flipping <cit.>. Accordingly, the major contribution of this paper can be summarized as follows: * This research proposes targeted, untargeted and distance-based data poisoning attacks on SFL to evade the aggregated model outcomes. * The research tests the proposed targeted, untargeted, and distance-based data poisoning attack strategy for two case studies – 1) hand-written digit classification (standard MNIST dataset) and 2) a novel application of ECG signal classification for arrhythmia heartbeat type using SFL. * This paper conducted an extensive study on the proposed attacking strategies on SFL varying the proportion of model split and malicious client percentage in MNIST and health care ECG signal datasets. To the best of our knowledge, this paper is the first attempt – i) to employ privacy-preserving SFL for the automated classification of ECG signals, ii) in attacking SFL using targeted and untargeted data poisoning strategies for the proposed ECG classification problem iii) in assessing SFL's sensitivity to a novel distance-based attacks. 
The rest of this paper is organized as follows: Section <ref> presents a comprehensive discussion regarding the existing literature. Section <ref> provides a quick overview of the different DCML techniques. Section <ref> presents the proposed attack techniques in SplitFed Learning. Section <ref> discusses the implementation specifics, including the system architecture, datasets, and set-up for a poisoning attack, while section <ref> outlines the results and the performance of the poisoning attacks with respect to two case studies. Finally, the section <ref> concludes this paper. § RELATED WORK Numerous adversarial attacks are experienced by federated learning mainly poisoning attacks and information extractions <cit.>. Model poisoning and data poisoning attacks are the major security threats encountered by FL. In Data poisoning attacks, the attacker introduces malicious data samples into training data changing their primary meaning before the training phase leading to incorrect results. In contrast, Model poisoning attacks manipulate the machine learning model rather than the data changing overall learning outcomes <cit.>. Most of the adversarial backdoor attacks in FL manage to manipulate the local client update or the training data among the edge devices <cit.>. Lyu, et al. (2020) classified malicious actors in FL into 3, namely malicious server, insider, and outsider adversary. The impact of poisoning attacks by untrusted participants critically damaged the performance of FL <cit.>. Besides the poisoning attacks, the security of FL is challenged by inference attacks induced by dishonest or malicious servers. These servers are capable of learning and extracting clients’ private data using their gradient updates <cit.>. Similar to FL, there are several privacy threats to SL mainly due to training data inference from the intermediate representation generated by smashed data, label leakage of client data, and client model inversion <cit.>. Pasquini et al. (2021) implemented a Feature-Space Hijack Attack (FSHA) on the SL model in which an untrusted server retrieves the private data of a client that is used to train the model. The hijack occurs in two phases: the setup phase, during which the server seizes the client training process, and the inference phase, during which the server obtains the client training data using the smashed data received from the client. Erdoğan, et al. studied the possibility of model stealing in SL and formulated a stealing attack that can cause client model inversion <cit.>. In a Two-Party Split Learning, there will be a data owner (client) and label owner (server), and private label leakage attacks take place in this setting when any external adversary or clients attempt to infer the private labels <cit.>. Li, et al. presented a label leakage attack by analysing the gradient norm of imbalance classes present in the training set <cit.>. Several distance correlations and differential privacy strategies were implemented to improve the security in SL <cit.>; <cit.>. Hence, combining FL and SL may fully leverage the strengths of both learning approaches while minimising their individual limitations. SFL is a hybrid DCML architecture that is a combination of FL and SL where it combines the parallel training/testing of client-side models as observed in FL and the model split between client and server as performed in SL. The SFL systems consist of client and server segments along with an additional server called the fed server on the client side. 
The fed server is used to perform FedAvg aggregation algorithm on the updates provided by the client and is responsible for synchronising the global model updates of multiple clients <cit.>. Each client in SFL parallelly performs forward propagation on the client-side model split with their local training data until the cut layer, server then proceeds with the forward and backward propagation as in SL and sends the updated gradients to all of its clients in parallel. Further, each client completes the backward pass on their client-side model, and updates are forwarded to the fed server. Fed server conducts FedAvg on the updates from all clients resulting in a client-side global model and the parameters redirected to all clients <cit.>. SFL effectively addresses the difficulties that FL and SL encounter and provides greater privacy than FL. Yet, there is a major possibility of data poisoning attacks while collaboratively training among distributed clients and a server with SFL. Malicious participants can induce poisonous data during the training process that is difficult to detect by the aggregator. Motivated by the aforementioned analysis, this research introduces data poisoning attacks against SFL and tries to fill the research gap that studies the robustness of SFL. This paper proposes targeted, untargeted along with a novel distance-based attack strategy and performs a comparative analysis of the proposed attacks on MNIST and healthcare ECG signal datasets. Our work is the closest to <cit.>. However, in contrast to <cit.>, we analyzed the robustness of splitFed learning for the novel ECG classification problem. We also proposed a new distance-based attack technique in our work in contrast to <cit.>. § BACKGROUND This section provides a basic background of Federated Learning (FL), Split Learning (SL), and SplitFed Learning (SFL). §.§.§ Federated Learning The FL is pictorially represented in Figure  <ref>. The fundamental idea behind FL is collaborative training of ML models among distributed data holders. In a decentralized setting with multiple clients, each client has its local data and trains the complete ML model. After each training iteration, all clients transfer the updated weights obtained while computing forward and backward passes on its local model to a central server. FedAvg, is a commonly used aggregation algorithm that is employed by the server to achieve a global update for the ML model. This global update is passed on to the clients for the subsequent iteration. Instead of sharing raw client data for training as is seen in traditional centralized ML, FL only shares the parameters of the model with the server or other clients. Following that, FL lowers communication costs and eases the networking overhead involved in Internet of Things (IoT) services with various entities and limited resources <cit.>. §.§.§ Split Learning Figure <ref> presents the basic architecture of SL. SL <cit.> is a DCML approach that divides the ML/DL model between the client and server. The model layer at which the split occurs is referred to as the cut layer and the output generated is termed smashed data. Computations on initial model layers are performed by the client and the later layers are handled by the server, thereby local training data is kept private similar to FL. In SL sequential training is performed where each client performs forward propagation with its own model segment until the cut layer which is the last layer of the client-side model split. 
The smashed data (the activations at the cut layer) is received by the server, which continues to propagate forward with the model's remaining layers <cit.>. Once the forward propagation is completed, the server determines the loss and begins backpropagation. The gradient calculated up to the cut layer is passed on to the client to continue its backpropagation. This entire process is one training round, and the updates are sent to the next client <cit.>. §.§.§ SplitFed Learning The basic architecture of SFL is represented in Figure <ref>. As shown in the figure, SFL-based distributed client environments include a main server, a fed server, and a group of clients. The full model N is split into a client-side model N^C and a server-side model N^S. At each global epoch, all clients interact with the server in parallel and the main server aggregates the parameters to generate a global server-side model. The client model synchronization is carried out in parallel at the fed server. Considering k clients at time instance t, the client-side model of each client can be represented as N_k,t^C. The smashed data of each client at t is S_k,t. At t=0, each client k performs forward propagation of its model split and sends the activations S_k,t along with the true labels to the server <cit.>. The server, on receiving the smashed data, carries out forward propagation with its model split, computing the predicted labels ŷ and calculating the loss against the actual labels. Further, the server executes the global server-side model update and propagates the gradient back to the client. Simultaneously, when each client receives the backpropagated gradients from the server, its client-side update is sent to the fed server to be aggregated into a global client-side model update that is sent back to all k clients <cit.>. Table <ref> presents the basic notations of SFL. § METHODOLOGY: DATA POISONING ATTACKS IN SPLITFED LEARNING This section discusses the proposed methodology of data poisoning attacks on SFL. We discuss the threat model and the algorithms used to attack SFL-based DCML. §.§ Proposed Threat Model in the SplitFed System In SFL, participating clients share only the smashed data with the server segment of the model, which ensures the privacy of the client training data. As a result, none of the functional components in the framework verifies the quality and security of the training data. Due to this vulnerability, i.e., the unverified quality and security of client training data, the server holding its split of the global model becomes prone to data poisoning attacks from malicious clients in the client group. This paper's threat scenario considers the presence of a subgroup of malicious participants, i.e., a percentage of participants who are either malicious or under the control of a malicious adversary. The main objective of the malicious client or the adversary is to poison the training data and compromise the training efficiency. This is carried out by manipulating the training data through label perturbation. Figure <ref> illustrates data poisoning attacks by label flipping in the SFL model, in which one malicious client perturbs the label "circle" of a private training sample to the label "square", thus infecting the local model. In this paper, the training data is perturbed by novel targeted, untargeted, and distance-based label-flipping attack algorithms that cause the classifier to produce incorrect results.
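Before formalizing the attacks, the SFL training flow described above can be summarized in a short, self-contained PyTorch sketch: each client forward-propagates to the cut layer, the server completes the forward and backward passes and returns the gradient of the smashed data, and the fed server averages the client-side models. The toy architecture, random data and hyperparameters are illustrative and do not correspond to the models or settings used in the paper.

# One simplified SFL round (clients looped sequentially here; parallel in real SFL).
import copy
import torch
import torch.nn as nn

n_clients, batch, in_dim, n_classes = 3, 8, 20, 5
client_models = [nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU()) for _ in range(n_clients)]
server_model = nn.Sequential(nn.Linear(16, n_classes))            # server-side split
server_opt = torch.optim.SGD(server_model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def fed_avg(models):
    """Average client-side parameters into a global client-side model (FedAvg)."""
    avg = copy.deepcopy(models[0].state_dict())
    for key in avg:
        avg[key] = torch.stack([m.state_dict()[key] for m in models]).mean(dim=0)
    return avg

for client in client_models:
    x = torch.randn(batch, in_dim)                                # client's local batch
    y = torch.randint(0, n_classes, (batch,))                     # (possibly poisoned) labels
    client_opt = torch.optim.SGD(client.parameters(), lr=0.1)

    smashed = client(x)                                           # forward to the cut layer
    smashed_srv = smashed.detach().requires_grad_(True)           # "sent" to the main server
    loss = loss_fn(server_model(smashed_srv), y)                  # server forward + loss

    server_opt.zero_grad(); client_opt.zero_grad()
    loss.backward()                                               # server backward pass
    server_opt.step()                                             # update server-side split
    smashed.backward(smashed_srv.grad)                            # gradient returned to client
    client_opt.step()                                             # client-side update

global_client_state = fed_avg(client_models)                      # fed server aggregation
for client in client_models:
    client.load_state_dict(global_client_state)                   # synchronize all clients
print("round complete; last client loss =", float(loss))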
This work considers the following realistic assumptions for the data poisoning attacks: * This paper considers a realistic scenario where only a percentage of clients are considered malicious or controlled by an external adversary. Given a group of X clients, the adversary can take control of y% out of X clients. In this research, we evaluate the performance of SFL under varying percentages of malicious clients that match the practicalities of real-time distributed learning scenarios. * The paper assumes a realistic scenario where each malicious client can only manipulate its private training data. The adversary or the malicious client cannot influence the aggregation operation of the fed server to produce a global client-side update and does not have access to benign participants’ training data * This paper assumes an honest main server. The assumption of an honest main server is similar to studies that conducted client-side inference attacks <cit.>. §.§ Proposed Data Poisoning Attacks strategies In this work, the data poisoning attack strategy is implemented by poisoning the labels, that is perturbing the class labels that penultimately causes the trained model to generate incorrect predictions. In SFL, the malicious clients or the adversary trains the client-side model with poisoned training data and transmits the model parameters to the server subsequently influencing the training of the server-side model. Suppose that the given classification task contains L classification labels and l be the label that is targeted and replaced by label l'. Taking the case of this scenario, the attacks introduced in this paper are defined as follows: §.§.§ Targeted Poisoning Attacks In the proposed targeted attacks, the adversary selects the labels l of source class S_c that the adversary attacks and replaces it with labels l’ of a target class T_c provided (l, l’) ∈ L. Here only the label of S_c is manipulated, and the remaining class labels remain the same. Targeted poisoning attacks aim to reduce the accuracy of the classifier for the targeted source class and the accuracy of remaining non-targeted samples is not affected. Algorithm <ref> represents scenario of the targeted poisoning attacks on SFL. §.§.§ Untargeted Poisoning Attacks Proposed untargeted attacks do not target the label of a specific source class, instead, randomly flip a selected set of labels l with l’ where (l, l’) ∈ L. Untargeted attacks also flip all class labels to one random class label which drastically reduces the accuracy of the classifier. Untargeted attacks initiated by a set of malicious participants have a greater impact compared to targeted attacks due to the iterative submission of malicious parameters to the server. Algorithm <ref> depicts the untargeted poisoning attack. Untargeted attack attempts to degrade the performance of the classifier on the whole rather than the accuracy of a specific class. §.§.§ Distance-based Poisoning Attacks In the proposed distance-based poisoning attacks, the adversary optimizes and improves the efficiency of targeted attacks by careful selection of target class T_c, and the label of the selected T_c is used to replace the labels of source class S_c. To implement a distance-based attack the adversary initially selects a source class S_c with the label l and calculates the Euclidean distance between samples of S_c and other training samples where l and l’ are not equal. 
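A minimal sketch of the targeted and untargeted label-flipping strategies just described is given below (the distance-based variant continues in the next paragraph). The label vector and class indices are invented, the untargeted variant follows the "flip all labels to one class" reading used later in the attack setup, and the code is an illustration rather than the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 5, size=20)          # a malicious client's local labels, L = 5 classes

def targeted_flip(y, source, target):
    """Replace every label of the source class with the target class label."""
    y = y.copy()
    y[y == source] = target
    return y

def untargeted_flip(y, n_classes, target=None):
    """Flip all labels to one (optionally random) class label."""
    y = y.copy()
    if target is None:
        target = rng.integers(0, n_classes)
    y[:] = target
    return y

print("clean     :", labels)
print("targeted  :", targeted_flip(labels, source=2, target=4))
print("untargeted:", untargeted_flip(labels, n_classes=5))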
The Euclidean distance calculates the distance between input samples, provided the input samples are real-valued vectors. Further, the training sample with maximum distance is selected and its corresponding class is chosen as the target class T_c. The label of this selected target class is used to poison the training samples of S_c. The usage of maximum distance is to increase the success rate of the attack. When the source class label is replaced with the label of the sample having a maximum distance, the impact of a poisoning attack is increased. In a multi-class classification problem, the adversary can select different source class S_c in different trials and initiate attacks by computing the Euclidean distance and identifying target class T_c with maximum. Algorithm <ref> represents distance-based data poisoning attacks that enhance the risk of the ML model since they have a higher impact than targeted poisoning attacks. This type of attack represents a potential threat to SFL in real-world applications. § IMPLEMENTATION This section discusses the implementation details of the proposed attack methods. We describe the datasets involved used for the research, the model architecture used for training, the experiment setup for SFL, and data poisoning attacks. §.§ Dataset We test the proposed methodology using two differnt case studies as mentioned below: §.§.§ Case study 1 - Automatic handwritten digit recognition (MNIST Dataset) The MNIST dataset is a benchmark dataset for ML and DL classifiers made up of handwritten digits. The dataset consists of 60,000 grayscale images for training and 10,000 grayscale images for testing where the images belong to 10 different classes labelled as ‘0’-'9' Each image is of size 28x28 pixels or 784 features in total <cit.>. §.§.§ Case Study 2 - ECG signal classification (ECG Dataset) Automatic classification of ECG signals to detect arrhythmia types rules out the need for manual signal analysis by physicians and enables easy monitoring of heart conditions. For this application, the MIT-BIH Arrhythmia dataset that consist of of ECG signals to classify ECG signals for arrhythmia heartbeat types is used <cit.>. This standard database contains 48 records, where each record has ECG signals obtained from two separate channels. Each record lasts 30 minutes selected from 24 hours. Following the analysis of ECG signal processing (<cit.>; <cit.>; <cit.>), in this study, 26,490 samples were gathered, and samples also complied with the classification criteria as defined by Association for the Advancement of Medical Instrumentation (AAMI) <cit.>. The collected samples represent 5 different classes of heartbeat types provided in Table <ref>. Of the total samples half of them are selected randomly to train the model and the remaining are used for the testing process. §.§ Model Architecture We use a 1-Dimensional fully connected dense neural network for the MNIST dataset and a convolutional neural network (1D-CNN) for the ECG dataset. Table <ref> presents the details of the model architectures. It consists of four convolutional layers with a ReLU activation function, two max-pooling layers, two fully connected dense layers, and a SoftMax activation function that classifies the outputs into one of five categories of arrhythmia heartbeat types. To train the MNIST dataset, a deep feed-forward network is employed which contains an input layer and 10 dense layers. 
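The distance-based selection just described can be sketched as follows. The text does not fully specify how distances over the set of source-class samples are aggregated, so this illustration takes the globally farthest source/non-source sample pair; the data are synthetic and the snippet is not the authors' implementation.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 16))                 # a malicious client's local features
y = rng.integers(0, 5, size=200)               # and labels (5 classes)

def distance_based_flip(X, y, source):
    """Flip source-class labels to the class of the farthest training sample."""
    src_mask = y == source
    Xs, Xo, yo = X[src_mask], X[~src_mask], y[~src_mask]
    d = np.linalg.norm(Xo[:, None, :] - Xs[None, :, :], axis=-1)   # pairwise distances
    farthest = np.unravel_index(np.argmax(d), d.shape)[0]          # non-source index in max pair
    target = yo[farthest]
    y_poisoned = y.copy()
    y_poisoned[src_mask] = target
    return y_poisoned, target

y_poisoned, target = distance_based_flip(X, y, source=0)
print("labels of source class 0 flipped to class", int(target))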
The final classification layer has a ReLU activation function associated with it to classify the input sample to one of 10 classes in the dataset. The input layer in the deep feed-forward network is similar to the input layer in 1D-CNN, it receives the input sample from the dataset. The size of the input received in this work for the MNIST dataset is 784 as the images are of size 28x28 pixels. §.§ SFL Setup For the MNIST dataset, the SFL scenario is defined to have one server and ten clients. The 60,000 training images of the MNIST dataset are partitioned equally among ten clients, where each client has 5000 training records and 1000 validation and testing records. The remaining 10,000 test images are unseen and used for evaluation purposes. The total number of training epochs was finalized as 40 after observing the model convergence rate for different epoch values. The experimental setup of SFL for the ECG dataset has one server and five clients. Each client receives distinct and equal batches of data from the train set. The data in the test set is excluded from the training data and it is used for model performance evaluation. The total training epochs are set to 50 as the model convergence is observed in fewer rounds than 50 training epochs. §.§ Data Poisoning Attack Setup In this paper, to introduce data poisoning attacks only y% of X clients are assigned malicious or controlled by an external adversary. The proposed targeted, untargeted, and distance-based attacks were proposed with different percentages of malicious clients for both datasets. For untargeted attacks, all the labels of malicious clients were manipulated and replaced with a class label that has the highest test accuracy in the SFL system. In the case of targeted and distance-based attacks, the selection of source class S_c depends on the success of the poisoning attack. In the SFL setting, compromised clients have access to global client-side model updates from the fed server. The malicious client can initiate a poisoning attack for different source classes in a multi-class classification problem and evaluate the impact of the attack that degrades the performance of the classifier. In the proposed targeted poisoning attack, the source class S_c is selected as the class that has the highest percentage of correctly identified samples by the classifier. By manipulating the labels of that class with the target class T_c that has the second highest percentage of correctly classified samples. The source class S_c is chosen for distance-based poisoning attacks in an analogous way to targeted attacks. Euclidean distance is computed between inputs that have the label as source class S_c and other inputs. After measuring the distance, the input that has the label as source class S_c is replaced with the label of the input that has a maximum distance. In order to increase the impact of the attack, experiments were carried out with different model splits between the client and the server. In the 1D-CNN model for the ECG dataset, the model was split in two positions. At first, the model was split at the second convolutional layer forming two layers for the client segment and four layers for the server segment. Secondly, the model splits at the third convolutional layer forming three layers for the client and three layers for the server segment. The first and the second model splits are called ECGv1 and ECGv2 respectively. 
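As an illustration of how a network is divided at a cut layer into client- and server-side segments (analogous to the ECGv1/ECGv2 splits above), the following sketch splits a toy 1D-CNN. The layer sizes and the assumed 186-sample beat length are illustrative rather than the paper's exact architecture.

import torch
import torch.nn as nn

full_model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(8, 8, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 93, 5),        # 5 heartbeat classes; 186-sample beats assumed
)

def split_at(model, cut):
    """Return (client_segment, server_segment) split after layer index `cut`."""
    layers = list(model.children())
    return nn.Sequential(*layers[:cut]), nn.Sequential(*layers[cut:])

client_net, server_net = split_at(full_model, cut=4)   # analogous to an "early" split
x = torch.randn(2, 1, 186)                              # two dummy ECG beats
smashed = client_net(x)                                 # activations at the cut layer
print(smashed.shape, server_net(smashed).shape)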
Similarly, the deep feed-forward network for the MNIST dataset was also split at two positions, at the second dense layer termed MNISTv1 and at the fourth dense layer referred to as MNISTv2. In the former split, the first two layers form the client-side model, and the remaining eight dense layers belong to the server segment. In the latter split, there will be four layers on the client-side model and six layers on the server-side model. § RESULTS AND DISCUSSION This section examines the effect of data poisoning attacks on MNIST and ECG datasets and the impact of varying the cut-layers. §.§ Effects of Data Poisoning Attacks This section describes the results of data poisoning attacks on two independent case studies. The effects of targeted, untargeted, and novel distance-based poisoning attacks were examined for each of them. Table <ref> presents the accuracy and accuracy drop (A_d) in percentage for the two case studies and under different percentages of malicious clients. As seen in the Table, the model's accuracy is greatly reduced due to the untargeted poisoning attack. In the presence of a maximum number of malicious clients, the value of the accuracy drops down to 33.87% from 88.87% which results in a 61.89% drop in accuracy for ECGv1. For ECGv2, the success of the attack is even higher resulting in 71.31% depletion in accuracy. In ECGv2, it is observable that a small percentage of malicious clients can drastically reduce the accuracy of the model. Thus, the accuracy for MNISTv1 decreased from 96.46% to 89.86%. For MNISTv2, a bigger variance in accuracy is seen. When there are 20% malicious clients present, accuracy falls to 86.06%. The success of targeted attacks is low as compared to untargeted attacks. This is due to minimal perturbation in the training samples. The training data of malicious clients contain less corrupt data compared to untargeted scenarios, thereby causing the accuracy to drop by not more than 7% in either of the split versions. The accuracy after distance-based attacks is worse than targeted attacks, causing accuracy to drop up to 11.48% in ECGv1 and 15.11% in ECGv2. However, the overall accuracy depletion is more for distance-based attacks compared to targeted attacks. Similar to distance-based attacks induced in the ECG dataset, here the adversary targets a specific class. By manipulating class labels with distance measures, the maximum drop in accuracy is 5.89% in MNISTv1. In MNISTv2 the value of A_d is 8.26%. We also compare the vales of the precision (P), Recall (R), and F-score (F) as provided in the tables <ref>-<ref>. The table shows the metrics values for different classes. As expected, the metrics change a lot due to the different types of proposed attacks. For example, the precision for the ECGv1 model decreases from 40% to 10% for category 1. It should be noted that ECG classification is greatly impacted by the attacks as compared to the MNIST classification data. For example, F-score is only 1% for category 2 in the ECGv2 model. It should be noted that an attacker can adopt various strategies to affect the performance of the model according to their choice and based on their motive. An attacker can perform untargeted attacks that affect the overall performance of the splitFed-based systems and affect its reliability. Thus, the global model is not able to achieve good performance for any of the classes. The attacker could employ targeted attacks thus affecting the performance of only a specific class rather than the whole global model. 
Thus, although it will have higher accuracy, it will induce unfairness in the system as the global model would tend to predict only a specific class. Similarly, the distance-based attack is a compromise between the two such that it depletes the accuracy as well as impacts specific classes thus inducing unfairness in the system. §.§ Impact of Changing Cut Layers The layer at which the model is divided between the client and server in the SFL has a serious influence on how effective poisoning attempts are. Attack intensity also varies with different cut layer choices. It is evident from the numerical data in both case studies that the poisoning attack on MNISTv2 and ECGv2 is more effective since these versions produce greater values of A_d. The reason for this is that there are now more layers in the client segment, giving the adversary greater room to initiate a more powerful and efficient attack. However, with a smaller number of model layers on the client segment, the model's overall accuracy is not greatly affected. Figure <ref> and <ref> depicts the relationship between accuracy drop A_d and cut layer observed from the experimental results of the two case studies. §.§ Accuracy Depletion with Changing Percentage of Malicious Clients The percentage of malicious clients plays a vital role in degrading the model accuracy during data poisoning attacks. Increasing the value of malicious clients in the SFL setting can drastically reduce accuracy. Considering the possibilities of a practical scenario, it is not ideal to have a large number of malicious clients in the SFL system. In this paper, the depletion of accuracy is studied with a varied percentage of malicious clients. The results of the two case studies make it clear that even with 10% of malicious clients, the accuracy value falls to a certain level. In untargeted attacks, the higher the percentage of malicious clients, the higher the value of accuracy drops. 40% of malicious clients in ECGv2 causes the accuracy to drop from 88.89% to 26.50%. As expected, the results of all three attack strategies clearly conclude that increasing the percentage of malicious clients contributes to the success of data poisoning attacks. After the critical analysis of experimental results, untargeted attacks have a significant impact on the classifier results. However, an attacker can still adopt targeted or distance-based attacks to reduce the classifier performance for a specific class. By adopting this strategy, it is possible to initiate attacks that cannot be directly detected while still maintaining better accuracy. This can degrade the classifier performance for one specific class chosen by the adversary. § CONCLUSIONS This paper is the initial attempt to investigate the effectiveness of various types of data poisoning attacks against SFL. The performance of the attack strategy is evaluated under several factors such as the number of split layers between the client and server and varying percentages of malicious clients in the SFL setting. An important indicator that shows how accuracy decreases with attack intensity is the value of accuracy drop. Distance-based data poisoning attacks have higher efficacy than targeted attacks. The highest value of accuracy drop resulted due to distance-based attack is 8.26% for the MNIST dataset and 15.11% for the ECG dataset. Furthermore, it can be concluded the SFL is more vulnerable to untargeted attacks which deteriorate the overall performance of the classifier. 
In addition, SFL is more susceptible to distance-based data poisoning attacks than to conventional targeted poisoning attacks. It should be noted that an attacker can employ targeted or distance-based attacks to lower the performance of the classifier for a particular class, thereby launching attacks that are difficult to detect immediately yet maintain a good overall accuracy. As a result, the performance of the classifier for a selected class is degraded. This research revealed the risk and vulnerability of SFL through the empirical results obtained after inducing data poisoning attacks with malicious clients.
http://arxiv.org/abs/2307.01965v1
20230705002922
An analysis of scam baiting calls: Identifying and extracting scam stages and scripts
[ "Ian Wood", "Michal Kepkowski", "Leron Zinatullin", "Travis Darnley", "Mohamed Ali Kaafar" ]
cs.CR
[ "cs.CR" ]
Network and Distributed System Security (NDSS) Symposium 2024 26 February - 1 March 2024, San Diego, CA, USA ISBN 1-891562-93-2 https://dx.doi.org/10.14722/ndss.2024.23xxx www.ndss-symposium.org An analysis of scam baiting calls: Identifying and extracting scam stages and scripts. Ian D. Wood Macquarie University ian.wood@mq.edu.au Michal Kepkowski Macquarie University michal.kepkowski@students.mq.edu.au Leron Zinatullin Macquarie University Travis Darnley Macquarie University Mohamed Ali Kaafar Macquarie University, Australia dali.kaafar@mq.edu.au August 1, 2023 ============================================================================================================================================================================================================================================================================================= Phone scams remain a difficult problem to tackle due to the combination of protocol limitations, legal enforcement challenges and advances in technology enabling attackers to hide their identities and reduce costs. Scammers use social engineering techniques to manipulate victims into revealing their personal details, purchasing online vouchers or transferring funds, causing significant financial losses. This paper aims to establish a methodology with which to semi-automatically analyze scam calls and infer information about scammers, their scams and their strategies at scale. Obtaining data for the study of scam calls is challenging, as true scam victims do not in general record their conversations. Instead, we draw from the community of “scam baiters” on YouTube: individuals who interact knowingly with phone scammers and publicly publish their conversations. These cannot be considered true scam calls; however, they do provide a valuable opportunity to study scammer scripts and techniques, as the scammers are unaware that they are not speaking to a true scam victim for the bulk of the call. We applied topic and time series modeling alongside emotion recognition to scammer utterances and found clear evidence of scripted scam progressions that matched our expectations from close reading. We identified social engineering techniques associated with the identified script stages, including the apparent use of emotion as a social engineering tool. Our analyses provide new insights into the strategies used by scammers and present an effective methodology to infer them at scale. This work serves as a first step in building a better understanding of phone scam techniques, forming the groundwork for more effective detection and prevention mechanisms that draw on a deeper understanding of the phone scam phenomenon. § INTRODUCTION Phone scams, sometimes referred to as voice phishing or ‘vishing’, are a form of social engineering attack that leverages the telephone system. Scammers generate persuasive scenarios to convince victims to share personal information or pay, at times, substantial sums of money. These scenarios can, e.g., impersonate authoritative sources such as tax or law enforcement agencies, offer seemingly free gifts, or present an imagined threat such as a hacker accessing your bank details. The prevalence of phone scams has increased dramatically in recent years <cit.>, flagging an urgent need for effective methods to combat them. Reports show that people suffer significant losses due to scams, losing, for example, $1.2 billion to impostor scams in the US in 2020 alone <cit.>.
Victims of successful scams often feel embarrassment, guilt and shame, which is believed to contribute to the under-reporting of fraud cases <cit.>, with some demographics appearing disproportionately affected <cit.>. The problem of phone scams remains difficult to solve due to existing telecommunication system limitations (e.g. the existence of legacy phone networks in the global phone system that do not support modern security measures) and techniques and technology used by bad actors (e.g. VoIP and Caller ID spoofing) <cit.>. Scammers also frequently operate outside of the victim’s jurisdiction making it difficult to address from a legal and law enforcement perspective. Considering scammers' monetary gains and difficulties with securing the telephone network, scam calls will likely remain a popular choice for criminals. Maintaining up-to-date data about the ongoing scam campaigns, scammer behavior and the techniques they employ is challenging and yet crucial to design and maintain effective defensive approaches. Data acquisition needs to be at once agile, able to adapt to the ever-changing scam landscape, to provide deep insights into the techniques employed in current scams and to do all this at scale. One particular application of such insights, and the initial motivation of this study, is the creation of conversational AI bots that masquerade as susceptible scam victims. Recent advances in conversational AI allow for fluent generation of language, however they still struggle with situational awareness, hence incorporation of live contextual knowledge in to such models promises to improve their ability to act as convincing scam victims. In this study we introduce a semi-automated framework to analyze scam calls and infer information about scammers, their scam approaches and strategies at scale. We demonstrate our framework on a sample of recordings obtained from public sources, identifying insightful details on how scammers operate. To obtain scam transcripts, we searched YouTube for videos mentioning "scam call" or "scam call recording" as well as trawling the `scambaiting' YouTube tag. We transcribed the audio of the resulting videos using a commercial automated transcription service and further cleaned the data to remove irrelevant transcripts and sections of the transcripts that are not part of the scam call, leaving us with 341 transcripts totaling 90 hours of scam calls to analyze. Approximately 60% of the 825 originally collected videos either did not contain actual conversations with scammers or were deemed of insufficient quality (e.g.: containing only short disconnected snippets of scam conversation). Though this data does not constitute true conversations between scammers and their victims and represents a small sample of primarily US scam types, it does allow us to analyze scammer methodologies, including the scripts they follow and particular social engineering techniques they apply. Scam baiters deliberately draw out the call and present challenging personas to the scammers, providing a rich view on how the scammers themselves behave in a diversity of situations. Further, the actions of scam baiters and responses of scammers are a perfect match for the purpose of training and informing AI “victim bots”, our initial and primary motivator. It must be pointed out that scam baiters are not true victims, and though they may attempt to pose as such, likely do not present many behaviors that a true victim may exhibit. 
Thus our study can provide insights into the scripts and techniques used by scammers and some scammer behaviors, but cannot be considered comprehensive. The framework begins with the identification of phone scam types present in available data. We manually label all transcripts, identifying four main scam types and 3 further scam types with a marginal presence. Identifying the type of scam being undertaken in a call can be useful for gaining a deeper understanding of the mechanisms and techniques used by scammers. We then demonstrate recognition of the type of scam given a relatively small sample of annotated scams and simulated scam type recognition in a live call setting by limiting the number of utterances available to the recognition model. We found that identification of the scam type of a call is effective with just one utterance (80-90% accuracy) and highly accurate with 5 or 6 utterances (92-98%). Next, we propose an analysis of the content of each type of scam. We apply a contextualized topic model <cit.> and a publicly available emotion detection model to scammer utterances. We find that the topic model is able to identify scam themes observed during close reading of scam transcripts and note relatively subdued emotions on the part of the scammer. During close reading of the transcripts, we observed consistent sequences of scam stages, with the scammers appearing to follow pre-defined scripts. In order to verify this observation, and as a further tool to automate the analysis and understanding of scammer techniques and the mechanisms they leverage, we apply a Hidden Markov Model (HMM) to the outputs of the topic model. HMMs attempt to uncover simple state transition processes underlying complex sequences of observations. We found that the HMM models are able to capture the structure and flow of the scam calls, again with clear interpretations that match close reading. This is an indication that the scammers use a consistent scam structure, likely following a well defined script. We verified the model through manual annotation and found model outputs to match human interpretations of identified scam stages. We then construct a machine learning model to infer the state in a live call setting (i.e., from the utterances up to a given point in a transcript, infer the scam state at that point). We were able to correctly infer the states with reasonable accuracy, achieving an increase of 45% accuracy over a random model for scam types with more than 50 available transcripts, and around 80% accuracy when we accept predictions that are one step ahead or behind. An overview of our processing and analysis pipeline can be found in Figure <ref>. The story thus revealed is scripted scam progressions with embedded social engineering techniques. For example, in social security number scams, the scammer first establishes themselves in a position of authority, posing as a representative of the social security administration or similar authority, then describes an ongoing investigation into highly illegal events (drug smuggling, money laundering, allusions to murder, ...) associated with the victim's social security number, all the while stating that their aim is to prove the victim innocent. The victim is asked to pay a fee to expedite the resolution process and avoid being prosecuted. 
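The modeling step described above, fitting a hidden Markov model to per-utterance topic labels so that the hidden states can be read as scam-script stages, can be sketched as follows. The topic sequences below are synthetic stand-ins for topic-model output, and the snippet assumes a recent hmmlearn release that provides CategoricalHMM; it is an illustration of the approach rather than the analysis code used for the paper's results.

import numpy as np
from hmmlearn.hmm import CategoricalHMM

rng = np.random.default_rng(0)

# Synthetic calls: topics 0-1 dominate an "introduction" phase, 2-3 a "threat"
# phase, 4-5 a "payment" phase, mimicking a scripted progression.
def fake_call(n=30):
    thirds = [rng.choice([0, 1], n // 3), rng.choice([2, 3], n // 3), rng.choice([4, 5], n // 3)]
    return np.concatenate(thirds)

calls = [fake_call() for _ in range(50)]
X = np.concatenate(calls).reshape(-1, 1)           # one topic label per scammer utterance
lengths = [len(c) for c in calls]                  # per-call sequence lengths

hmm = CategoricalHMM(n_components=3, n_iter=200, random_state=0)
hmm.fit(X, lengths)                                # learn stage transitions and emissions

states = hmm.predict(calls[0].reshape(-1, 1))      # inferred stage at each utterance
print("topics:", calls[0][:12])
print("stages:", states[:12])
print("transition matrix:\n", hmm.transmat_.round(2))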
To the best of our knowledge, this is the first work to apply automated machine learning techniques to scam call recordings and effectively extract meaningful insights into scripts used by scammers and their strategies and behaviors. Our analysis allows us to paint a rich picture of the characteristics of scam calls, leading to a deeper understanding of the phenomenon. This work is intended as a first step toward future mitigation strategies and tools such as real time early detection and public education, leading to protection of potential victims and reduction of the amount of money lost to scams. The key contributions of this paper are as follows: * We collect and collate a data set of 341 conversations between scam baiters and scammers, annotated by scam type. We publicly released this data set at <http:to.be.released.on.acceptance>. * We develop and demonstrate a semi-automated framework for identifying and tracking current phone scam scripts, scammer behavior and scam strategies at scale. Our framework leverages hidden Markov models combined with topic modeling and automated emotion recognition for identifying the progression of scam scripts and facilitating the recognition of persuasion techniques used in telephony scams. We demonstrate effective predictive models in a simulated live call setting. We publicly release the code for our predictive models at <http:to.be.released.on.acceptance>. * We provide a summary of insights into the rich tapestry of social engineering techniques identified in our data through a combination of our automated analyses and close reading of the scam conversations. § RELATED WORK Current solutions: Solutions to the problem of phone scams have been suggested for both telecommunications provider <cit.> and end user applications <cit.> to combat Caller ID spoofing through authentication. However, as of the writing of this paper, these have not seen widespread adoption in part because, to be effective, they would need to adopted universally across the globe. Existing deployed solutions focus primarily on creating blocklists of known bad numbers <cit.> or proposing a reputation system based on caller behavior <cit.> yet many scam calls make it through despite these measures <cit.>. It has been shown, for example, that adding a number to the national Do-Not-Call Register may produce the opposite result <cit.> with attackers potentially abusing such lists. Researchers have also proposed analyzing the audio features of the call <cit.>, developing virtual assistants that vett incoming calls <cit.>, implementing application indicators that alert users to potentially unsafe calls <cit.> and using simple natural language processing techniques on initial call utterances to detect scams <cit.>. Several commercial scam detection tools have recently appeared using these and related data driven techniques <cit.>. Despite these scam call prevention technology advancements, 2022 industry and government reports<cit.> show that the scam calls as well as related monetary losses continue to increase (e.g., in 2022, over 30% more money was lost than in 2021 as a result of the phone scams in Australia). Datasets and telephony honeypots: The ‘in the moment’ nature of telephone conversations makes it difficult to analyze them on a large scale. As a result, data used in research of scam calls to date has been limited in scale (e.g. <cit.>) or containing primarily call metadata with no or little call content (e.g., <cit.>). 
Telephony honeypots, systems deployed in telephony networks to capture malicious calls, are one option to scale up the study of malicious calls. Drawing on the use of honeypots to detect network intrusions, telephony honeypots have been deployed to detect voice spam <cit.>. Similar methods were used to discover, record and analyze tech support scams <cit.>. Despite some success of these methods, legal and regulatory constraints associated with recording calls in various jurisdictions as well as the need to engage with scammers in real time present challenges to this approach. Telemarketing and chatbots: There are some similarities between telemarketing and scam calls in respect to technology and delivery methods used, however attackers’ motivations, approaches and impact on victims differ when it comes to phone scams <cit.>. Attempts have been made to waste telemarketers’ and scammers’ time using pre-recorded messages and chatbots <cit.> however this approach has not been widely applied to countering scam calls. Conversation analysis: Some preliminary analysis of phone scams has been performed <cit.> highlighting the challenges with data collection compared to traditional phishing attempts. Researchers have studied persuasion techniques <cit.> finding a similar diversity in persuasive techniques to our analysis with the exception that they identified social proof as common where we did not find evidence of that technique. A further study used forensic linguistics <cit.> to analyze scam calls so as to better understand methods used by attackers. More recently, the use of ‘scam signatures’ has been proposed <cit.>, paving the way for early detection of scams based on semantic content of the conversation. Natural language processing techniques have been suggested to improve detection of phishing and other social engineering attempts in emails <cit.>, and phone scams <cit.>. Approaches to date, however, are not scalable, requiring substantial human input and tested on very small data sets. § DATA SET Our data consists of annotated and marked up transcripts of conversations between phone scammers and YouTube scam baiters. This section describes how we obtained and processed the data, covering the first 5 steps in Figure <ref>. The last step, “HMM model training” is discussed later in Section <ref>. §.§ Sourcing Scam Transcripts Collecting data for the purpose of studying scam calls is a challenging task as individuals do not typically record their phone calls and victims of scams may be hesitant to share or publicly release recordings of the calls due to potential embarrassment or concerns about exposing financial or personal information that were discussed during the call. To obtain data for our analysis, we relied on recordings of phone scam conversations posted on YouTube by individuals known as "scam baiters". Scam baiters are individuals who engage with and record scam calls, attempting to draw out the call then typically confront the scammers about their unethical practices. While these calls cannot be considered representative of genuine interactions between scammers and their intended victims, the scammer utterances however are bona fide and provide insights into scammer techniques as well as the progression of hypothesised scam call scripts. We searched YouTube for channels that mention “scam baiting” (or "scam baiters", "scammers baited", etc.) 
and manually vetted all their uploaded videos, carefully selecting those that predominantly contained actual scam calls with long, coherent scammer conversations. We filtered out instances of videos where scam baiters used irony or offensive language towards the scammers, and only kept the videos where the scam baiters seriously acted out the role of a scam victim. In this way, we obtained 341 transcripts, primarily from six different scam baiting channels, with an average length of 90.7 utterances (44.8 for scammers, 45.9 for scam baiters), a median length of 82 (40.5 for scammers, 41 for scam baiters), and a maximum length of 242 utterances. §.§ Scam Baiters Our data set was collected from publicly available recordings of conversations between scammers and scam baiters. Scam baiters pretend to be a vulnerable victim to engage scammers. Even though this does not reflect real scam calls, the analysis of how scam baiters interact with scammers can contribute to our understanding of the properties of scam calls. In Figure <ref> we see the distribution of scam types in our data. Interestingly, the most common scam types (reward, support, refund and social security number) were present in the recordings of all scam baiters. However, we can observe that scam baiters have a preference for certain types of scam calls (e.g., Boda Scambaits published a significant number of scams about social security numbers). Table <ref> presents a detailed overview of the transcripts collected for each scam baiter. Looking at the utterance lengths in words and seconds, we observed that 3 scam baiters (Scammer Jammer, Rinoa Poison and IRLrosie) try to overtalk the scammer, whereas the remaining 3 scam baiters allow the scammer to dominate the conversation. §.§ Data Pre-Processing We transcribed the YouTube videos into diarised text using a commercial speech-to-text API (rev.ai). We then manually edited the transcripts to assign utterances as either Scammer or Victim and to remove parts of the transcripts that could not be considered part of a typical scam call: the YouTube video introduction, side comments by the host not relevant to the call itself, and the part of the conversation including and following the Victim reveal. Some transcripts were completely removed as, on closer inspection, they did not resemble a realistic scam call and instead were likely produced entirely for entertainment purposes. Due to the artificial nature of the "victim" in these calls, our analysis focuses on scammer utterances. Our final sample contains 15,234 of these.
The victim is nudged towards a faster and easier option to avoid negative consequences by ‘resolving the matter’ here and now, introducing time pressure and preventing the victim from taking the time to think this through. The “quick resolution” is claimed to be available only if acted on immediately, introducing further time pressure. A decision is demanded during the call; a refund, reward or reversal is promised if the victim complies. Depending on the scam type, an attempt to establish the victim's location or request to travel to a location where the fee can be paid, gift card purchased or money deposited is often carried out. This is followed by a payment or purchase request or obtaining the victim's bank card details. Specific, not rounded up amounts are requested and legitimate sounding ways to pay are mentioned. Alternatively, rapport building and polite social protocol is followed in other scam types (e.g. tech support). Legitimate sounding explanations of the reason for the request and specific details are shared to add credibility to claims. Step-by-step guidance to navigate to a website, install software (often TeamViewer to get remote access to the victim's machine) or fill out online forms with personal and often financial information is provided. A handover to another individual, often a supervisor, is sometimes introduced by scammers when a given milestone in the script is achieved (e.g. victim installs TeamViewer or confirms banking details). Due to the length of some calls or if the desired result is not achieved immediately, scammers may attempt to end the current call and continue the scam in a subsequent call at a later stage. §.§ Data Statistics Our data set contains 341 transcripts which in total represent 90 hours of phone scam conversations. We found 7 types of scam in the transcripts (see Section <ref>), however, three of them, "family member", "tax" and "charity", have marginal presence, with only 2, 2 and 1 transcripts respectively and are not considered for further analysis. Social security number scam transcripts are the largest group in our dataset (140), followed by refund scams (110). The remaining transcripts belong to support (63) and reward (25) scams. Statistics on our data set can be found in Table <ref>. Firstly, we measured basic text statistics to learn what is the conversation approach of the parties (Scammer and Scam Baiter). Surprisingly, scammers do not always dominate the conversation. In terms of word count and duration, scammers used more words in social security number and refund scams, whereas Scam Baiters are more talkative in support and reward scam types. We can posit the presence of extended explanations on the part of scammers in these scams as the cause, however we acknowledge that active distraction from the Scammer's directions and story by Scam Baiters will also be a significant factor. Regarding word rate both Scammer and Scam Baiter are close to the regular speech rate range (2 – 2.5 wps[According to National Center for Voice and Speech <https://ncvs.org/archive/research_tissue.html>]). There appears to be a level of word rate coordination: higher scam baiter word rates matched with higher scammer word rates. This is of interest, as it indicates a level of connection, where the scam baiter may be affecting the scammer, and warrants further investigation. § DATA ENRICHMENT In this section we describe the modeling and annotation we performed to enrich the data. 
This included manually annotating scam types for each transcript, emotion extraction per utterance, and automated extraction of scam sequences, which can be thought of as a proxy for scripts followed by scammers. Scam sequence extraction was done through a combination of topic modeling, to extract themes and common word patterns, followed by hidden Markov modeling (HMM) over topics to extract scripted and thematic sequences. §.§ Scam Type Annotation We annotated the overall type of scam for each transcript in our data. Two of the paper's authors conducted the annotation. First a classification scheme was independently determined, then a consensus view of the scam types present was taken — the scam types mentioned in Section <ref> were unambiguous and easy to agree on, with the exception of “reward” scams, which were initially identified as two categories and later merged (see below). An initial 82 transcripts were then annotated independently by both authors with the agreed-on scam types. Initial annotations had 83% agreement, and all discrepancies were readily identified as simple errors or ambiguities in the interpretation of two of the categories. It was decided that these two categories (“gift card” and “reward”) should be merged as they represented minor variations on the same scam sequence and both had few transcripts. The remaining transcripts were annotated with the updated scheme by one author and verified by the second. §.§ Emotion Extraction For emotion detection we used a RoBERTa <cit.> based emotion prediction model trained on a balanced subset of almost 20,000 human annotations from 6 publicly available data sets[https://huggingface.co/j-hartmann/emotion-english-distilroberta-base]. The model estimates Ekman's 6 basic emotions plus neutral: joy, sadness, anger, fear, disgust, surprise and neutral. The model was trained using cross-entropy loss on discrete labels (0 if an emotion is not present, 1 if it is). As such, the scores can be interpreted as probabilities that the given emotion is present. Figure <ref> presents distributions of emotion scores by scam type. We found that for all scam types, surprise and anger are the most common emotions. We note that overall, the scammers were less emotional than the scam baiters. This is particularly visible with surprise. We propose two complementary explanations for this. First, the primary scam types in our data present serious scenarios: identity theft (social security number scams), ongoing hacking of the victim's computer (support scams), and scammers posing as company representatives (refund and reward scams). In those roles, it is natural that the scammer would be serious and unemotional, but this also plays a role in social engineering, increasing the sense of authority. Second, on listening to the calls, it was our impression that the Scam Baiters were acting out exaggerated roles, in part to entertain their audience and in part to engage the scammers. It is unclear how their portrayals would relate to the emotions of true scam victims and potential victims. §.§ Scam Progression In order to understand the methods and tools of phone scammers, a deeper understanding of how scam calls progress is needed. Here we seek to establish automated or semi-automated approaches to analyze and track scam call content, with a view to deployment in a large-scale scam call monitoring setting. We first seek a high level understanding of the content of scammer utterances using recent topic modeling methods.
We find that the topics thus uncovered faithfully reflect our observations from close reading, identifying the key steps we observed in the transcripts. The topic model is applied to all the data with individual utterances as documents. Each transcript is then transformed into a sequence of topic intensities, and we apply a Hidden Markov Model (HMM) over these sequences for each scam type. Again, the states inferred by the HMM models correspond to our close reading observations, revealing clear scam progressions. As can be expected, the quality and richness of the HMM models vary with the quantity of data available. Finally, we investigate the feasibility of inferring the scam stages identified by the HMM models in a live call setting. §.§.§ Topic Modelling We used contextualised topic models <cit.>, a recent neural topic model combining contextualised representations from large pre-trained language models and the variational auto-encoding neural topic model ProdLDA <cit.>. We trained a model with 50 topics on scammer utterances. The model showed good topic diversity (inverse rank biased overlap <cit.> of 0.993) and reasonable coherence (C_V coherence <cit.> of 0.454). We explored β values of 0.01, 0.1, 1.0 and 3, and found 1.0 to perform well on both C_V and NPMI coherence metrics. We manually labelled each topic by considering both the top topic words and observing the top utterances (those scoring highest on the topic) and a weighted sample of utterances (using topic scores as weights) from the top 100 utterances. In most cases, consistent and readily interpretable semantics were observed, with a small number of topics appearing to merge distinct meanings (6 topics) and a small number with no apparent coherent meaning (2 topics). Table <ref> lists the most prominent topics for each scam type. See Appendix <ref> for a full list of estimated topic labels and indications of merged/incoherent topics. Overall, we found that the topic model successfully revealed semantics that we observed in the data. Finally, we discuss topics and their frequency of appearance in utterances of each scam type. The top topics (with probability above 1.5x the average topic probability of 0.02) and their frequencies are shown in Table <ref>. The complete topic frequency data can be found in Appendix <ref>. For the social security scams, we found 6 topics with elevated probability. For example, topics 32 and 39 address legal charges against the victim. Similarly, for reward scams we found elevated probability for topics that describe vouchers and gift cards (topics 6 and 38). In the case of support scams, the 2 topics with elevated probability (8 and 22) focus on technical details such as computer or phone instructions. Interestingly, refund scams do not have topics with representation above our chosen threshold, so we instead show the top two topics. We note that the highest topic frequencies for support and refund scams are relatively low. We believe this is due to the lower quantity of data from these scam types and the observed diversity in the scripts and techniques used. §.§.§ Hidden Markov Models over Topics We use a hidden Markov model to extract the scam scripts presumed to be followed by the scammers for each scam type. Hidden Markov models seek to represent time sequence data as the trace of a finite state machine.
States “generate” utterances through a list of topic probabilities (known as emission probabilities) associated with each state, and there is a table of state transition probabilities that determines the state of the subsequent utterance. HMM inference algorithms attempt to find a set of states (given by emission probabilities) and a state transition table that maximise the probability that the data was generated by those states and transitions. Typically, the number of states is fixed. We use the multinomial hidden Markov model from the Python package hmmlearn with default parameters. The data consisted of the top topic (from our 50-topic model) for each utterance, providing a sequence of integers (one for each utterance) for each transcript. First, we split the data into 3 equal subsets (train, validation and test) and made an initial selection of the number of states for each scam type through grid search, ranking by log likelihood on validation data. We observed that the log likelihood scores on test data with the resulting models aligned with those on validation data and were marginally better than on training data, indicating that the model is not overfitting. We then used 5-way cross validation on all data to make a final choice of the best number of states for each scam type[For support and reward scams we took the second best (7 vs. 4 states and 5 vs. 3 states respectively) as the log likelihoods were similar and the resulting transition graphs more informative.], then trained our final models using all available data. In all cases, we fit each model configuration 50 times, taking the model with the best log likelihood (on validation data for cross validation and on all data for the final models). This is usual practice with this kind of HMM model, as the EM (Expectation–Maximization) algorithm has a tendency to get stuck in local optima; multiple random starts usually result in a much better fit. To evaluate the models, we first examined the apparent meaning and coherence of the inferred states. We manually labeled each inferred state by considering the labels and utterances associated with the most significant topics linked to each state. Overall, we found that a consistent interpretation could be applied to the utterances associated with a given state and that the progression of states is readily interpretable as a scam script progression and agrees with our observations from close reading. An example social security number transcript with utterance states is provided in Appendix <ref>. Figure <ref> shows the resulting graph for refund scams; see Appendix <ref> for the remainder. We then chose one transcript from each scam type at random and three authors manually annotated utterances using our state labels and descriptions of associated topics as a guide. We allowed two state choices where there was some ambiguity and considered annotations to agree if the second choice matched the first choice of other annotators. Table <ref> shows Krippendorff's Alpha between annotators and Cohen's Kappa between states inferred by HMM models and those obtained by vote between annotators, alongside the number of utterances in the selected transcripts and the number of states used in both annotation and HMM models. We found that annotators could consistently identify states, with strong inter-annotator agreement for Refund and Reward and moderate agreement for social security number and support scams.
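The fitting procedure described above (per-utterance top-topic indices, a fixed number of states, and many random restarts keeping the best log likelihood) can be sketched as follows. This is a minimal illustration rather than our exact code; recent versions of hmmlearn expose the integer-symbol model as CategoricalHMM (older versions called it MultinomialHMM), and the topic sequences below are invented for illustration.

import numpy as np
from hmmlearn import hmm

def fit_best_hmm(topic_sequences, n_states, n_restarts=50):
    """Fit an HMM on per-utterance top-topic indices, keeping the best of several random restarts."""
    # hmmlearn expects one concatenated column vector of symbols plus per-sequence lengths.
    X = np.concatenate(topic_sequences).reshape(-1, 1)
    lengths = [len(seq) for seq in topic_sequences]
    best_model, best_ll = None, -np.inf
    for seed in range(n_restarts):
        model = hmm.CategoricalHMM(n_components=n_states, n_iter=100, random_state=seed)
        model.fit(X, lengths)
        ll = model.score(X, lengths)  # log likelihood of the data under this fit
        if ll > best_ll:
            best_model, best_ll = model, ll
    return best_model, best_ll

# Illustrative usage: each transcript is a list of top-topic indices (0-49), one per scammer utterance.
transcripts = [[12, 12, 46, 32, 39, 6], [8, 8, 22, 22, 5], [46, 32, 39, 39, 6, 6]]
model, ll = fit_best_hmm(transcripts, n_states=4)
states = model.predict(np.array(transcripts[0]).reshape(-1, 1))

The per-utterance states produced in this way are what the annotator comparison above, and the Kappa scores reported next, are evaluated against.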
Kappa scores show substantial agreement with HMM inferred states for all but the Reward scam, which has moderate agreement. Scam types with more data provided HMM models with more states that were also more expressive and easier to interpret. This reflects the ability of statistical based dimensionality reduction models such as HMMs and topic models to reliably extract structures in the data. With more data, there is sufficient evidence for more nuanced models without overfitting. Another feature of the graphs to note is the generally lower probabilities assigned to transitions in the later stages. This is particularly prominent with states 3 and 4 with social security number scams (see Appendix <ref>), which appeared mostly at the end of transcripts and whose links did not pass the threshold for inclusion in the graphs. We believe this is due to two factors: first, the fact that a significant proportion of the transcripts end before the later stages of the presumed scam script, hence there is relatively little data, resulting in less focused states (covering broader semantics) with fewer transitions. The second factor is the greater diversity in conversations in later stages of the scam, where the scammer adapts their instructions to the victims circumstances and the scam baiter has had more opportunity to derail the script. We observed numerous transcripts where the conversation diverged into random and extended chit-chat — essentially, the scam baiter had succeeded in distracting the scammer from the script. Another important methodological point is the value of building these HMM models on data from a single type of scam, which substantially improves the clarity of the resulting model. We at first built a model on all our data, and although it was not without interesting and interpretable structures, the models we present here are much more focused and more clearly interpretable. This highlights the importance of classifying and labeling scam types. We note that topic models are more robust to diverse semantics (indeed are designed to tease them out), and being unsupervised, generally require more data to obtain a good model, thus we chose to pool data across scam types for the topic model. The coherence of the structures in the underlying data also plays a role here, and we take the success of the HMM approach, together with the sequential nature of the resulting models as indicators that there are well defined sequential structures consistent across the data, supporting the hypothesis that the scammers are following pre-defined scam scripts. We further discuss interpretation of the inferred HMM models from the perspective of social engineering in Section <ref>. § OBSERVATIONS OF SOCIAL ENGINEERING The term “social engineering” covers persuasive techniques used by scammers to convince their victims to act to their own detriment and to the scammers benefit. In this section we discuss social engineering techniques observed through close reading of our data and topics and HMM states associated with those techniques. The art of persuasion has been studied over many years, and several categorisations of psychological principles that can enable a manipulator to control the actions and choices of victims have emerged. In the words of pioneering psychologist in this area, B. J. Fogg “As I see it, social influence is a broad area, with flexible boundaries and competing ways to categorise influence strategies” <cit.>. 
Keeping that in mind, and for the purposes of simplicity, we organise our discussion of social engineering techniques following the five categories provided by Ferreira et al. <cit.>, which draws together two previous categorisations <cit.> into a single model and has a focus on techniques used in phishing, taking careful note of boundary cases and those that do not really fit. We investigated the data through a combination of close reading and interpretation of topic model topics and HMM states, seeking to identify the application of social engineering by the scammers. Overall, we found that the scam calls are very rich in applied social engineering techniques, with a substantial proportion of scammer utterances building on the persuasive landscape of the scams. It is not the purpose of this work to provide a comprehensive review of the techniques used in our data; however, we provide here examples of the more prominent uses of persuasive techniques we encountered and, where practical, link them to topics and HMM states. We also note that in many cases, multiple persuasive techniques are combined in a single scam step or even in a single utterance. This is not unusual in persuasive acts; for example, the invitation messages of successful Facebook apps often achieve this <cit.>. One example is this quote from a social security number scam: …be specific and genuine on this phone call because this recording can be used in your favor or can be used against you in the courthouse…. Here we see Authority (referring to the courthouse), Distraction (fear of legal consequences) and Commitment (once the victim provides the requested PII). Authority (AUTH): Society trains people not to question authority so they are conditioned to respond to it. People usually follow an expert or pretence of authority and do a great deal for someone they think is an authority. <cit.> This is a common persuasive technique used in scams, as has previously been observed (e.g. <cit.>), and our data is no exception. We observed the scammers assuming the role of an officer from the social security administration in social security scams (…you for calling social security administration…, …My name is officer Carol Snyder…), a qualified IT support professional in support scams, or an employee from a reputable company in reward scams (…Thank you for calling PayPal. This is Daniel. How can I help you?…). Further subtle markers of authority are also present in several scam types. Examples include the scammer “verifying the identity” of the victim (e.g. state 7 and topic 46, …Can you verify me the last four digits of social security number?…) as well as formal-sounding language and procedures (…This is the case identification number…). Social Proof (SP): People tend to mimic what the majority of people do or seem to be doing. People let their guard and suspicion down when everyone else appears to share the same behaviours and risks. In this way, they will not be held solely responsible for their actions. <cit.> In the scam types present in our data, this technique did not appear to play a significant role. Liking, Similarity & Deception (LSD): People prefer to abide to whom (they think) they know or like, or to whom they are similar to or familiar with, as well as attracted to. <cit.> In all scam types, we find HMM states that resemble informal chit-chat. The willingness of scammers to engage in friendly conversation with their victims appears to be an application of this technique.
This is suggested in particular by the common proximity of chit-chat with payment instructions and procedures, a critical point in a scam at which the trust of victim is crucial. In some transcripts, the scammer and scam baiter already know each other from previous calls, meaning the scam spans several days and multiple calls. This longer engagement engenders familiarity and trust and represents the application of this technique. Commitment, Reciprocation & Consistency (CRC): People feel more confident in their decision once they commit (publicly) to a specific action and need to follow it through until the end. This is true whether in the workplace, or in a situation when their action is illegal. People have tendency to believe what others say and need, and they want to appear consistent in what they do, for instance, when they owe a favour. There is an automatic response of repaying a favour. <cit.> This category is rather broad, and covers multiple categories from other categorisation schemes. Reciprocation: Scammers will often state that they are attempting to help the victim (…So let me help you to get connected with our Amazon… — refund scam; …I just wanting to help you out to get there.… — support scam; …There is no need to worry because I'm here to help you… — social security number), which represents an appeal to Reciprocity. Commitment: In all the main scam types in our data, after explaining the situation, the scammer asks if the victim would like to fix the problem (social security number, support and refund) or receive the gift (reward). By answering “yes”, the victim is subsequently motivated to proceed with the scam rather than go back on their commitment. An involved sequence of tasks for the victim is also a form of commitment, where the victim has invested in and implicitly validated the scenario presented by the scammer. In social security number scams, the victim is asked to verify their identity (state 7), write down the case id (state 10), listen to the litany of evidence against them (states 1, 0). In refund scams, victims are asked to provide information about their phone/computer (state 3) and download and install remote desktop software (state 5). One question of note that appears in many social security scams asks the victim for approximate bank balances (…what will be the current dollar amount balance approximately you having in your checking account and as well as in your savings?…). We believe this also serves to inform the scammer of the value of the victim — particularly high value victims are afforded more time and effort, and are often redirected to more senior scammers. Distraction (DIS): People focus on one thing and ignore other things that may happen without them noticing; they focus attention on what they can gain, what they need, what they can lose or miss out on, or if that thing will soon be unavailable, has been censored, restricted or will be more expensive later. These distractions can heighten people’s emotional state and make them forget other logical facts to consider when making decisions. <cit.> All scam types in our data utilise a form of distraction — a (typically highly emotional, typically fearful) scenario that captures the victims attention such that they focus less on the steps they are asked to take. 
In support scams, the scammer reports that there has been suspicious activity on the victim's computer and goes on to claim that it has been hacked and that the hackers can steal their bank details, etc. In refund scams, they claim that the victim's online purchasing account has had an expensive order that was suspicious (…This is suspicious activity on the Amazon account…). In social security number scams, the whole scam scenario of atrocious acts combining drug smuggling, money laundering (…these are now used to launder approximately $240,000…) and hints of murder, all carried out using the victim's stolen identity, can undoubtedly cause extreme fear and apprehension. In reward scams, a free gift is offered. § OTHER INSIGHTS INTO SCAMMER METHODS §.§ Scam Stages Our HMM models painted an interesting picture of scam call progressions. All analysed scam types have a similar high-level structure: * Greetings. * Explanations of the problem. * Instructions on what to do. * Financial exploitation attempt. The exception is the HMM model for reward scams, which was not as successful at identifying scam structure, likely due to lack of data (see Section <ref>). For example, in refund scams (Figure <ref>), the greeting is represented by state 0 and the problem explanation is covered by state 4. States 3, 1, and 5 describe instructions on what to do, and states 6, 8, and 7 target financial exploitation. Note that the greetings for each scam type are distinct, with the scammers introducing themselves as different characters, using different levels of formality, etc., which enabled distinct scam types to be distinguished on the basis of very few utterances (see Section <ref>). Interestingly, we noticed that in 3 models (social security number, refund, and support) the state representing casual chit chat is linked with the money/payment related state. The explanation for this can be twofold. We suspect that it may be the scammer's deliberate action to distract the victim from the actual target of the scam (i.e., stealing money — see Section <ref>). The second option is that scam baiters actively change the topic of the conversation when the scammer tries to close the scamming attempt with a financial request. It is likely that both have an element of truth. We noticed that one of the states for social security number scams adds an interesting variation of the scam that is not present in other types. The "redirection to supervisor" state (state 6) is used to redirect the victim to a second scammer. We suspect that this technique is effective in the social security administration context because of the authoritative position played by the second scammer (usually introduced as a senior agent). It would also allow senior, more experienced scammers to be brought to the call for the final stages that lead to payment. §.§ Weaponized Emotions In phone scams, unlike other scam types such as email or SMS, the scammer has a greater presence and thus a more direct opportunity to impact victim emotions. From our preliminary examination of the data set recordings, we observed that scammers leverage emotion manipulation in their social engineering techniques. Here we present a review of emotions detected in our data as seen from the perspective of inferred states and topics, and attempt to identify the roles they played in the scams, if any. Though we observed that scammers expressed less emotion than scam baiters, variations in how scammers express emotion were nonetheless evident.
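The per-utterance emotion scores used throughout this section come from the publicly available classifier introduced in the Emotion Extraction subsection; a minimal sketch of applying it to scammer utterances (illustrative only, not our exact pipeline):

from transformers import pipeline

# Publicly available Ekman-emotion classifier referenced in the Emotion Extraction subsection.
emotion_clf = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
    top_k=None,  # return scores for all seven labels
)

def score_utterances(utterances):
    """Return one {emotion: probability} dict per utterance."""
    results = emotion_clf(utterances, truncation=True)
    return [{r["label"]: r["score"] for r in res} for res in results]

# Illustrative usage on two scammer utterances quoted elsewhere in this paper.
scores = score_utterances([
    "Do not interrupt me in between.",
    "Thank you for calling PayPal. This is Daniel. How can I help you?",
])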
To measure which state is associated with a particular emotion, we averaged the emotion scores for the utterances assigned to that state by the HMM model. Figure <ref> and Appendix <ref> present heat maps of relative emotion strengths for each state (relative to the median emotion strength among all utterances for that emotion), providing an indication of where scammer emotion is concentrated. We observe that in many cases there is a concentration of emotion on particular HMM states. This suggests that scammers may indeed use emotions in their social engineering techniques. In order to interpret the concentrations of emotion for social security number scams, we examine the highest emotion-scoring utterances for each state–emotion pair with a higher relative emotion score (see Figure <ref>). Firstly, we examined anger, which was high in "Call reason high level introduction" (state 1) and "Case and procedure explanation" (state 9). We found that scammers try to limit the questions from the victim by making sure they are not interrupted. We found numerous examples of phrases that angrily order victims not to interrupt (e.g., …Do not interrupt me in between…, …Listen to me, do not interrupt me once more…, …suspend your social for interrupting the officer…). We also noticed that angry language was used in threats, presumably to encourage compliance. For example, …I will send a local sheriff on your doorstep…, …I'm sending the cops to your house…. The score for the joy emotion is notably high for "Short utterances (chit chat)" (state 3) and "Initial call greeting" (state 8). In the case of initial call greetings, scammers tend to initiate the call with a joyful phrase (e.g., …Hello. And thank you for calling…). For the chit chat utterances (state 3) we could not find any common scheme, though we note that chit chat utterances were more emotional overall. This is consistent with the scammer leveraging the victim's desire for connection to increase rapport (the LSD category of social engineering techniques — see Section <ref>). The results for fear clearly show that utterances used to explain the scam theme are meant to trigger fear (states 0, 1, 2, and 9). The highest score was found for state 2, "Asking about possible identity theft" (e.g., Have you ever lost it or someone stolen your personal identities from you in Texas? Like your driving license, your social security card, or any of your state ID?). We found that scammers suggest the identity was used in illegal operations, most likely to increase the perceived seriousness of the case (e.g., …your social security number …has been found suspicious for criminal activity…). Sadness was most prominent in the chit chat utterances (state 3). We suspect this is because of more compassionate language, including apology phrases (e.g., …I'm very sorry for the officers who didn't explain to you the case…, I'm really sorry for the inconvenience, sir…, I'm really sorry. I cannot assure you…), though empathizing with victim sadness also plays a role, for example [after the victim talks about their late husband and family] …They are so lucky to have you, so yeah. And you'll live by yourself… An outstandingly high score (1.8) was found for disgust for the "Call reason detailed description" state. The utterances for this state and emotion contain descriptions of crime scenes involving stolen cars, drugs and blood (e.g., …the investigation started when we found an abandoned car …and the car contained some blood as well as some drugs …).
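A minimal sketch of how the state–emotion heat-map values described above can be computed (the mean emotion score per state divided by the per-emotion median over all utterances); the column names and numbers are illustrative, not our actual schema:

import pandas as pd

def state_emotion_heatmap(df, emotions=("anger", "joy", "fear", "sadness", "disgust", "surprise", "neutral")):
    """df has one row per scammer utterance, an HMM 'state' column, and one score column per emotion.
    Returns the mean emotion score per state divided by the median score of that emotion over all utterances."""
    emotions = list(emotions)
    per_state_mean = df.groupby("state")[emotions].mean()
    overall_median = df[emotions].median()
    return per_state_mean / overall_median  # values above 1 indicate concentration of that emotion

# Illustrative usage with a tiny hand-made frame (scores are invented).
df = pd.DataFrame({
    "state":    [1, 1, 3, 8],
    "anger":    [0.70, 0.60, 0.10, 0.05],
    "joy":      [0.05, 0.10, 0.40, 0.60],
    "fear":     [0.20, 0.20, 0.10, 0.05],
    "sadness":  [0.02, 0.05, 0.30, 0.10],
    "disgust":  [0.01, 0.02, 0.05, 0.02],
    "surprise": [0.02, 0.02, 0.05, 0.10],
    "neutral":  [0.01, 0.01, 0.02, 0.08],
})
heatmap = state_emotion_heatmap(df)

Values above one indicate that an emotion is concentrated in a state; the comparatively flat rows of the heat map are discussed next.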
Interestingly, surprise scores are relatively consistent between states, with no states standing out, and neutral scores are especially so. § APPLICATIONS: PREDICTING SCAM TYPE AND SCAM STAGES Incorporating side information into text based NLP systems is well established. In our setting, conversational AI agents can be tailored to specific types of scam, leveraging extracted scam scripts for known scam types. In order to make best use of this knowledge, we need to detect which type of scam we are seeing early in a call and we need to be able to track how and when the call progresses through the script. §.§ Detecting Scam Type We wish to predict the type of scam, preferably early in a call, in a simulated live call setting. This is achieved by progressively restricting the number of utterances available to the predictive model. Note that we do not attempt to distinguish legitimate calls from scam calls in this work. We use a standard text classifier built on a RoBERTa base model <cit.> from the Huggingface model repository[<https://huggingface.co/roberta-base>] and build a separate binary model for each scam type. We use only scammer utterances for this task. Figure <ref> shows recognition performance for the four scam types we investigate. We report F1 scores for the predictors; Precision, recall and accuracy follow similar trajectories. All scam types have strong performance with access to just one utterance (85% - 95%) and very strong performance with 5-6 utterances (92% to 98%). Refund and support scams are harder to distinguish than social security number and reward scams. We hypothesise that this is due to greater diversity in the conversations with scam baiters and in variations in specific details of the scams themselves. These results match our expectation and demonstrate that effective tools can be built for scam intelligence gathering and applications that operate on live scam calls based on identifying scam types early in a call. There are two main caveats that should be noted here. Firstly, our approach assumes that it is known that a given call is a scam call, however there are effective approaches and tools for detecting scam calls in a live call setting, so we argue that this is not a limitation. Secondly, only four scam types are included here, and the task we perform distinguishes between those four. There are, however, many and diverse phone scams being deployed in the world. We demonstrate that it is possible to distinguish between diverse scam types, as represented by our data, however as scams evolve and borrow from each other, we may expect that some scams (and hence, for example, the call centers they operate from) will be more difficult to distinguish. §.§ Predicting Scam Stages Here again we deal with a simulated live call setting. We assume we already know which type of scam we are dealing with and wish to track the progression of the call through an already established scam script such as those extracted through the techniques in Section <ref>. To demonstrate the feasibility of the task, we wish to predict the stage of a scam call in a simulated live call setting. We train predictive models whose inputs are sequences of scammer utterances up to a point in the transcript and whose target is the scam stage for that utterance as inferred by the appropriate HMM model. We do this separately for each scam type. 
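A minimal sketch of how such progressive-prefix training examples can be constructed from a single transcript; the utterance separator follows the appendix description, while the example utterances and state labels are illustrative:

def make_prefix_examples(scammer_utterances, hmm_states, separator=" </s> "):
    """Build one (text, label) pair per utterance: all scammer utterances up to and including
    that point, labeled with the HMM-inferred scam stage of the last utterance."""
    examples = []
    for i in range(len(scammer_utterances)):
        text = separator.join(scammer_utterances[: i + 1])
        examples.append((text, hmm_states[i]))
    return examples

# Illustrative usage for one (invented) social security number transcript.
utterances = [
    "Thank you for calling the Social Security Administration.",
    "Can you verify me the last four digits of social security number?",
    "This is the case identification number.",
]
states = [8, 7, 10]  # HMM-inferred stage per utterance (illustrative values)
train_examples = make_prefix_examples(utterances, states)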
Since approximate identification of the current position in the scam script is also useful information, we consider a relaxed target that includes the previous scam stage and the succeeding one — that is, the scam stage of the first utterance after (or preceding) the utterance in consideration that has a different scam stage. See Appendix <ref> for details of data preparation, model training and evaluation. Our models achieve a margin of 50% over a random model for social security number and refund scams, and 30% for support and reward scams (Table <ref>). Interestingly, this margin is similar for the strict and relaxed targets. Again, we see that support and reward, with less data available, have a lower margin over a random model. It is interesting to note that performance on the strict target is similar across scam types. This at first seems perplexing, as the task for, e.g., social security number (SSN) scams with 11 states is much harder than for reward scams with 5 states. We believe that this is a reflection of the optimization of the HMM models: they efficiently extract the available information, with each extracted state supported by a similar level of information. We consider these results to successfully demonstrate that effective tools to track scam stages in a live call setting can be built given sufficient data. Though a strict target accuracy around 50% may seem low, we note that achieving this with many states (11 for social security numbers) is substantially difficult, as seen in the margin over a random model, and that most of the “errors” are in fact identifying a nearby state in the state sequence. § ETHICS As part of this study, we took the necessary steps to ensure that our research is within legal and ethical boundaries. We consulted with the University's legal experts and Ethics Board to ensure that we were compliant with ethical policy and relevant laws. Since scammers are human subjects, we ensured that our actions did not cause them harm. It should be noted that the videos used in the study were likely made public without the scammers’ consent; however, the scammers are completely anonymous, and the content cannot be linked to them as individuals. Our Ethics Board noted that there was no requirement to obtain consent, based on the following conditions of our experiment: * Only call recordings already in the public domain were analysed. * There was no recruitment, targeting or identification of individuals. § LIMITATIONS The sample used for analysis has a number of limitations. Firstly, it only includes YouTube videos of scam calls, most of which were recorded by ‘scam-baiters’, people who aim to waste scammers’ time with the full knowledge that the call is a scam. Therefore, the victim is not behaving genuinely in these calls, although the scammer is genuine up to the point they realise the victim is a scam baiter. In particular, scam baiters sometimes seek to lead the scammer into extended conversations irrelevant to the scam — they will play the part of an ideal victim in the initial phases of a call, but lead the call astray more and more as the call progresses, leading to noisy results towards the later stages of calls. Another limitation is that published scam baiting videos are edited during the post-production process. Edits such as merging several calls into a single video or removing pieces of conversation make the extraction of the scam calls more challenging.
In particular, some scam baiters include their own comments (directed to the audience) that pollute the scam call conversation. Although efforts have been made to remove these issues by manually removing less relevant parts of the recordings that are not indicative of a typical scam call, as well as splitting scripts containing excerpts from multiple calls, some of these features may persist in the data. We also note that some scam baiters may cherry-pick the calls they choose to post on YouTube; however, this is probably a good thing, as it acts as a filter for shorter conversations where the scammer quickly recognises that the call is not bona fide. The inherent bias that results must, however, be acknowledged. The capabilities of the video transcription and diarisation service are also limited, and transcripts are prone to errors such as mis-attributing utterances to speakers, incorrectly identifying words used and utterance boundaries, and not recognising very short utterances (instead merging them with the preceding and following utterances). In some cases, the diarisation process returned more than two speakers, which increased the overall workload of correctly identifying the scammer's utterances. Again, although efforts have been made to correct these errors during the manual transcript cleanup, some errors still persist. We note that the capabilities of state-of-the-art speech-to-text services are steadily improving, and we expect these issues to be reduced in future. One feature of many of the calls that challenges audio transcription is the scammer and scam baiter talking over each other. In these cases, the transcription tool usually identifies just one speaker, occasionally with a word or two injected from the other. This can be extremely difficult to transcribe accurately, even for human transcribers. Sourcing call data where the scammer and victim are in separate audio channels would remove this problem and allow for analysis of the causes and impact of this interesting phenomenon. The data covers relatively few scam types, all of which are from the United States. Nonetheless, it suffices to demonstrate the practicality of recognizing and extracting scam scripts and distinguishing scam types across a selection of common scam types. The uneven proportion of the types of scam transcripts in our data set influences the granularity of the information from our analysis. This is particularly visible for reward scams, which had only 23 transcript samples. Although instructive of the need for more data to obtain quality analyses, it remains a limitation of this work. The fact that the emotion detection model used is a text-only model presents another limitation. Such models are not capable of detecting prosodic emotion signals (characteristics related to the tone, pitch, accent and other paralinguistic voice features) and rely solely on the words used. § FUTURE WORK Our data collection approach enabled us to obtain sufficient data to perform meaningful machine learning pattern recognition on several types of scam. This motivates future approaches to obtain large data sets of scam call conversations. Continuing to monitor public releases of scam baiter conversations will no doubt allow further similar conclusions and insights to be gained about newer types of scam. Directly engaging with scam baiters and recruiting people to engage in scam baiting would allow for even larger and more timely data sets.
Another approach to obtain greater volumes of data, as well as more timely data (for example, data for new scam types shortly after they appear), could be to train conversational AI to perform scam baiting via telephony honeypots. We believe that recent advances in conversational AI have reached the stage where this has become an effective strategy. Criminal organisations that specialise in phone scams are constantly evolving by implementing new technical and social engineering techniques. Moreover, they adjust their scam campaigns to take advantage of recent events (e.g., the pandemic crisis). Unfortunately, law enforcement and even scam-fighting organisations struggle to quickly provide information about the latest scams. We believe that our analysis pipeline could be adjusted to help in identifying and, more importantly, understanding new scam campaigns, providing valuable insights for both detection and public education campaigns. An interesting direction for future investigation is the application of our analysis results in real-time detection models and tools. For example, an ongoing conversation that has a high probability score under a certain HMM scam model could raise an alarm. Such technology could be integrated into existing scam prevention applications and promises to provide real-time alerts warning the victim of their imminent mistake. § CONCLUSION Phone scams remain a popular and inexpensive method to execute scam campaigns. Modern prevention solutions do not address all aspects of scam calls, and thus they are not fully effective. In this research we analysed 90 hours of scam transcripts acquired from public sources. We applied machine learning techniques to identify topics, emotions and patterns in the dynamics of scam calls. In particular, we found that Hidden Markov Models over topic model output were effective at identifying steps in underlying scam scripts and techniques. We further identified aspects of the expression of emotion by scammers, including a general tendency to express little emotion in the scam types in our data, the use of anger to bully the victim into compliance, and the use of familiar and emotional chit chat to lull the victim in the last stages before extracting payment from them. Finally, we mapped our findings to the literature on social engineering and persuasion techniques and provided an extensive set of examples of their application from our data. Our work demonstrates the effectiveness of a combination of structured subjective analysis and automated machine learning techniques to analyse and understand the content and structure of phone scam conversations. We hope that our contribution will enable more effective analysis of scam attempts in the future and contribute to our growing understanding of the phone scam phenomenon and how to defeat it. § ACKNOWLEDGEMENTS Research supported by the Australian Office of National Intelligence through National Intelligence and Security Discovery Research Grants (NISDRG) number NI220100105 and the Macquarie University Cyber Security Hub. § APPENDIX § NLP MODEL DETAILS FOR PREDICTING SCAM TYPE AND SCAM STAGES For both these tasks, we use a standard pre-trained text classifier built on a RoBERTa base model <cit.> from the Huggingface model repository[<https://huggingface.co/roberta-base>], which was then fine-tuned on our scam transcript data. We used a learning rate of 2 × 10^-5, a batch size of 16 (scam state prediction) and 10 (scam type prediction), and a weight decay of 0.01.
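A hedged sketch of this fine-tuning setup using the Hugging Face transformers Trainer is given below; the tiny dataset and metric function are placeholders rather than our actual data pipeline, and the early-stopping settings anticipate the description that follows.

from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          EarlyStoppingCallback, Trainer, TrainingArguments)
import numpy as np

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Tiny illustrative dataset: scammer-utterance prefixes with binary scam-type labels.
raw = Dataset.from_dict({
    "text": ["Thank you for calling the Social Security Administration.",
             "Thank you for calling PayPal. This is Daniel."],
    "label": [1, 0],
})
data = raw.map(lambda b: tokenizer(b["text"], truncation=True, padding="max_length", max_length=256),
               batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}

args = TrainingArguments(
    output_dir="scam-type-clf",
    learning_rate=2e-5,               # as reported above
    per_device_train_batch_size=10,   # 16 was used for scam state prediction
    weight_decay=0.01,
    num_train_epochs=3,
    evaluation_strategy="epoch",      # named eval_strategy in the newest transformers releases
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
)

trainer = Trainer(
    model=model, args=args,
    train_dataset=data, eval_dataset=data,
    compute_metrics=compute_metrics,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=5, early_stopping_threshold=0.01)],
)
trainer.train()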
No fine tuning was done and performance was measured via cross validation (average of per-fold accuracy for scam state prediction and per-fold F1 for scam type prediction, see below for details). We used an early stopping strategy with a patience of 5 epochs and threshold of 0.01 on both metrics. Data for scam type prediction consisted of scammer utterances from a given scam transcript delimited by RoBERTa's separator token (“</s>”). Data labels were annotated scam types. The number of utterances was progressively increased with a new model trained each time. Separate binary classifiers were trained for each scam type. Cross validation was performed with 7 folds (4 for reward scams) stratified over scam types, F1 used as the evaluation metric and average performance over folds reported. Data for scam stage prediction consisted of scammer utterances from a given scam transcript delimited with the string “<UTT_category_time-stamp>” followed by RoBERTa's separator token (“</s>”). Here “category” is replaced by the name of the annotated scam category and “time-stamp” by the time stamp in the YouTube video corresponding to the start of the utterance. Data labels were state labels from the HMM model for the respective scam category (see Section <ref>). Separate classifiers were trained for each scam category and data consisted only of transcripts from that category. We report two forms of accuracy: the first (used for early stopping) accepts the previous and following state as correct, the second only accepts the state assigned to the utterance in question. Note that there are typically long sequences of the same state, so the utterance with the following state may be several utterances ahead. Cross validation was performed with 6 folds, chosen such that all data from a given transcript appears in only one fold, and average performance over folds reported. § HEAT MAPS OF EMOTION ASSOCIATIONS WITH HMM STATES § TOPIC DESCRIPTIONS AND FREQUENCIES We manually labelled utterances with high corresponding topic scores. For each topic, we collected the top 100 utterances. We manually inspected the top 10 utterances and 5 utterances randomly selected from the top 100 set (weighted by utterances' topic scores). These 15 utterances were used by two of the authors to assign a short description for each topic. In the table below, topic frequencies 50% above a random baseline (i.e., >0.03, blue) are highlighted, as are topics with merged meanings (pink) and incoherent topics (green). § HMM TRANSITION GRAPHS Below are the state transition graphs for refund, support and reward (Figures <ref>, <ref>, <ref>). To determine the thresholds for including edges in the graph, we first discounted the average self-probability of nodes (probability of a node linking to itself) and included links with probability greater than the average remaining probability: threshold=(1-tr(T)/n) / (n-1) where T is the state transition matrix, tr(.) is the matrix trace operation and n is the number of states. § HMM STATES INTERPRETATIONS § EXAMPLE TRANSCRIPT WITH STATE TRANSITIONS
http://arxiv.org/abs/2307.03248v2
20230706183816
CFD-based Design Optimization of Ducted Hydrokinetic Turbines
[ "Jeongbin Park", "Bradford G. Knight", "Yingqian Liao", "Marco Mangano", "Bernardo Pacini", "Kevin J. Maki", "Joaquim R. R. A. Martins", "Jing Sun", "Yulin Pan" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
Jeongbin Park^1, Bradford G. Knight^1, Yingqian Liao^2, Marco Mangano^2, Bernardo Pacini^2, Kevin J. Maki^1, Joaquim R. R. A. Martins^2, Jing Sun^1, Yulin Pan^1 (corresponding author, yulinpan@umich.edu). ^1 Naval Architecture and Marine Engineering, University of Michigan, Ann Arbor, MI 48109, USA. ^2 Aerospace Engineering, University of Michigan, Ann Arbor, MI 48109, USA. Hydrokinetic turbines extract kinetic energy from moving water to generate renewable electricity, thus contributing to sustainable energy production and reducing reliance on fossil fuels. It has been hypothesized that a duct can accelerate and condition the fluid flow passing the turbine blades, improving the overall energy extraction efficiency. However, no substantial evidence has been provided so far for hydrokinetic turbines. To investigate this problem, we perform a CFD-based optimization study with a blade-resolved Reynolds-averaged Navier–Stokes (RANS) solver to explore the design of a ducted hydrokinetic turbine that maximizes the efficiency of energy extraction. To handle the high-dimensional design space of the blade and duct geometry, we use a gradient-based optimization approach where the gradients are computed using the adjoint method. The final design is re-evaluated through higher-fidelity unsteady RANS (URANS) simulations. Our optimized ducted turbine achieves an efficiency of about 54% over a range of operating conditions, higher than the typical 46% efficiency of unducted turbines such as the well-known Bahaj model <cit.>. § INTRODUCTION The increasing demand for renewable energy has motivated extensive research on hydrokinetic energy conversion systems that extract energy from natural riverine and oceanic flows. Various types of conversion systems have been investigated for decades, including horizontal- and vertical-axis turbines and oscillating hydrofoils <cit.>. Horizontal-axis turbines have been studied the most because of their relatively mature technology <cit.>. A popular benchmark for horizontal-axis hydrokinetic turbines is the Bahaj model, which has been experimentally tested in a cavitation tunnel and a towing tank <cit.>. The unducted Bahaj model generates power with an efficiency of about 46% (the ratio of generated power to the inflow power) at the optimal operating condition. This is the typical efficiency level of well-designed hydrokinetic turbines <cit.>. To evaluate this efficiency, we can compare it to the well-known Betz limit of 59.3% <cit.>, which is derived from one-dimensional (1D) momentum theory in an unbounded flow domain. There is no general consensus on whether the Betz limit should be considered a hard upper bound on the efficiency of practical energy conversion systems in unbounded flow, due to simplifications in the theory. However, it seems clear that further improvement can be sought beyond the current 46% efficiency of horizontal-axis hydrokinetic turbines. One idea to improve the efficiency is to use a duct (also known as a shroud or diffuser) to accelerate the fluid flow passing the turbine blades, thus improving the efficiency of the device. Some researchers have incorporated the duct effect into the 1D momentum theory (or its extended version), with some of these models predicting an efficiency well above the Betz limit <cit.>.
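For context, the 59.3% figure follows from the classical 1D momentum (actuator-disc) argument, in which the power coefficient as a function of the axial induction factor a is C_P = 4a(1-a)^2, maximized at a = 1/3. A short numerical sketch of this standard textbook result (not part of this paper's CFD methodology):

import numpy as np

# Classical actuator-disc result: C_P(a) = 4 a (1 - a)^2, with a the axial induction factor.
a = np.linspace(0.0, 0.5, 501)
cp = 4.0 * a * (1.0 - a) ** 2

a_opt = a[np.argmax(cp)]
cp_max = cp.max()
print(f"optimal induction factor a = {a_opt:.3f}, C_P max = {cp_max:.4f}")  # about 1/3 and 16/27 = 0.5926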
In spite of the insight into the duct effects, the physics may be oversimplified (sometimes misrepresented) meaning the efficiency predicted by these models may not be achievable in practice. The complex turbine-duct interaction involves flow features, such as flow separation, that cannot be captured by the analytical models. These phenomena can significantly affect the mass flow through the duct and the system efficiency <cit.>. To account for complex turbine-duct interaction and more reliably evaluate ducted turbine performance, we must resort to computational fluid dynamics (CFD) simulations or experiments. Table <ref> lists research efforts that used such approaches. Most studies were conducted for wind turbines, but a few were specific to hydrokinetic turbines. As shown in the table, various duct shapes have been proposed and tested for wind turbines, with reported efficiencies ranging from 0.41 to 0.85, surpassing the Betz’s limit (0.59). However, these results must be interpreted in the context of the limitations of the analyses. The CFD models used for evaluations include steady Reynolds-averaged Navier–Stokes (RANS) and unsteady RANS (URANS) solvers, where the turbine blades are modeled using a blade-resolved or a body-force approach. Within these approaches, steady RANS may have difficulties with flow separation along the duct surface in many designs, as well as in capturing transient wake flow patterns and turbine-duct interactions <cit.>. In the body-force approach the actuator disk model measures the extracted power using the product of velocity and thrust at the blade section, which usually results in an over-prediction of the efficiency because only a fraction of the computed power can be converted to the actual (rotational) power. In addition, the definitions of efficiency for ducted turbines are inconsistent in the studies listed in Table <ref>. The inflow power is defined with respect to either the blade swept area or the maximum projection area of the device (or duct). The efficiency based on the blade swept area, C_P,A_b, can be significantly higher than that based on the device area, C_P,A_max, but C_P,A_b does not provide a fair comparison with the efficiency of an unducted turbine as explained later. In evaluating and comparing the performance of ducted turbine designs, we must use the same metric, so we convert all power coefficients using the maximum area as the reference—C_P,A_max in Table <ref>. Considering the above two points, there are significant caveats in the results listed in Table <ref>. The highest fidelity simulation in Table <ref> (the blade-resolved URANS approach performed by <cit.>) predicts a C_P,A_max of 45%, which does not show an advantage of using the duct. The experimental evaluations of <cit.> may be more credible, but they also suffer from uncertainties, such as measurement errors and proximity of the device to the floor, which causes blockage that affects the measured efficiency <cit.>. Finally, most of the results obtained for wind turbines do not translate to hydrokinetic turbines. For example, to sustain higher loads in water, a wind turbine design with a large flange may not be feasible for a hydrokinetic turbine. Additionally, a hydrokinetic turbine blade requires a lower aspect ratio and larger sectional thickness to sustain the higher loads in water <cit.>. Another limitation of the research listed in Table <ref> is that the duct designs were not optimized. 
Instead, these designs were generated by human intuition or a grid search in a low-dimensional design space. The only exception is the design by <cit.>, who performed gradient-based optimization to develop a ducted wind turbine design. However, they used low-fidelity blade element theory to model the ducted turbine performance without adequately taking turbine-duct interaction into account. An optimal ducted turbine requires numerical optimization that simultaneously considers the blade and duct geometry with detailed shape parametrization. This is challenging because of the high computational cost of CFD evaluations and the high-dimensional design space. Another challenge is selecting the appropriate CFD model in the optimization process. As mentioned earlier, steady RANS is relatively inexpensive but may lead to inaccuracies in predicting the performance of designs where boundary layer separation occurs in the duct. In this paper, we perform CFD-based design optimization of a ducted hydrokinetic turbine. We use 21 parameters to control the shape of the duct (length and multiple sectional radii) and turbine blades (pitch and spanwise twist/chord distributions). We perform gradient-based optimization with gradients computed by a discrete adjoint method <cit.> coupled with steady RANS blade-resolved simulations. This effort builds on previous design optimizations of unducted wind turbine <cit.>. Because of the potential inaccuracies of steady RANS for separated flow, the success of this approach hinges on whether our gradient-based optimization induces a design free of flow separation. This is, fortunately, indeed the case since designs with flow separation tend to be associated with lower efficiency, even when evaluated by the less accurate RANS solver (given enough grid resolution). Our optimized design is re-evaluated by a higher-fidelity URANS blade-resolved solver. The benefits of the duct are demonstrated upon a comparison with the unducted Bahaj turbine, optimized unducted turbine, and our baseline ducted turbines. We follow up with discussions to provide insights on the optimized geometry and the associated flow mechanisms that contribute to improved energy extraction efficiency. The paper is organized as follows. In Section <ref>, we discuss the problem statement, including the description of the physical problem of turbine energy extraction and the setup of the optimization problem. Section <ref> introduces methodology in CFD simulations and optimization process. The results of optimization and higher-fidelity re-evaluation are described in Section <ref>, where we discuss the optimized duct geometry and flow mechanisms. Finally, conclusions are provided in Section <ref>. The computations involved in this work are implemented in open-source codes OpenFOAM <cit.> and DAFoam <cit.>. § PROBLEM STATEMENT §.§ Physical Problem Consider a turbine operating in a uniform inflow U_∞ in an unbounded fluid domain, as shown in Figure <ref>. The turbine converts inflow power (energy) into rotational power, where the effectiveness of this conversion is characterized by the power coefficient, C_P = P/1/2ρ A U_∞^3, where P is the generated rotational power that is given by P=QΩ (torque Q times the rotational speed of the blades Ω). ρ is the fluid density and A is some reference area. For an unducted turbine, A can be chosen as either the blade swept area A_b or the maximum projection area of the device A_max, which are identical. 
For a ducted turbine, however, using the two values A_max and A_b as A leads to different C_P's, since A_max is greater than A_b. We argue that A_max is the appropriate choice for ducted turbines in order to have a fair comparison of their performance with unducted turbines. The reason is that, with A=A_max, we are essentially comparing the generated power when the inflow power is the same for unducted and ducted turbines. On the other hand, using A_b for ducted turbines results in a larger value of C_P (even above 1) that can be misleading when compared to the efficiency of unducted turbines (see examples in <cit.>). For the above reasons, we adopt A=A_max for the evaluation of C_P in this work, and we will hereafter simply write it as A, referring to the maximum device area for both ducted and unducted turbines. Given a turbine, its efficiency C_P is in general a function of two other non-dimensional parameters, namely the tip-speed ratio (λ) and Reynolds number Re (based on the diameter of the device), defined as λ = Ω R/U_∞, Re=U_∞D_max/ν, where R is the turbine blade radius, ν is the fluid kinematic viscosity. In this paper, we fix U_∞=1.4m/s, ν=1×10^-6m^2/s, A=1.853m^2, and D_max=√((4/π)A)=1.536m, leading to Re≈2×10^6 for both ducted and unducted turbines (see Figure <ref>). For Reynolds number of 𝒪(10^6), the flow is considered fully turbulent, and the dependence of C_P on Re in this range is expected to be relatively weak. We will evaluate C_P for a broad range of λ at this Reynolds number for both ducted and unducted turbines. §.§ Optimization Problem Our objective is to optimize a ducted turbine geometry to maximize its hydrodynamic efficiency C_P at given U_∞(=1.4m/s) and Ω(=17.5rad/s). This design process is applied to both ducted and unducted turbines for a fair performance comparison. In the following, we present the mathematical optimization problem for a ducted turbine, which is the more sophisticated case. The optimization for an unducted turbine can be conducted similarly but with a simpler setup that does not include the duct parameters and the tip clearance constraint. The constrained optimization problem for a ducted turbine can be stated as maximize C_P by varying -30^∘≤{θ_i }_i=1^8≤ 30^∘, 0.8 ≤{b_i/b^B_i}_i=1^8≤ 1.2, 0 ≤ d_3 ≤{d_j}_j=1,2,4≤ D_exit, 0.3 ≤l/l^B≤ 1.5, subject to 2R/d_3 = 0.91, where {θ_i }_i=1^8 are the twist angles at 8 sections of the blade, controlling the blade root pitch and twist profile as shown in Figure <ref>. The cross-sectional areas of 8 sections of the blade, normalized with respect to their baseline, are denoted as { b_i/b_i^B }_i=1^8. Modifying these variables leads to a change in the size of the blade section, but the sectional (foil) shape remains unchanged. Thus, b_i/b^B_i gives the scaling factor for each section, as shown in Figure <ref>. The variables {d_j}_j=1^4 are the diameters at 4 sections along the duct as illustrated in Figure <ref>, with d_3 being the throat section, where the rotor is installed. This section is located at 26.4% of the duct length, following a baseline design of the duct <cit.>. The bound (<ref>) ensures that d_3 is always located at the throat in the optimization and that all duct diameters do not exceed the exit diameter D_exit=D_max=√((4/π)A), as depicted in Figure <ref>. A large exit area reduces flow velocity at the exit through the streamtube expansion, which in turn increases the flow momentum extraction at the blades. 
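As a quick numerical check of the operating condition and the exit-diameter bound defined above, the short sketch below evaluates D_max (= D_exit), the Reynolds number, and the tip-speed ratio from the quoted values; the 0.44 m Bahaj rotor radius of the baseline designs introduced in the next section is included purely for illustration.

```python
import numpy as np

U_inf = 1.4      # m/s, inflow speed
nu = 1.0e-6      # m^2/s, kinematic viscosity of water
A = 1.853        # m^2, maximum projection (device) area
Omega = 17.5     # rad/s, design rotational speed
R_bahaj = 0.44   # m, Bahaj rotor radius of the baseline designs (illustrative)

D_exit = D_max = np.sqrt(4.0 * A / np.pi)   # 1.536 m, also the bound on the duct diameters
Re = U_inf * D_max / nu                     # about 2.15e6, consistent with Re ~ 2e6 quoted above
lam = Omega * R_bahaj / U_inf               # 5.5, the tip-speed ratio of the baseline rotor
print(f"D_max = D_exit = {D_max:.3f} m, Re = {Re:.2e}, lambda = {lam:.2f}")
```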
Our setup prevents the optimizer from unrealistically increasing the size of the exit section and ensures a fair comparison between different designs, as discussed in Sec. <ref>. The variable l, representing the duct length (with l^B the baseline value), governs the scaling of the duct with respect to the fixed point at the throat (Figure <ref>). The constraint (<ref>) keeps the tip gap ratio as a constant of 9% throughout the optimization process, consistent with the baseline design. It leads to the blade radius R changing with the variation of throat diameter d_3. The design variable bounds are set up in a trial-and-error manner to make sure that the optimized variables do not reach the bounds on the optimized design. The full turbine geometry morphs smoothly throughout the optimization. This continuous morphing is ensured through the Free-Form Deformation method which will be discussed later in Section <ref>. Since we use a gradient-based optimization method (<ref>), local optima potentially exist in the design space. Hence, we adopt the multistart strategy, using two different baseline designs (hereafter named baseline design A and B) with drastically different performances. Both baseline designs adopt the same thin-wall curved-shaped duct as in <cit.>. The two baseline designs differ in the blade geometry (see Figure <ref>). Design A adopts the original Bahaj model with a 0.44m radius and a 20^∘ root pitch. Design B adopts the Bahaj model with a 0.44m radius, a 45^∘ root pitch, and a modified twist profile as in <cit.>. This modified twist profile is obtained by matching the local angle of attack of each blade section in the duct to that of the unducted turbine counterpart through an iterative procedure. When evaluating with the unsteady RANS solver, baseline designs A and B yield C_P=28% and C_P=45%, respectively, at λ=5.5. The hub is not included in the model to simplify the geometry parametrization using the Free-Form Deformation method. The optimization problem is summarized in Table <ref>. § METHODOLOGY This section describes the methodology in optimization and CFD evaluations. An overall flowchart is shown in Figure <ref>. The whole process involves the optimization and the re-evaluation of the optimized design using higher-fidelity simulations. The optimization and high-fidelity re-evaluation use DAFoam <cit.> and OpenFOAM <cit.>, respectively. In what follows, we will describe each component of the methodology in subsections, as outlined in Figure <ref>. To provide a self-contained but easy-to-follow paper for readers, we put additional details in the appendix and keep the main paper as concise as possible. We start from CFD models involved in both the optimization and re-evaluations and then follow up with other components in the optimization framework. §.§ CFD models The governing equations for the flow field around the turbine (Figure <ref>) are the Navier–Stokes equations ∇·U = 0 , ∂U/∂t+∇·(UU) = -1/ρ∇p + ∇·(ν∇U), where U is the flow velocity and p is the pressure. We apply velocity inlet and pressure outlet boundary conditions, and no-slip boundary conditions on the blade and duct surfaces. We consider the Reynolds-averaged Navier–Stokes (RANS) equations with grids only resolving the averaged components of the flow. One can apply the Reynolds decomposition U = U+u' (and the same for pressure) to Eq. (<ref>). U denotes the averaged velocity in a time window or by an ensemble and u' represents the zero-mean turbulent fluctuation. 
This leads to the unsteady RANS equation: ∇·U = 0, ∂U/∂ t+∇· (U U) = -1/ρ∇p + ∇·(ν∇U)- ∇·u'u', where u'u' is the Reynolds stresses that need to be approximated by turbulence models. In this work, we use the k-ω SST turbulence model together with the automatic near-wall treatment (see <ref> for details on both). The rotating blades are handled in simulations by two blade-resolved approaches: the Multiple Reference Frames (MRF) and the rotating-sliding mesh approach (RS). The former is used for steady RANS solutions (i.e., a solution with time-derivative terms set to zero) with multiple different reference frames, while the latter is used directly in the unsteady solution of Eq. (<ref>). The MRF is used in optimization. The RS is defined as the higher-fidelity approach and used for optimized result re-evaluations (see Figure <ref>). We include a detailed introduction of the two approaches in the following sections. §.§ Multiple Reference Frames (MRF) Method The MRF method is an efficient method for modeling turbomachinery flow. In the MRF method, the computational mesh stays stationary, and the rotational effect is handled through a rotational reference frame. In particular, the fluid domain is separated into two regions: a rotational region surrounding the turbine blades with a blade-fixed reference frame, and the remaining stationary region with an inertial reference frame, as shown in Figure <ref>. In both regions, the flow is considered steady with respect to the corresponding reference frame, so only steady RANS equations need to be solved. To be more specific, in the rotational region, the blades are stationary and experience a steady inflow. The flow velocity in the blade-fixed reference frame can be expressed by U_R=U-Ω×r, where U is the velocity in the inertial reference frame, Ω is the rotation vector of the turbine blades, and r is the distance vector from the axis of rotation to the point of interest (position vector). The steady RANS equations in the rotational region need to be established with the blade-fixed reference frame, which requires further formulations of both Eq. (<ref>) and the k-ω SST model equations. Although the implementation in OpenFOAM/DAFoam solves this complete set of equations, here we only present the rotational-region formulation regarding Eq. (<ref>) to provide the key insights of the method. Combining Eq. (<ref>) and Eq. (<ref>), we obtain (see <ref> for a detailed derivation) ∇·U = 0, ∂U_R/∂ t + ∇· (U_RU) = -1/ρ∇ p + ∇·(ν∇U) - Ω×U. The steady equations solved in the rotational region are Eq. (<ref>), with ∂U_R/∂ t=0. Therefore, in the MRF method, steady versions of Eq. (<ref>) and Eq. (<ref>) are solved in stationary and rotating regions. Solving these steady equations can be done using the SIMPLE algorithm <cit.> implemented as simpleFoam in OpenFOAM. Although the RANS-MRF method provides an efficient numerical solution for the turbine problem (i.e., only two steady RANS equations need to be solved), its accuracy can be compromised because of two issues. First, the rotational region and stationary region are usually chosen in a subjective manner. There is no guarantee that the rotational region covers all the flow features resulting from the rotating and discrete blades. Any mismatch between the choice of the region and the nature of the flow can lead to errors at the interface and thus in the final results. Secondly, for many designs, the flow can be unsteady in nature, especially when flow separation occurs from the duct and/or blade surfaces. 
Assuming a steady state solution, as in the RANS-MRF method, can lead to significant errors for this type of unsteady flow. As a result, the RANS-MRF method is considered a lower-fidelity model in the context of this paper. §.§ Rotating-sliding mesh approach The rotating-sliding mesh (RS) allows for the direct simulation of the unsteady RANS (URANS) equations (Eq. (<ref>)) with mesh domains that exhibit relative motion. This is needed for modeling rotating geometries. The underlying idea of this method is to allow a region of the computational mesh to rotate with the turbine blades, as illustrated in Figure <ref>. The rotating mesh also creates a technical problem that the mesh at the rotating/non-rotating interface becomes non-conformal, i.e., the nodes at two sides of the interface do not match up. The data transfer across the interface, therefore, needs to be handled by a special interpolation method involving a “supermesh” <cit.>, as described in <ref>. Coupling URANS with the RS is, in principle, much more accurate than the RANS-MRF method as it captures the unsteady nature of the flow. This can be critical in simulating flow around a ducted turbine because the possible flow separation from the duct surface can be captured better. However, in the URANS-RS, a small time step is needed to resolve the blade rotation, so a long simulation time is required for the solution to reach a quasi-steady state. The computational cost of the URANS-RS is hence much higher than that of the RANS-MRF method. In this paper, the URANS-RS is considered a higher-fidelity method and is only used in re-evaluating the performance of optimized designs. §.§ Mesh Configuration The unstructured computational mesh is generated using the OpenFOAM meshing tool snappyHexmesh. A mesh overview is shown in Figure <ref>. The size of the computational domain is 10.4D×10.4D×23.7L, where D=√((4/π)A)=1.536m is the maximum diameter of the duct, and L=2.107m is the length of the duct that is taken from the baseline design. This domain size is sufficient to avoid a blockage effect upon tests. To model flow near the turbine and immediately downstream with higher accuracy, we use a refinement region of 2D×2D×2.5L around the turbine. Within the refinement region, we first apply a level-4 refinement, i.e., each cell of the original mesh is divided into (2)^4 cells in each direction. Then we add around the duct and blade surfaces prism layers that contain further continuously refined cells toward the surfaces (2 and 3 layers are used for the former and latter, both with the expansion ratio of 1.1). The prism layer provides better resolution for boundary layers and is critical for us to obtain well-convergent results in the grid sensitivity study shown later in Section <ref>. In this work, we use three grid resolutions, M0, M1, and M2, with an increasing number of cells, i.e., further refinement from M0 to M2. The coarsest grid M0 is used in the RANS-MRF in the optimization process and has 2-3 million cells (the exact number depends on turbine geometry and re-mesh procedure in optimization). In M1 and M2, the full mesh region (including background mesh and refinement region) is uniformly refined in each direction by the factors of about 1.3 and 1.6, respectively, leading to 4-5 million cells for M1 and 7-8 million cells for M2. All grids M0, M1, and M2 are used in the URANS-RS re-evaluation, including the grid sensitivity study. 
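Before turning to the optimization framework, the rotating-frame bookkeeping of the RANS-MRF formulation above can be illustrated with a short sketch. The rotation axis is assumed to lie along the inflow direction, and the sample positions and velocity are hypothetical values rather than simulation data; the point is simply the evaluation of U_R = U - Ω×r and the frame source term -Ω×U in the rotational region.

```python
import numpy as np

# Illustrative evaluation of the rotating-frame quantities used by the MRF formulation:
# U_R = U - Omega x r and the momentum source term -Omega x U. The rotation axis is
# taken along x (the inflow direction); positions and velocity are made up for the sketch.
omega = np.array([17.5, 0.0, 0.0])            # rad/s, rotation vector of the blades
r = np.array([[0.0, 0.30, 0.00],              # m, sample positions in the rotational region
              [0.0, 0.00, 0.45],
              [0.0, 0.35, 0.35]])
u = np.array([[1.4, 0.2, -0.1]] * 3)          # m/s, a sample absolute velocity

u_rel = u - np.cross(omega, r)                # velocity seen in the blade-fixed frame
source = -np.cross(omega, u)                  # extra source term in the rotational region
blade_speed = np.linalg.norm(np.cross(omega, r), axis=1)
print(u_rel)
print(source)                                 # vanishes only if U is purely axial
print(blade_speed)                            # |Omega x r|, about 7.7 m/s at a 0.44 m tip radius
```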
§.§ Optimization We use Sequential Quadratic Programming (SQP) <cit.>, implemented in SNOPT <cit.>, to solve the optimization problem. One of the challenging tasks is to obtain the gradient of the objective function ∇ C_P with respect to all design variables. Sections <ref> and <ref> discuss respectively two components (see Figure <ref>) involved in the gradient computation: (1) geometry parametrization and mesh deformation; (2) adjoint method. The gradient information is then used in the SQP algorithm to obtain the next design points. We iterate the procedures until satisfying convergence criteria or further design improvements are not achievable. §.§.§ Geometry parametrization via FFD method We need to parametrize the geometry to deform the surface mesh. In this work, we use the Free-Form Deformation (FFD) method <cit.> for the geometry parametrization, implemented in the package pyGeo <cit.>. The principle of the FFD method is to enclose the surface mesh nodes in an FFD box with a specified number of control points (also known as FFD points). The FFD points are analytically connected to the enclosed surface nodes using tri-variate B-splines. More details are presented in <ref>. Controlling the FFD points enables smooth deformation of the enclosed geometry. Figure <ref> shows two examples of geometry deformation controlled by the FFD method in two and three dimensions. Figure <ref> shows an overview of the FFD setup for our ducted turbine. Two levels of FFD boxes are used, with one parent box (black) enclosing all duct and blade geometries and two children boxes (red and blue) enclosing the duct and blades. The 21 design variables in Section <ref> can now be represented by 21 degrees of freedom (DoF) associated with the FFD points. The child FFD box for an individual blade has 32 FFD points placed on 8 sections. Twist variables rotate the four FFD points about the reference axis located at the quarter chord line. Scale variables scale the cross-section by moving the four control points to expand or contract simultaneously. The FFD points across different blades are linked to ensure the same deformation for all blades. The child FFD box for the duct contains 112 FFD points placed on seven sections, but overall only one variable is defined to control all FFD point to change the duct length l. When changing the duct length, all duct FFD points move in the axial direction with perturbations proportional to their distances from the throat. This movement is to ensure that the throat is consistently located at 26.4% of the overall duct length. Note that seven sections are not necessary. This choice is mainly for convenience during setup. The parent FFD box handles the constraint  (<ref>) on the tip-gap ratio and the condition D_exit=√((4/π)A)=1.536m. Twenty-eight FFD points are placed on seven sections in the parent box, in which the FFD points for the last three sections are closely packed horizontally and remain stationary throughout the optimization, such that D_exit=√((4/π)A) is guaranteed. The 16 FFD points on the first 4 sections from the duct inlet are used to control the duct diameters, i.e., design variables d_i. As these FFD points move radially, the child FFD boxes (enclosed in the parent FFD box) deform and move the embedded surface geometry accordingly. Therefore, the constraint (<ref>) is automatically satisfied since the blade expands/contracts proportionally to the duct throat. 
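A minimal sketch of how the twist and scale variables act on the four FFD points of a single blade section is given below. The section coordinates, chord, and pivot location are hypothetical, and the scale variable is applied directly to the point coordinates for simplicity; in the actual pyGeo setup the linked FFD points of all blades are deformed consistently, as described above.

```python
import numpy as np

def twist_section(pts, theta_deg, pivot):
    # Rotate a section's FFD points by the twist angle about the reference axis at the
    # quarter chord (here reduced to a 2-D rotation in the section plane).
    th = np.deg2rad(theta_deg)
    rot = np.array([[np.cos(th), -np.sin(th)],
                    [np.sin(th),  np.cos(th)]])
    return (pts - pivot) @ rot.T + pivot

def scale_section(pts, factor, centre):
    # Expand/contract the section by moving its FFD points away from or towards the
    # section centre (a simplified stand-in for the b_i/b_i^B scale variable).
    return centre + factor * (pts - centre)

# Four FFD points around a hypothetical 0.10 m-chord section; pivot at the quarter chord.
section = np.array([[0.00, -0.01], [0.10, -0.01], [0.10, 0.01], [0.00, 0.01]])
pivot = np.array([0.025, 0.0])
deformed = scale_section(twist_section(section, 15.0, pivot), 1.1, section.mean(axis=0))
print(deformed)
```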
This complex FFD setup for ducted turbines, we believe, is novel in the engineering application of the FFD method. §.§.§ Adjoint method for derivative computation To compute the derivative of an objective function (in this case, the power coefficient C_P) with respect to design variables, the adjoint method <cit.> is used. In order to clearly explain the adjoint method, we first introduce notations as follows: Let x≡{{θ_i, b_i }_i=1^8, {d_j}_j=1^4, l}∈ℝ^N_x with N_x=21 as the design variables. Let s∈ℝ^M be the state variables in the solution of the RANS-MRF equation. Here M ∼𝒪(10^7) includes three velocities and pressure at each cell in the computational grid. Our goal is to compute dC_P/dx∈ℝ^21. If a finite-difference method is used to compute the derivative, one needs at least 22 CFD simulations for derivative computation, even with the lowest-order approximation. This is computationally prohibitive for our application. For the adjoint method to compute dC_P/dx, the first step is to write the function C_P(x,s(x)), and express its total derivative with respect to x as dC_P/dx_1× N_x=∂C_P/∂x_1× N_x+∂C_P/∂s_1× Mds/dx_M × N_x, where ∂ C_P/∂x should be considered as the change of power coefficient C_P as the design variables (i.e. geometry) are varied, with flow solution s remaining unchanged. ∂ C_P/∂s is the change of C_P as the flow solution s changes with a fixed turbine geometry. These partial derivatives are relatively easy to compute, with more details presented in <ref>. The term that is difficult to compute in Eq. (<ref>) is ds/dx. To compute it, one needs to further involve the RANS-MRF state equations in terms of their discretized residual form R(x,s(x))=0. Here R(x,s(x)) ∈ℝ^M considering the same number of equations as the number of unknowns in s. Since R(x,s(x)) should remain zero with a change of x (if the flow solution is correctly obtained), we have dR/dx_M× N_x=0⇒∂R/∂s_M× Mds/dx_M× N_x=-∂R/∂x_M× N_x. Direct solution of Eq. (<ref>) gives ds/dx = -[∂R/∂s]^-1_M× M∂R/∂x_M× N_x. It is worthwhile to discuss the computational cost associated with Eq. (<ref>) at this point. The matrix multiplication in Eq. (<ref>) leads to a computational complexity of 𝒪(M^2 N_x) that is very expensive since M∼𝒪(10^7) and N_x is also large. This has to be added by the cost to invert a M× M matrix, which is, in general, more expensive. Even if one uses some iterative solver for linear systems to solve Eq. (<ref>), the procedure needs to be repeated for N_x times since ds/dx (as well as the RHS) has N_x columns. The computation is, therefore, also very expensive. On the other hand, the computational cost can be significantly reduced by simply substituting Eq. (<ref>) to Eq. (<ref>) and considering a re-grouping of the multiplications: dC_P/dx_1 × N_x=∂C_P/∂x_1 × N_x-(∂C_P/∂s_1 × M[∂R/∂s]^-1_M × M)∂R/∂x_M × N_x. Instead of computing Eq. (<ref>), we first compute the multiplication grouped in the parenthesis in Eq. (<ref>). This computation can be done by solving the so-called adjoint equation (the adjoint is equivalent to the transpose of a real matrix in our case) [∂R/∂s]^T_M× Mψ_M× 1=[∂C_P/∂s]^T_M× 1 whose solution transpose provides ψ^T=∂C_P/∂s[∂R/∂s]^-1 as the parenthesis term in Eq. (<ref>). The solution of Eq. (<ref>) involves solving a linear system only once, instead of N_x times as needed for Eq. (<ref>), and hence is much less expensive (also compared to the direct computation of Eq. (<ref>)). The computational cost to solve Eq. (<ref>) is generally similar to the RANS-MRF computation. 
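The single-solve property described above is easy to demonstrate on a toy problem. In the sketch below, a small linear system stands in for the discretized RANS-MRF residuals R(x,s)=0 and a simple quadratic function stands in for C_P (everything here is schematic); the gradient obtained from one adjoint solve matches a finite-difference reference, and its cost does not grow with the number of design variables.

```python
import numpy as np

# Toy residual R(x, s) = A(x) s - b(x) = 0 with M "state" unknowns and N_x = 2 design
# variables, standing in for the discretized flow equations; f(x, s) plays the role of C_P.
M = 50
rng = np.random.default_rng(0)
A0 = rng.standard_normal((M, M)) + 2 * M * np.eye(M)   # well-conditioned base matrix
A1 = rng.standard_normal((M, M))
b0 = rng.standard_normal(M)

A = lambda x: A0 + x[0] * A1                  # dR/ds depends on x[0]
b = lambda x: b0 + x[1] * np.ones(M)          # right-hand side depends on x[1]
solve_state = lambda x: np.linalg.solve(A(x), b(x))
f = lambda x, s: s @ s + x[0] ** 2            # objective (the "C_P" of the toy problem)

def gradient_adjoint(x):
    s = solve_state(x)
    dfds = 2.0 * s                                   # partial f / partial s
    dfdx = np.array([2.0 * x[0], 0.0])               # partial f / partial x
    dRdx = np.column_stack([A1 @ s, -np.ones(M)])    # partial R / partial x (M x N_x)
    psi = np.linalg.solve(A(x).T, dfds)              # ONE adjoint (transposed) solve
    return dfdx - psi @ dRdx                         # total derivative df/dx

x = np.array([0.3, -0.7])
g_adj = gradient_adjoint(x)
eps = 1e-6
g_fd = np.array([(f(x + eps * e, solve_state(x + eps * e)) -
                  f(x - eps * e, solve_state(x - eps * e))) / (2 * eps)
                 for e in np.eye(2)])
print(g_adj, g_fd)   # the two gradients agree to finite-difference accuracy
```

In the actual framework, the analogue of this single adjoint solve replaces N_x flow perturbations by one linear solve of the same size as the flow problem, which is consistent with the cost estimate given above.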
Therefore, in each iteration of the optimization, the computational cost is in the same order as one RANS-MRF solution. The only remaining component is the calculation of partial derivatives ∂R/∂s in Eq. (<ref>), which can be found in <ref> with other derivatives mentioned above. §.§.§ Volume mesh deformation When marching to the next design point, the turbine geometry is deformed. The entire computational volume mesh is deformed accordingly. The volume mesh deformation is computed based on the analytic inverse-distance weighting method <cit.>, implemented in the IDWarp package <cit.>. Given a 2D surface, for example a blade surface, with N surface mesh nodes, the geometry deformation leads to the movement of each node. We assign two quantities (M_i, b_i) for each node with i=1,2,...,N, where b_i is the translation distance of the node and M_i is the rotation matrix such that n_i^new=M_i n_i^old with n_i^new and n_i^old the normal vectors at the node. In particular, both n_i's are computed by a weighted average of the normal vectors for all surrounding cell faces of the node. After (M_i, b_i) are obtained for i=1,2,...,N, we can compute the deformation of any volume mesh by summing the contribution from each surface node, i.e., Δr=∑_i=1^N w_i (M_i r + b_i - r) with r any volume node and Δr its movement. The weighting factor w_i has the empirical form <cit.> that grows in a polynomial form with the inverse distance between the volume and surface nodes. Figure <ref> shows the deformation of the computational mesh during a ducted turbine optimization as an example. § RESULTS AND DISCUSSION In this section, we present the results of optimization (<ref>), followed by re-evaluation using the higher-fidelity URANS solver, as well as discussions on the optimized geometry and flow mechanism. Before showing the optimization results, we present two additional studies conducted in our work. The first is the validation of the RANS-MRF and the URANS-RS solvers with experimental data. Since we are not aware of systematic measurements of the performance of ducted hydrokinetic turbines, we use the experimental results of the unducted Bahaj turbine for validation. Figure <ref> shows the power coefficient C_P and thrust coefficient C_T (the axial force on the turbine normalized by the momentum of the inflow) for the Bahaj turbine at a range of λ, obtained from the RANS-MRF and the URANS-RS, in comparison with the experimental results <cit.>, all at experimental Reynolds number Re=1×10^6. The RANS-MRF solver is run with a second-order numerical scheme for the convection term in the RANS equations, which will be changed in the optimization process as described later in detail. From Figure <ref>, it is clear that for the unducted Bahaj turbine, both solvers predict similar results mostly consistent with the experimental data. The URANS solver seems more accurate in evaluating C_T for all λ and C_P at lower λ (e.g., the value on which our optimization is based). The second is a grid-search study with 5 design parameters using the RANS-MRF and coarse-grid (even coarser than M0) flow solvers that we conducted before the gradient-based optimization. This search, as detailed in <ref>, does not provide a successful design of the ducted hydrokinetic turbine with improved efficiency (compared to the unducted Bahaj model). This failure is very likely due to the low-dimensional parameter space, which is insufficient to explore effective designs. 
It also implies that designs with more parameters using a gradient-based method, as we present below, are necessary for designing complex geometries such as ducted turbines. §.§ Optimization and Re-evaluation We solve the optimization problem (<ref>) using methods described in Section <ref>. In the RANS-MRF solver, a first-order numerical scheme is used for computing the convective term (i.e., to construct the flux in cell faces). We note that the first-order scheme is more dissipative than the normally-used second-order scheme, but the former is critical to obtain convergent flow solutions for many duct designs, especially those associated with flow separation. Specifically, upon extensive tests, we find that the second-order scheme shows fluctuating flow solutions in many cases, which in turn affects the accuracy of the adjoint method, preventing an accurate gradient computation. On the other hand, while the first-order scheme may provide less accurate solutions for cases with separated flow, the obtained C_P is usually low for these cases and the optimization leads to designs with no flow separation and with improved efficiency. In each RANS-MRF simulation, we consider the solution converged as the residuals stop dropping, which in general occurs when residuals of momentum equation reach 𝒪(10^-4∼ 10^-5). Figure <ref> shows the change of C_P as the optimization progresses, starting from both baseline designs A and B (hereafter optimizations A and B). We first notice that the two starting points yield C_P=25% and 41%, respectively, evaluated by the RANS-MRF. Both values are lower than the counterparts (28% and 45%) reported earlier from the URANS-RS. We see that optimization of both baseline designs leads to a fast increase of C_P at the beginning until both C_P values (almost) plateau. The small bumps on both curves in Figure <ref> correspond to the restarting/re-meshing procedure. This remeshing step is necessary because geometry deformations that are too large lead to mesh quality degradation and hence deteriorate the quality of the flow and adjoint solutions. A manual restarting/re-meshing procedure improves the optimization behavior, leading to additional increases of C_P. Both optimizations A and B are stopped when the C_P value plateaus even with further restarting/re-meshing. In practice, we find that this convergent situation corresponds to the SNOPT optimality metric <cit.> of approximately 10^-1.7, which is consistent with cases in an unducted wind turbine optimization <cit.>. For such optimality conditions, although ∇ C_P does not completely vanish, the benefit of further optimizing the turbine is compromised by the mesh deformation so that some practical optimal points are reached. The two optimized designs yield similar values of C_P, namely 0.4822 from A and 0.4782 from B achieved respectively at λ=6.39 and λ=6.18 (since the blade radius R is optimized, which affects λ). Since the RANS-MRF is the low-fidelity solver (due to issues mentioned in Section <ref> and the first-order convection scheme), we re-evaluate the optimized designs A and B using the high-fidelity URANS-RS. The obtained values from the URANS-RS on the M0 grid are added to Figure <ref>, which yield 54.6% and 52.9% for the optimized designs A and B. Here, the difference between two solvers for ducted turbines (due to the complexity of the flow) is remarkably larger than that for unducted turbines as shown in Figure <ref>. 
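The size of this solver-to-solver discrepancy can be quantified directly from the values quoted above; the short sketch below tabulates the RANS-MRF and URANS-RS (M0 grid) power coefficients for the four ducted designs.

```python
# C_P evaluated by the two solvers for the ducted designs quoted above (in percent).
cases = {"baseline A":  (25.0, 28.0),
         "baseline B":  (41.0, 45.0),
         "optimized A": (48.22, 54.6),
         "optimized B": (47.82, 52.9)}
for name, (cp_mrf, cp_urans) in cases.items():
    gap = cp_urans - cp_mrf
    print(f"{name:12s}  MRF {cp_mrf:5.1f}%  URANS {cp_urans:5.1f}%  "
          f"gap {gap:4.1f} pts ({100 * gap / cp_urans:4.1f}% of the URANS value)")
```

The gap amounts to several percentage points (roughly 9-12% of the URANS value) for every ducted design, whereas the two solvers were mostly consistent for the unducted Bahaj turbine in Figure <ref>.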
A grid sensitivity study is also conducted, which evaluates C_P for the optimized designs A and B using the URANS-RS on meshes M0, M1, and M2 with increasing resolution. In these URANS solutions, an adaptive time step is used (with an average time step of 1×10^-5 seconds, i.e., about 0.01^∘ turbine rotation for 1 time step) with total simulation times of 15 seconds to reach the quasi-steady state. With the NSF Stampede2 cluster, the simulations using the M2 grid take about 240 hours on 576 CPUs. The results are shown in Table <ref>, which indicates that C_P values vary only by 𝒪(1%) (in terms of the absolute value), i.e., they are not very sensitive to the large range of variation of grid resolutions. Based on results from M2, the two final designs yield similar C_P ≈ 54% that is much higher than 46% of the unducted Bahaj model (or standard unducted turbines). We further evaluate C_P of the two optimized designs at a range of λ using the URANS-RS with the M0 grid. The results are shown in Figure <ref> together with C_P of the unducted Bahaj model, as well as the optimized unducted turbine using the same setup. We note that the unducted turbine is optimized for fixed Ω = 21 rad/s, corresponding to λ=6. We see that the optimized ducted turbine designs not only work well for the designed value of Ω but also perform with high efficiency for a large range of λ. Moreover, the maximum C_P for each design is in fact not achieved at the designed λ (marked by stars in the figure) but at some larger value of λ. Considering Figure <ref>, the maximum C_P for the two designs are 56% and 54% achieved at λ=6.94 and 6.99, respectively. We finally examine the geometries of the optimized designs A and B. Figure <ref> shows the optimized duct shapes of the two designs, laid on top of the baseline duct design <cit.>. Overall, we see that the optimized designs have shorter duct lengths and enlarged throat area. The optimized design A has a duct with length of 0.861m (59.1% reduction compared to the baseline ducted turbine) and a throat radius of 0.559m (16.5% increase). The optimized design B has a duct with length of 1.350m (35.9% reduction compared to the baseline ducted turbine) and a throat radius of 0.554m (15.5% increase). The twist and chord length profiles of the optimized/baseline designs are shown in Figure <ref>. We see that the chord lengths of the optimized designs do not vary much from the baseline, but significant changes occur in the twist profile through the optimization. Moreover, starting from drastically different twist profiles in baseline designs A and B, the two optimized designs converge to very similar twist distributions, especially for r/R>0.35, where most torque is generated. Different geometries of the optimized designs A and B may indicate that they locate on two local optima in the design parameter space. Given the comparable performance despite the different duct lengths, we conclude that the blade twists and duct throat areas are the driving design parameters. §.§ Analysis of Flow Mechanism In this section, we analyze the flow fields of unducted turbines and baseline/optimized ducted turbines in order to understand the major flow mechanism leading to the improvement of performance. The optimized design A is used as an illustration. We first plot in Figure <ref> variations of some relevant performance metrics together with the variation of C_P (first row) in the first 22 iterations of the optimization process (before the first restarting/re-meshing). 
These metrics include the flow rate J passing the turbine blades and the thrust coefficient C_T=T/(0.5ρ U_∞^2 A) on the duct and blades, shown respectively in the second, third, and fourth rows of Figure <ref>. We also divide the 22 iterations of the optimization process into three stages I, II, and III, respectively: stages with the fast growth of C_P (marked by `+' in the figure), slow growth of C_P (∗), and plateau of C_P (×). In stage I, while the duct C_T remains almost unchanged, the blade C_T increases rapidly. This is the most favorable situation to improve C_P since clearly more and more loading from the total is distributed on the blades. In stage II, this favorable variation of C_T cannot be maintained (i.e., its potential has been exhausted in stage I), and the opposite trend is observed with increased duct C_T and decreased blade C_T. The further (slow) increase of C_P in stage II, therefore, must be associated with a different mechanism that is perhaps the more effective transition from blade loading to rotational motion (or torque). Both duct and blade C_T become unchanged in stage III as C_P plateaus. The overall increase of C_T in the whole process is 14% and the C_P increase is 87%. This is another favorable feature of the current optimization since it would be much more demanding for supporting structures with a much higher C_T. Finally, the flow rate J is highly correlated with C_P in the whole optimization process and increases constantly until its simultaneous plateau with C_P. The above analysis motivates us to study further the flow rate metric, which remains consistent with the trend of C_P in the optimization process. In Figure <ref>, we show flow visualizations for the optimized unducted turbine, baseline turbine A, and optimized turbine A, obtained in the quasi-steady solution from the URANS solver. To facilitate a fair comparison, we only show the streamlines in the flow tube that passes the turbine blades. Since the inflow velocity is fixed at 1.4m/s for all cases, the flow rate in the tube is proportional to the area of the tube at the inlet. From Figure <ref>, it is clear that the optimized design corresponds to the case with the largest flow tube inlet area. Physically, this indicates that a well-designed duct draws a larger volume of water (compared to unducted and baseline turbines) into the throat, which is accompanied by a higher flow rate (2.25m^3/s compared to 1.80m^3/s and 1.55m^3/s for the other two cases) across the blades. This metric of flow rate is the most effective indicator of the ducted turbine performance, instead of the flow speed at the throat. As an example, the baseline design A is associated with an accelerated flow speed at the throat but not improved efficiency. This analysis explains the enhanced performance of the optimized ducted turbine through the improved flow conditioning provided by the duct, confirming the long-existing hypothesis in the field of hydrokinetic turbines. § CONCLUSIONS In this paper, we conduct gradient-based design optimization of ducted hydrokinetic turbines using CFD and the adjoint method. Two baseline designs with drastically different performances are chosen as starting points of the optimization. The resultant designs for both cases yield similar performance with C_P≈ 54% when evaluated by the high-fidelity URANS solver. Both designs capture similar critical geometrical features in terms of the duct throat area and blade twist profile. 
This value of C_P is 8% higher than standard unducted turbines, including the Bahaj model. We further demonstrate that the optimized designs not only achieve high C_P at the design rotational speed, but also yield high performance over a wide range of rotating speeds and thus λ. Finally, we study the flow mechanism associated with performance improvement and show that C_P among different designs is correlated to the flow rates passing the turbine blades. The optimized design corresponds to the case with a maximum flow rate due to the suction of a well-designed duct. The current work demonstrates the great potential of gradient-based optimization with the adjoint method in designing geometrically complex renewable energy devices such as ducted turbines. Nevertheless, the current optimized design needs further modifications for model tests and real-world applications. Two major issues of the current design are: (1) it has a thin-wall-shaped duct that is difficult to manufacture; (2) it does not include a hub that is necessary for installation. Both issues result from the limitation of the FFD method for geometry parametrization that one needs to improve for better designs. We are now working on the Engineering Sketch Pad (ESP) <cit.> for geometry parametrization, which can, in principle, overcome the above two issues. With the ESP method replacing the FFD method, we expect our next-round design to result in some geometry that is ready for a model test in the towing tank at the University of Michigan. § ACKNOWLEDGEMENTS This work is a part of the Re-configurable Array of High-Efficiency Ducted Turbines for Hydrokinetic Energy Harvesting (RAFT) project, supported by the United States Department of Energy (DOE)-ARPA-E under SHARKS program award No. DE-AR0001438 (Program Director Dr. Mario Garcia-Sanz). We thank the ARPA-E staff, especially SHARKS program manager Dr. Mario Garcia-Sanz, for the financial support and helpful discussions and for challenging us to improve the quality of the work. We also thank the full RAFT team, especially Dr. M. Reza Amini, Dr. Kartik Praful Naik, and Mr. Boxi Jiang for many discussions (weekly within the University of Michigan and bi-weekly for the whole team). Help from Dr. Ping He on DAFoam is also greatly appreciated. This work utilized the Stampede2 high performance computing (HPC) system at the University of Texas at Austin as well as the Anvil HPC system at Purdue University through allocation TG-MCH220016. This computing allocation is supported by the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number #1548562 and also by the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296. § K-Ω SST TURBULENCE MODEL AND AUTOMATIC WALL TREATMENT The approximation of the Reynolds stress term in Eq. (<ref>) can be obtained by the Boussinesq hypothesis, which assumes that the Reynolds stress is related to an eddy-viscosity ν_t and the mean velocity gradients, namely (for incompressible flow) -u_i'u_j' = ν_t(∂U_i/∂ x_j+∂U_j/∂ x_i) - 2/3kδ_ij where k=1/2u_i'u_i' is the turbulent kinetic energy and δ_ij is the Kronecker delta function. To obtain k and ν_t in Eq. (<ref>), two-equation turbulence models based on k-ϵ model <cit.> or k-ω model <cit.> have been developed. 
Here, ϵ is the turbulence dissipation rate, and ω = ϵ/(k C_μ) with C_μ=0.09 is the so-called specific turbulence dissipation rate. These models include transport equations for k and ϵ (or ω), and ν_t is obtained as a function of the computed k and ϵ (or ω) values. It has been identified by <cit.> that the k-ϵ model is robust for regions far from the wall but not accurate when integrating down to the wall (some wall functions are necessary to simulate the viscous sublayer). On the other hand, the k-ω model offers a more accurate resolution of the viscous sublayer but is less accurate for the far field, e.g., it is over-sensitive to the freestream turbulent conditions <cit.>. On top of both models, the k-ω based Shear Stress Transport (SST) model <cit.> is proposed, which connects the near-wall region predicted by the k-ω model and far-field by the k-ϵ model using blending functions. In principle, the k-ω SST model captures the advantage of both models. In addition to the k-ω SST model, we apply an automatic wall treatment <cit.> for near-wall simulation. The automatic wall treatment offers a wall y^+-insensitive simulation, i.e., great flexibility in the first-cell size at the wall. The principle is to set up different wall boundary conditions for k and ω depending on the y^+ value. It has been verified <cit.> that the flow prediction in the near-wall region is insensitive and robust for y^+ from 𝒪(0.1) to 𝒪(100), and the automatic wall treatment has been practically applied for cases with the first cell up to the range in the logarithmic layer, i.e., y^+ < 300. § DERIVATION OF EQUATIONS IN THE ROTATING REFERENCE FRAME We start by stating transformations of velocity and acceleration in inertial and rotating reference frames U = U_R+ Ω×r dU/dt = [dU/dt]_R + dΩ/dt×r+2Ω×U_R+Ω×Ω×r, where variables with subscript R are evaluated in the rotating reference frame. Here we consider the case dΩ/dt=0 that is consistent with our application. Derivations of Eq. (<ref>) can be found in standard textbooks on dynamics, such as <cit.>. Substituting Eq. (<ref>) into Eq. (<ref>) and making use of the fact that divergence of a curl of a vector is zero, we obtain the continuity equation in rotating reference frame ∇·U_R = 0. In order to derive momentum equations in the rotating reference frame, we consider Eq. (<ref>) with material derivatives replacing the unsteady and convection terms, to which we can then substitute Eq. (<ref>) to obtain [dU/dt]_R + 2Ω×U_R + Ω×Ω×r = -1/ρ∇ p + ∇·(ν∇U) The viscous term ∇·(ν∇U) can be transformed to ∇·(ν∇U_R) using the fact that ∇·[ν∇(Ω×r)]=0. We then expand the material derivative in Eq. (<ref>) to obtain the momentum equations in the rotating reference frame: ∂U_R/dt+∇·(U_RU_R) = -1/ρ∇ p + ∇·(ν∇U_R) - 2Ω×U_R - Ω×Ω×r While Eq. (<ref>) can be considered the final equation in a rotating reference frame, it can be simplified further for easier implementation in the finite volume method. For this purpose, we re-arrange the convection term as ∇· (U_RU_R) = ∇· [U_R(U-Ω×r)] = ∇· (U_RU) - ∇·U_R(Ω×r) - U_R·∇(Ω×r) = ∇· (U_RU) - Ω×U_R where the following equality is used for the third line U_R·∇(Ω×r) = U_R_l∂/∂ x_lϵ_ijkΩ_j r_k = U_R_lϵ_ijkΩ_j ∂ r_k/∂ x_l = U_R_lϵ_ijkΩ_j δ_kl = ϵ_ijlΩ_j U_R_l = Ω×U_R. The final equations in the rotating reference frame therefore yield ∇·U = 0 ∂U_R/∂ t +∇· (U_RU) = -1/ρ∇ p + ∇·(ν∇U) - Ω×U, which is Eq. (<ref>) in the main paper. For a steady solution of Eq. 
(<ref>), we can consider U as the unknowns that are consistent with the outer stationary region and use one unified code for the solution with some minor modifications (in terms of forcing Ω×U and the U_R term leading to a correction in computing the cell face flux) for the rotating region. § DATA TRANSFER AT THE INTERFACE IN THE RS The interfaces involved in the turbine problem are shown in Figure <ref>. For cells on each side of the interface, the flux value on the cell face toward the interface needs to be constructed. This cannot be obtained from a standard interpolation method because the cells on two sides of the interface are non-conformal, i.e., with non-overlapping cell faces from the two sides. Given the donor and target meshes at the interface, our goal is to compute the flux value at the cell face of the target mesh. This requires reconstruction of the cells (as well as cell-centered properties) on the donor side into a “supermesh” <cit.> whose faces (at the interface) contain all nodes of the target mesh, so that the cell face flux on the target side can be computed by interpolation between the supermesh and target mesh. In general, the interpolation via supermesh involves a Galerkin projection method described in detail in <cit.>. Here we provide a simple example to illustrate the gist of the approach, as sketched in Figure <ref>. With donor mesh T_D and target mesh T_T that are non-conformal (Figure <ref>(a)), our goal is to construct the supermesh T_S that serves as a common ground for interpolation. The supermesh T_S has to satisfy the properties: * Nodes on T_S contains all the nodes on T_D and T_T (and intersections of their edges). * For every (face cell) element in T_S, the intersection of it with any element of T_D or T_T must either be zero or the whole element. The connection between T_D and T_S is shown in Figure <ref>(b), with the shaded area denoting cell faces of T_D. Since each cell face of T_D is now perfectly split into triangle cell faces of T_S, we can assign properties to the center of the volume supermesh (associated with triangle faces) according to the area ratio of the triangles. In other words, the property at T_D is distributed into fractions according to the area fraction of triangles in T_S. The properties at T_S can then be used for interpolation with that at the target cell T_T since each target cell face is also perfectly split into triangle faces of T_S (Figure <ref>(c)). Specifically, we interpolate between the volume cell of T_S and the corresponding volume cell at the T_T side to obtain the flux value (which is associated with the corresponding triangle area). These fluxes are then added to form the value for a single cell face of T_T. § ANALYTICAL CONNECTION BETWEEN FFD POINTS AND ENCLOSED GEOMETRY In this section, we explain how to obtain the analytical connection of FFD points to the enclosed geometry. While the FFD method is versatile in handling geometries in different dimensions, we focus on an application consistent with our work, which involves the FFD points creating a lattice box in ℝ^3 that encloses a three-dimensional (3D) geometry within the box. Essentially, the analytical connection is to establish a mapping ℝ^3→ℝ^3 for each point inside the box. 
The mapping can be established by making use of the tri-variate B-spline function: X(u,v,w) = ∑_i=0^N_u-1∑_j=0^N_v-1∑_k=0^N_w-1B_i,m_u(u)B_j,m_v(v)B_k,m_w(w)P_i,j,k, where X(u,v,w)∈ℝ^3 is the coordinate of the volume enclosed by FFD points (and our geometry of interest is a part of this volume), parameterized by u, v, and w (all ∈ [0,1]). The indexes i, j, and k loop in three directions of the lattice in ℝ^3, N_u, N_v, and N_w are the number of FFD points in each direction of the lattice, and P_i,j,k∈ℝ^3 are coordinates of the lattice FFD points. The indexes, m_u, m_v and m_w are prescribed degrees of the B-spline basis functions in three directions. The B-spline basis function is defined recursively by (taking the i-direction as an example) B_i,0(x) = 1 if t_i ≤ x <t_i+1 0 otherwise, B_i,k(x) = x-t_i/t_i+k-t_iB_i,k-1(x) + t_i+k+1-x/t_i+k+1-t_i+1B_i+1,k-1(x), where t_i is the so-called open knot vector that is determined by choice of N_u and m_u <cit.>. With Eq. (<ref>), the sensitivity of the geometry to a particular FFD point, in terms of derivative, can be analytically expressed as ∂ X(u,v,w)/∂ P_i,j,k = B_i,m_u(u)B_j,m_v(v)B_k,m_w(w). Since X∈ℝ^3 and P∈ℝ^3, the derivative on the LHS of Eq. (<ref>) requires further clarification. It essentially means that the three piecewise derivatives of one ℝ^3 vector with respect to another ℝ^3 vector are equal with one another and also equal to the RHS. In our case, linked FFD points are used as one degree of freedom (DoF). The derivative of the geometry coordinates with respect to the particular DoF motion can be computed by summing the contribution of each FFD point using Eq. (<ref>). § PARTIAL DERIVATIVE COMPUTATION IN THE ADJOINT METHOD This section presents the underlying principle of how the partial derivatives in the adjoint method can be computed. All these derivatives are computed by backward automatic differentiation <cit.> in DAFoam with graph coloring method for acceleration <cit.>. Four types of partial derivatives are involved: ∂C_P/∂s_1× M, ∂C_P/∂x_1 × N_x, ∂R/∂s_M× M, ∂R/∂x_M × N_x. For the former two, we consider P=Ω∫r× p d𝒮, where p is the pressure, r is the distance to the rotating axis, the integration is over the blade surface with differential element d𝒮. It is clear that C_P depends on both the state variable s (i.e., pressure p) and design variable x (i.e., the integration surface). The derivative ∂ C_P/∂s can be simply computed by perturbing the pressure field in the flow solution. For computing ∂ C_P/∂x, we write P=Ω∑_ir_i× p_iΔ𝒮_i which is in discrete form. A perturbation in x results in perturbations of the FFD points, which deforms the geometry. This, in turn, leads to the surface mesh deformation that affects the r_i and Δ𝒮_i. The partial derivative ∂ C_P/∂x can therefore be computed using the chain rule to connect all processes. For the latter two, we consider the discretized governing equation in residual form, i.e. R=0. Perturbation of s changes the velocity field, pressure field, and the velocity flux (at cell faces) so that R is perturbed. The derivative ∂R/∂s can then be computed correspondingly. Perturbation of x leads to the deformation of all computational mesh in the full fluid domain (discussed in Section <ref>). The construction of velocity flux (or more precisely, the coefficients in the algebraic governing equation) is thus affected, leading to the computation of ∂R/∂x correspondingly. 
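To make the geometric sensitivity above concrete, the sketch below implements the Cox-de Boor recursion for the B-spline basis and evaluates ∂X/∂P_i,j,k as the product of the three basis functions. A cubic basis on a clamped (open) knot vector with four control points per direction is assumed, and the parameter values and control-point indices are arbitrary illustrative choices.

```python
import numpy as np

def bspline_basis(i, k, x, t):
    # Cox-de Boor recursion for B_{i,k}(x) on the knot vector t, with the usual
    # convention that 0/0 terms are dropped; valid for x in [t[0], t[-1]).
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0 if t[i + k] == t[i] else \
        (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, x, t)
    right = 0.0 if t[i + k + 1] == t[i + 1] else \
        (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * bspline_basis(i + 1, k - 1, x, t)
    return left + right

def ffd_sensitivity(u, v, w, ijk, deg, t):
    # Sensitivity of an embedded point to FFD control point P_{i,j,k}:
    # dX/dP_{ijk} = B_i(u) * B_j(v) * B_k(w).
    i, j, k = ijk
    return (bspline_basis(i, deg, u, t) *
            bspline_basis(j, deg, v, t) *
            bspline_basis(k, deg, w, t))

# Cubic basis, four control points per direction, clamped knot vector on [0, 1].
knots = np.array([0., 0., 0., 0., 1., 1., 1., 1.])
s = ffd_sensitivity(0.3, 0.5, 0.9, (1, 2, 0), 3, knots)
print(s)   # about 1.7e-4: this control point barely influences a point near the opposite box face
```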
§ DISCRETE GRID-SEARCH IN A LOWER-DIMENSIONAL DESIGN SPACE This section describes our brute-force grid search of a lower-dimensional design space. Taking the baseline duct design as a reference <cit.>, we first create 5 sets of duct configurations, each set representing one type of duct with 5 varying designs each. As sketched in Figure <ref>, the 5 sets are: (1) varying outlet area from the baseline design; (2) varying outlet area from the baseline design with the throat section as the duct inlet; (3) varying both inlet and outlet areas from the baseline design with a finite-thickness duct (outer surface simply a straight line); (4) varying outlet area from the baseline design with a flange (inspired by <cit.>); (5) varying inlet area from the baseline design. In total, these represent 25 duct designs, with the ratio of maximum and minimum duct areas ranging from 1.25 to 2.75. For each of these duct designs, the other design variables considered are listed below with discrete values: * λ: 3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7, 7.5 * Blade root pitch: 10^∘, 20^∘, 30^∘, 40^∘, 45^∘ * Twist profiles: Baseline blade designs A and B * Tip gap ratio: 0.09R, 0.2R The RANS-MRF solver is used to evaluate the performance of each design on a coarse grid with 1-2 million cells (even coarser than M0). In total, about 450 cases are run, with their efficiency C_P shown in Figure <ref>. Since it is difficult to visualize results in a design space of more than three dimensions, we simply plot all results as functions of λ, i.e., for each λ, a large number of C_P values resulting from varying the other design variables are stacked. While it is not our goal to further distinguish designs at each λ, it is clear that the maximum C_P among all 𝒪(450) cases is only 38% as evaluated by the RANS-MRF. We pick a few designs with relatively high C_P and re-evaluate their performance using the higher-fidelity URANS solver on the M0 grid, with results also shown in Figure <ref>. We see that the maximum efficiency computed by the URANS is 45%, which happens to correspond to our baseline design B (in terms of both the duct and blade geometry) but is still lower than the 46% of the unducted Bahaj model. Therefore, a simple brute-force grid search, as performed here, does not provide any ducted turbine design with higher efficiency than that of standard unducted turbines.
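For comparison with the 21-variable gradient-based approach of the main text, the combinatorics of this discrete design space can be enumerated in a few lines (the labels below are placeholders for the 25 duct variants). A full factorial sweep would require 5,000 RANS-MRF evaluations, of which roughly 450 were actually sampled here, illustrating how quickly brute-force search becomes untenable as the parametrization is refined.

```python
from itertools import product

duct_designs = [f"duct set {s}, variant {v}" for s in range(1, 6) for v in range(1, 6)]  # 25 ducts
tsr = [3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7, 7.5]
root_pitch_deg = [10, 20, 30, 40, 45]
twist_profile = ["baseline A", "baseline B"]
tip_gap = [0.09, 0.20]        # fraction of blade radius R

full_factorial = list(product(duct_designs, tsr, root_pitch_deg, twist_profile, tip_gap))
print(len(full_factorial))    # 25 * 10 * 5 * 2 * 2 = 5000 candidate RANS-MRF runs
```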
http://arxiv.org/abs/2307.00532v1
20230702095428
Exploring $T_{\psi\psi}$ tetraquark candidates in a coupled-channels formalism
[ "Pablo G. Ortega", "David R. Entem", "Francisco Fernández" ]
hep-ph
[ "hep-ph", "hep-ex" ]
[]pgortega@usal.es Departamento de Física Fundamental, Universidad de Salamanca, E-37008 Salamanca, Spain Instituto Universitario de Física Fundamental y Matemáticas (IUFFyM), Universidad de Salamanca, E-37008 Salamanca, Spain []entem@usal.es Departamento de Física Fundamental, Universidad de Salamanca, E-37008 Salamanca, Spain Instituto Universitario de Física Fundamental y Matemáticas (IUFFyM), Universidad de Salamanca, E-37008 Salamanca, Spain []fdz@usal.es Instituto Universitario de Física Fundamental y Matemáticas (IUFFyM), Universidad de Salamanca, E-37008 Salamanca, Spain This study investigates the properties of the T_ψψ tetraquark candidates within a coupled-channels calculation of the cc̅- cc̅ system, specifically focusing on the J^P=0^±, 1^±, and 2^± sectors. The analysis includes various channels containing a J/ψ, ψ^', η_c, and η_c^' meson. By searching for poles in the scattering matrix, a total of 27 states in different J^PC sectors with masses ranging from 6.1 to 7.6 GeV/c^2 are identified. The study further investigates the masses, widths, probabilities, and branching ratios of these states, leading to the identification of three potential candidates for the experimental T_ψψ(6200) tetraquark, two candidates for T_ψψ(6600), three for T_ψψ(6700), five for T_ψψ(6900), and two for T_ψψ(7200) tetraquarks. Additionally, the paper discusses strategies to discriminate between different candidates and explores possible detection channels for further cc̅- cc̅ states. Exploring T_ψψ tetraquark candidates in a coupled-channels formalism F. Fernández August 1, 2023 ==================================================================== § INTRODUCTION Understanding the spectroscopy, structure and dynamics of exotic hadrons is one of the most challenging areas of contemporary physics research. In recent years, high-energy experiments have revealed a wealth of multiquark states that defy conventional explanations based on baryon (qqq) or meson (qq̅) configurations. The seminal discovery of X(3872) by the Belle group <cit.> marked a turning point, subsequently leading to the identification of several tetraquark states, including Z_c(3900), Z_c(4020) <cit.>, Z_cs(3985)^-, Z_cs(4220)^+ <cit.>, which exhibit charmonium-like properties, Z_b(10610), Z_b(10650) <cit.>, which resemble bottomonium states, or openly exotic states such as the T_cc(3875)^+ <cit.> or the T_cs0(2900)^0, T_cs1(2900)^0 <cit.> particles. The study of the properties and behaviour of these exotic hadrons promises to deepen our understanding of the fundamental interactions that govern the subatomic world, transcending the conventional quark compositions. Recent breakthroughs have been made by the LHCb, CMS, and ATLAS Collaborations, as they have observed resonances in the di-J/ψ and J/ψψ^' [For simplicity, in this work we will denote ψ(2S) as ψ^' and η_c(2S) as η_c^'.] invariant mass distributions <cit.> in proton-proton collision data at √(s)=7, 8 and 13 TeV. These resonances, with ccc̅c̅ minimum quark content, such as T_ψψ(6200), T_ψψ(6600), T_ψψ(6900) and T_ψψ(7200) [In this work we will follow the naming convention of Ref. <cit.>.] has sparked renewed interest in investigating fully charmed and beauty four-quark mesons. These experimental results provide a unique opportunity to test and refine our current understanding in this field. 
The existence of heavy exotic mesons composed of two or four c and b quarks has intrigued researchers since the early stages of multiquark hadron studies <cit.> and, since the experimental observation of T_ψψ candidates a large number of theoretical studies have been devoted to explaining their properties, either as compact tetraquark states <cit.>, diquark-antidiquark structures <cit.> and meson-meson molecules or coupled-channels effects <cit.>. Many of these exotic states, such as X(3872), Z_b(10610), Z_c(3900), P_c(4470) and others, tend to emerge close a two-hadron threshold. It is therefore tempting to infer a molecular nature for such kind of states. Similarly, many of the recent T_ψψ states such as the T_ψψ(6200), T_ψψ(6600) or the T_ψψ(6900), are close to many charmonium-charmonium thresholds such as the J/ψ J/ψ, η_cη_c^' or the J/ψψ^' threshold, respectively. Motivated by these observations, this study investigates the properties of the T_ψψ candidates T_ψψ(6200), T_ψψ(6600), T_ψψ(6700), T_ψψ(6900) and T_ψψ(7200) in a coupled-channels formalism based on a constituent quark model (CQM) <cit.>, which has been widely used in the heavy quark sector <cit.> and extended to the study of other exotic states such as the X(3872) <cit.>, the T_cc^+ <cit.> or the T_cs and T_cs̅ states <cit.>. The advantage of using an approach with a relatively long history is that all model parameters are already constrained by previous works. Consequently, from this point of view, we present a parameter-free calculation of the T_ψψ states, extending our recent analysis of the similar T_cc^+ and T_cs exotic candidates <cit.>. The organization of the manuscript is as follows: After this introduction, section <ref> provides a brief overview of the theoretical framework. Section <ref> primarily focuses on the analysis and discussion of our theoretical findings. Lastly, in Sec. <ref>, we present a summary of our work and draw conclusions based on the obtained results. § THEORETICAL FORMALISM In this work we will explore the T_ψψ tetraquark candidates as meson-meson molecules. This system has many similar features as the recently discovered T_cc^+ tetraquark, with minimum quark content ccu̅d̅. Then, for the T_ψψ we will follow the same formalism as in Ref. <cit.>, where the T_cc^+ was described as a J^P=1^+ DD^* molecule. For this reason, in this section we will only briefly provide the most relevant theoretical aspects for the study of the T_ψψ states. The constituent quark model (CQM) employed in this work has been extensively detailed in the literature. For a full description, including expressions of all the potentials and the values of the model parameters, the reader is kindly referred to Ref. <cit.> and its update Ref. <cit.>. The main elements of our constituent quark model (CQM) encompass the constituent light quark masses and the exchanges involving Goldstone bosons, which arise as manifestations of the dynamical breaking of chiral symmetry in Quantum Chromodynamics (QCD). Additionally, the model incorporates the perturbative interaction of one-gluon exchange (OGE) and a non-perturbative confinement interaction <cit.>. However, it is worth noticing that, whereas the Goldstone boson exchanges are considered for two light quarks (qq), they are not allowed in the light-heavy (qQ) and heavy-heavy (QQ) configurations.[Here, we denote q={u,d,s} and Q={c,b}.] 
On the contrary, the most important contributions of the one-gluon exchange and confinement potentials are flavour-blind and are the only interactions relevant for this work, where all the quarks involved are beyond the chiral symmetry breaking scale. Regarding the confinement interaction, while it has been proven that multi-gluon exchanges generate an attractive potential that rises linearly with the distance between infinitely heavy quarks <cit.>, it is essential to consider the influence of sea quarks on the strong interaction dynamics. Sea quarks contribute to screening the rising potential at low momenta and eventually lead to the breaking of the quark-antiquark binding string <cit.>. To account for this behaviour, our CQM incorporates the following expression: V_ CON(r⃗ )=[-a_c(1-e^-μ_cr)+Δ] (λ⃗_q^c·λ⃗_q̅^c) , where a_c and μ_c are model parameters. At short distances this potential exhibits a linear behavior with an effective confinement strength, σ=-a_c μ_c (λ⃗^c_i·λ⃗^c_j). However, it becomes constant at large distances, with a threshold defined by {Δ-a_c}(λ⃗^c_i·λ⃗^c_j). Additionally, the model incorporates QCD perturbative effects mediated by the exchange of one gluon, derived from the vertex Lagrangian ℒ_qqg = i√(4πα_s) ψ̅γ_μ G^μ_c λ^c ψ . Here, α_s represents an effective scale-dependent strong coupling constant, given by α_s(μ)=α_0/ln(μ^2+μ_0^2/Λ_0^2) where μ is the reduced mass of the qq̅ pair and α_0, μ_0 and Λ_0 are parameters of the model <cit.>. The described CQM details the qq (qq̅) interaction at microscopic level and allows us to build the cc̅ meson spectra <cit.>, by solving the two-body Schrödinger equation through the use of the Gaussian Expansion Method <cit.>. This computational approach not only simplifies the evaluation of the necessary matrix elements but also ensures a satisfactory level of accuracy. In order to describe the cc̅-cc̅ interaction from the underlying qq dynamics we employ the Resonating Group Method <cit.>. For that, we assume that the wave function of a system composed of two charmonium mesons A and B can be written as Ψ = A[ϕ_Aϕ_Bχ_Lσ_STξ_c] where ϕ_A(B) is the wave functions of the A(B) meson, χ_L the relative orbital wave function of the AB pair, σ_ST their spin-isospin wave function and ξ_c their color wave function. As we have two pair of identical quarks, we have to consider the full antisymmetric operator A, so the wave function is completely antisymmetric. For the cc̅-cc̅ system, this operator can be written as A=(1-P_c)(1-P_c̅), up to a normalization factor, where P_c is the operator that exchanges c quarks and P_c̅ the operator that exchanges charm antiquarks between mesons. Following Ref. <cit.>, for identical mesons, the antisymmetrizer is reduced to Ψ = (1-P_c̅) {|ϕ_Aϕ_Bχ_Lσ_STξ_c ⟩}, whereas for non-identical mesons, the wave functions is a combination of AB and BA configurations, given by Ψ = (1-P_c̅) { |ϕ_Aϕ_Bχ_Lσ_STξ_c⟩ +(-1)^μ |ϕ_Bϕ_Aχ_Lσ_STξ_c ⟩} with μ=L+S-J_A-J_B. Interestingly, the sign of the antisymmetry determines the C-parity of the states. This sign differs from the antisymmetry sign by a factor of (-1)^L_A+S_A+L_B+S_B. Hence, the C parity is equal to (-1)^μ for PP and VV channels (where P is a pseudoscalar meson and V a vector meson) and (-1)^μ+1 for PV channels. However, in our calculation we will leave the C parity undefined, and evaluate the J^PC of the found states afterwards. 
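The two flavour-blind interactions that drive this calculation, the screened confinement potential and the scale-dependent OGE coupling, are simple closed-form expressions and can be evaluated as in the sketch below. The numerical parameter values are placeholders for illustration only (the fitted CQM parameters are those of the cited references), and the colour factor −16/3 is the standard value for a colour-singlet quark–antiquark pair.

```python
import numpy as np

# Illustrative parameter values only -- the fitted CQM parameters are given in the
# references cited above and are not reproduced here.
A_C, MU_C, DELTA = 430.0, 0.70, 180.0          # MeV, fm^-1, MeV
ALPHA_0, MU_0, LAMBDA_0 = 2.1, 37.0, 22.0      # dimensionless, MeV, MeV (placeholders)
COLOR_SINGLET = -16.0 / 3.0                    # <lambda^c_q . lambda^c_qbar> for a q-qbar singlet

def v_confinement(r_fm, color_factor=COLOR_SINGLET):
    """Screened confinement: [-a_c (1 - exp(-mu_c r)) + Delta] (lambda^c . lambda^c)."""
    return (-A_C * (1.0 - np.exp(-MU_C * r_fm)) + DELTA) * color_factor

def alpha_s(mu_mev):
    """Effective OGE coupling alpha_s(mu) = alpha_0 / ln[(mu^2 + mu_0^2) / Lambda_0^2]."""
    return ALPHA_0 / np.log((mu_mev**2 + MU_0**2) / LAMBDA_0**2)

if __name__ == "__main__":
    r = np.array([0.1, 0.5, 1.0, 2.0, 4.0])    # fm
    print(v_confinement(r))                     # ~linear rise at small r, saturation at large r
    print(alpha_s(880.0))                       # mu ~ reduced mass of a c-cbar pair (illustrative)
```

The printout reproduces the two limits discussed above: an effectively linear potential with strength σ = −a_c μ_c (λ·λ) at short range and a saturation value {Δ − a_c}(λ·λ) at large separations.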
The interaction between cc̅-cc̅ mesons can be split into a direct term, with no quark exchange between clusters, and an exchange kernel, which incorporates them. The direct potential V_D(P⃗',P⃗_i) can be written as V_D(P⃗',P⃗_i) = ∑_i∈ A, j∈ B∫ dp⃗_A' dp⃗_B' dp⃗_A dp⃗_B× ×ϕ_A'^∗(p⃗_A') ϕ_B'^∗(p⃗_B') V_ij(P⃗',P⃗_i) ϕ_A(p⃗_A) ϕ_B(p⃗_B) , where V_ij is the CQM potential between the quark i and the quark j of the mesons A and B, respectively. The exchange kernel K_E, that models the quark rearrangement between clusters, can be written as K_E(P⃗',P⃗_i) = H_E(P⃗',P⃗_i) - E_T N_E(P⃗',P⃗_i) . which is a non-local and energy-dependent kernel, separated into a potential term H_E plus a normalization term N_E. Here, E_T denotes the total energy of the system and P⃗_i is a continuous parameter. The exchange Hamiltonian and normalization can be written as H_E(P⃗',P⃗_i) = ∫ dp⃗_A' dp⃗_B' dp⃗_A dp⃗_B dP⃗ϕ_A'^∗(p⃗_A') × ×ϕ_B'^∗(p⃗_B') H(P⃗',P⃗) P_c̅[ϕ_A(p⃗_A) ϕ_B(p⃗_B) δ^(3)(P⃗-P⃗_i) ] , N_E(P⃗',P⃗_i) = ∫ dp⃗_A' dp⃗_B' dp⃗_A dp⃗_B dP⃗ϕ_A'^∗(p⃗_A') × ×ϕ_B'^∗(p⃗_B') P_c̅[ϕ_A(p⃗_A) ϕ_B(p⃗_B) δ^(3)(P⃗-P⃗_i) ] , where H is the Hamiltonian at quark level. The properties of the T_ψψ tetraquark candidates, investigated here as meson-meson molecular systems, will be obtained as poles of the scattering matrix, given in non-relativistic kinematics as, S_α^α' = 1 - 2π i √(μ_αμ_α'k_αk_α') , T_α^α'(E+i0^+;k_α',k_α) , where k_α and μ_α represents the on-shell momentum and reduced mass for channel α, respectively. The T matrix of the coupled-channels calculation is obtained from the Lippmann-Schwinger equation T_β^β'(z;p',p) = V_β^β'(p',p)+∑_β”∫ dq q^2 V_β”^β'(p',q) ×1/z-E_β”(q)T_β^β”(z;q,p) , where β represents the set of quantum numbers necessary to determine a partial wave in the meson-meson channel, V_β^β'(p',p) is the full RGM potential, sum of direct and exchange kernels, and E_β”(q) is the energy for the momentum q referred to the lower threshold. § RESULTS In this section we present the results of the coupled-channels calculation of the cc̅-cc̅ system in J^P=0^±,1^±,2^±. We have included the thresholds and partial waves shown in Table <ref>, which are the combination of the lowest lying S-wave charmonium resonances, that's it: J/ψ, ψ^', η_c and η_c^'. We restrict ourselves to relative orbital momenta L≤ 2, since higher ones are expected to be negligible. Direct interactions are only driven by gluon annihilation diagrams, which are rather small for charmonium. Confinement potential does not have direct interaction because we deal with a two-color-singlet system. Thus, the leading interaction is the exchange diagrams. This implies that their identification as pure molecules is questionable as we are not dealing with a residual direct interaction, but a short-range interaction that mixes quarks. Nevertheless, in this work we will denote the found states as molecules, in a broad sense of a resonant state of two colourless mesons, regardless of the binding mechanism. Before presenting the results, it is worth mentioning that there is a theoretical uncertainty in the results as a consequence on the way the model parameters are adjusted to describe a certain number of hadron observables. Such fitting is done within a determinate range of agreement with the experiment, which is estimated to be around 10-20% for physical observables that help to fix the model parameters. 
This range of agreement will be taken as an estimate of the model uncertainty for the derived quantities and, in order to analyse its effect, we will estimate the error of the pole properties by varying the strength of our potentials by ±10%. The results of our calculations are shown in Table <ref> (masses, widths and probabilities) and Table <ref> (branching ratios). We find up to 27 poles in different J^PC sectors, that's it: 2 in 0^-+, 9 in 0^++, 1 in 1^–, 1 in 1^-+, 6 in 1^+-, 2 in 2^-+, 5 in 2^++ and 1 in 2^+-. Their masses range from 6.1 to 7.6 GeV and are quite broad. Due to Heavy Quark Spin Symmetry, the states are relatively degenerate between the {0^-,1^-,2^-} and the {0^+,1^+,2^+} sectors, but there are significant deviations due to the specific partial waves on each sector. The most explored detection channels are J/ψ J/ψ and J/ψψ^'. In Table <ref> we can identify up to 18 states with significant branching ratios to the J/ψ J/ψ channel, and 13 states that can decay to the J/ψψ^' channel. Among them, we can identify candidates for the experimental states T_ψψ(6200), T_ψψ(6600), T_ψψ(6700), T_ψψ(6900) and T_ψψ(7200), which are described in more detail below. Additionally, we have candidates that do not decay to the above channels. For example, the two 0^-+ and 2^-+ wide resonances with masses around 6744 MeV/c^2 decay only to η_cψ^', while the two 0^++ and 1^+- states with masses around 6100 MeV/c^2 can only decay to η_cη_c and η_c J/ψ, respectively. We also find a broad resonance in the 1^– sector with a mass of 6745_-5^+4 MeV/c^2 and a width of 444^+20_-18 MeV, which decays only to η_cη_c^'. Recently, Belle Collaboration searched for double-charmonium states in the e^+e^-→η_cJ/ψ reaction and found no significant signal <cit.>. This is consistent with our results and points to e^+e^-→η_cη_c^' as a more promising reaction. §.§ T_ψψ(6200) The T_ψψ(6200) (or T_ψψ(6220)) tetraquark was discovered in ATLAS <cit.> in the J/ψ J/ψ channel, but its existence was previously suggested in Ref. <cit.> from an ­a­na­ly­sis of the near-threshold region of the J/ψ J/ψ invariant mass spectrum measured by LHCb <cit.>. Its mass and width is 6220±50 MeV/c^2 and 310±120 MeV, respectively. Its quantum numbers are not yet determined, but Ref. <cit.> argued it as a 0^++ or 2^++ J/ψ J/ψ structure. Other theoretical studies give similar predictions. For example, Ref. <cit.> assign the T_ψψ(6200) state as a η_cη_c 0^++ molecule using the QCD sum rule method, Ref. <cit.> supported its assignment as a ground state tetraquark with J^PC=0^++ or 1^+-, Ref. <cit.> identifies it as the 0^++ tetraquark, same as Ref. <cit.> though the authors also have a near 1^+- candidates. Ref. <cit.> predicts tetraquark states close to 6.2 GeV/c^2 at 0^++, 1^+- and 2^++, Ref. <cit.> have close candidates in 1^++, 1^+- and 2^++, Ref. <cit.> in 1^+- and 2^++, Ref. <cit.> in 0^+ and 2^+ and Ref. <cit.> describe them as a 0^++ tetraquark state. In our coupled-channels calculation we find three possible candidates for the T_ψψ(6200) in the J^PC=0^++, 1^+- and 2^++ sectors, the closest in mass being the 1^+- one. The 0^++ candidate is a 78.6^+0.7_-0.6% J/ψ J/ψ molecule with a mass of 6265.1_-0.6^+0.4 MeV/c^2 and a width of 163_-7^+8 MeV, with primary decay channels to J/ψ J/ψ (B=65±2%) and η_cη_c (B=35± 2%). The 1^+- state is mostly a η_c J/ψ (71.4_-0.9^+0.8%) state with a mass and width of 6249 ± 2 MeV/c^2 and 230_-13^+15 MeV, respectively. The η_c J/ψ channel is also its main decay mechanism (B=71±1%), together with J/ψ J/ψ. 
Finally, the 2^++ candidate is a J/ψ J/ψ molecule (97.4_-0.5^+0.4%) that decays entirely to J/ψ J/ψ, with mass 6273 ± 3 MeV/c^2 and with 234_-13^+15 MeV. It is likely that the experimental signal is a mixture of the three candidates. It is true that the J/ψ J/ψ channel is not the main decay mechanism for the 1^+- state, but the final strength of the experimental peak also depends on the production amplitude to the J/ψ J/ψ channel in the J^P=1^+ sector, which could be larger than for J^P=0^+ or J^P=2^+. In order to resolve the different J^PC states, we suggest exploring the η_cη_c and η_c J/ψ channels, which are only accessible for the 0^++ and 1^+- states, respectively. §.§ T_ψψ(6600) and T_ψψ(6700) The T_ψψ(6600) tetraquark has been detected in the J/ψ J/ψ invariant mass spectrum at ATLAS <cit.> and CMS <cit.> in proton-proton collision data at √(s)=13 TeV. Its mass and width have been measured to be 6620±30 MeV/c^2 and 310±90 MeV, respectively, at ATLAS; and 6552±10±12 MeV/c^2 and 124^+32_-26±33 MeV at CMS in a no-interference model and 6638^+43+16_-38-31 MeV/c^2 and 440^+230+110_-200-240 MeV in an interference model. The masses and widths are compatible in the interference model, but the width is significantly smaller in CMS if the no-interference model is used. In addition, there is a dip in the measured J/ψ J/ψ mass spectrum around 6.75 GeV, which is not properly accounted for in LHCb's Model I. To analyse it further, LHCb and CMS used LHCb's Model II, which takes advantage of destructive interference between components and managed to improve the description of the data when a Breit-Wigner resonance around 6.7 GeV was added. Although the existence of this state, called T_ψψ(6700), remains to be confirmed, LHCb determined its mass and width to be 6741±6 MeV/c^2 and 288±16 MeV <cit.>, respectively, while CMS gave a mass of 6736±38 MeV/c^2 and a width of 439± 65 MeV <cit.>. On the theoretical side, many studies have proposed candidates for the T_ψψ(6600) and T_ψψ(6700) tetraquarks, with different properties. For example, Refs. <cit.> assigned the T_ψψ(6600) as the first radial excitation of the 0^++ or 1^+- tetraquark state, Ref. <cit.> identified it as a 0^++ or 2^++ state and, similarly, other studies have candidates with J^PC=0^++, 1^+- or 2^++ <cit.> Our results show three candidates around 6.6-6.8 GeV with masses and widths compatible with both the T_ψψ(6600) and T_ψψ(6700). For example, in J^PC=0^++ we find a resonance with a mass of 6679 ± 3 MeV/c^2 and a width of 118_-13^+14 MeV. Although its mass is slightly larger than the CMS or ATLAS values for the T_ψψ(6600), its width is compatible with the CMS measurement (124±29±34 MeV). In J^PC=1^+- we have a resonance at (M,Γ)=(6694.0_-0.1^+0.4 MeV/c^2, 347 ± 14 MeV), compatible with the ATLAS measurement for the T_ψψ(6600) and the CMS fit for the T_ψψ(6700). Finally, in the 2^++ sector we have a state with a mass of 6793_-2^+1 MeV/c^2 and a width of 120_-10^+11 MeV, which falls in the energy region of the T_ψψ(6700), although it is narrower than the actual fits for this state. Of course, we need more experimental information to clarify the existence and nature of these states before drawing any conclusions. A good channel to distinguish these states is the η_cη_c channel, which is only accessible for the 0^++ state, and the η_c J/ψ channel, which is only allowed for the 1^+- state. §.§ T_ψψ(6900) The T_ψψ(6900) was the first cc̅cc̅ candidate discovered. 
It is a narrow structure observed by LHCb in 2020 in the di-J/ψ invariant mass spectrum <cit.>. Its Breit-Wigner mass and width have been determined to be 6905±11±7 MeV/c^2 and 80± 19±33 MeV, respectively, in a fitting scenario without interference, and 6886±11±11 MeV/c^2 and 168±33±69 MeV, in a fitting scenario where interference is allowed. Recently, this structure has been confirmed by CMS <cit.> (M=6927± 9± 4 MeV/c^2, Γ=122^+24_-21±18 MeV) and ATLAS <cit.> (M=6.87± 0.03_-0.01^+0.06 GeV/c^2 and Γ=0.12± 0.04_-0.01^+0.03 GeV) in the J/ψ J/ψ mass spectrum. In addition, ATLAS has detected the T_ψψ(6900) structure in the J/ψψ^', with BW parameters 6780±360 MeV/c^2 and 390±110 MeV, providing an additional decay channel. This tetraquark is undoubtedly the most studied. For example, Ref. <cit.> assigned it a 0^++ χ_c0χ_c0 molecular structure, Ref. <cit.> identified it as a 0^++ second radially-excited tetraquark state, and Ref. <cit.> concluded that it is most likely a 0^++ radial excitation of a diquark-antidiquark state. Other studies agree with the 0^++ assignment <cit.>, but leave the door open to other alternatives such as 0^-+, 1^–, 1^-+, 1^+- or 2^++. Among all of our candidates in Table <ref> we can highlight the structures in 0^-+, 0^++, 1^-+, 2^-+ and 2^++ as possible candidates for the T_ψψ(6900), which are in the 6.8-6.9 GeV energy region. We predict three almost degenerate resonances with J^PC=0^-+, 1^-+ and 2^-+, whose masses are around 6.86 GeV/c^2 and their widths are ∼450 MeV. These are η_c^' J/ψ states in a relative P-wave mixed with the J/ψψ^' channel, thus they are candidates to the ATLAS sign of the T_ψψ(6900) states. Unlike the 0^++ and 2^++ candidates, the former resonances can also decay to the η_c^' J/ψ channel, so this is a good channel to evaluate their existence. In the J^PC=0^++ sector we also have a signal in the J/ψ J/ψ and J/ψψ^' mass spectrum, due to a virtual state below the Jψψ^' threshold, in the second Riemann sheet. Its mass is 6782_-3^+2 MeV/c^2 and its width 18_-6^+9 MeV, although as it is a virtual state its width cannot be directly compared with the Breit-Wigner properties experimentally measured. It mainly decay to J/ψ J/ψ (B=70_-11^+1%), and also to η_cη_c^' (B=21_-2^+1%), which could be a good detection channel. Finally, the 2^++ candidate is a resonance with a mass of 6793_-2^+1 MeV/c^2 and a width of 120_-10^+11 MeV. It is, practically, a J/ψψ^' state (70±2%), which mainly decays to J/ψ J/ψ (B=68± 1%) and Jψψ^' (B=32± 1%). Its width is compatible with the experimental data from LHCb, CMS and ATLAS in di-J/ψ channel, whereas its mass is slightly smaller. §.§ T_ψψ(7200) In addition to the above T_ψψ(6900) state, the LHCb Collaboration suggested a broad structure peaking at about 7.2 GeV, later named T_ψψ(7200). In 2022, the CMS <cit.> and ATLAS <cit.> collaborations provided its Breit-Wigner properties, measured from the J/ψ J/ψ and J/ψψ^' mass spectra data, respectively. Its mass was determined to be 7287^+20_-18±5 MeV/c^2 (CMS) and 7220±30 MeV/c^2 (ATLAS), while its width was measured to be 95^+59_-40±19 MeV (CMS) and 100^+130_-70 MeV (ATLAS). From a theoretical point of view, this state was mostly identified as a 0^++ structure <cit.>, but other alternatives such as 1^+ or 2^+ were suggested <cit.>. For this state, we predict two possible virtual candidates with 0^++ and 1^+- quantum numbers, around 7.3 GeV/c^2. The 0^++ state has a mass of 7276_-1^+2 MeV/c^2 and a width of 35_-13^+11. 
It mainly decays to J/ψ J/ψ, η_cη_c and η_cη_c^', with a small branching to J/ψψ^'. The 1^+- virtual state has a mass of 7290_-10^+15 MeV/c^2 and a width of 238_-17^+29 MeV, which decays to η_c J/ψ, J/ψ J/ψ, η_c ψ^' and η_c^' J/ψ. We want to remark here that the position and width of the virtual poles cannot be directly compared to the Breit-Wigner parameters as measured by the LHCb, CMS and ATLAS collaborations, as the virtuals are in an unphysical sheet and we only see them as bumps above the nearest thresholds. § SUMMARY In this study we have analysed the cc̅- cc̅ system in a coupled-channels calculation of the J^P=0^±, 1^± and 2^± sectors, including the thresholds η_cη_c, η_c J/ψ, J/ψ J/ψ, η_cη_c^', η_cψ^', η_c^' J/ψ, J/ψψ^', η_c^'η_c^', η_c^'ψ^' and ψ^'ψ^' (that's it, all channels containing a J/ψ, ψ^', η_c and η_c^'), with the partial waves detailed in table <ref>. We have searched for poles in the scattering matrix and found 27 states with masses between 6.1 and 7.6 GeV/c^2 in different J^PC sectors. In particular, we find 2 states in 0^-+, 9 in 0^++, 1 in 1^–, 1 in 1^-+, 6 in 1^+-, 2 in 2^-+, 5 in 2^++ and 1 in 2^+-. Their masses, widths, probabilities and branching ratios have been studied (see tables <ref> and <ref>), finding candidates for the experimental T_ψψ(6200), T_ψψ(6600), T_ψψ(6700), T_ψψ(6900) and T_ψψ(7200) tetraquarks. A summary of our tentative assignments compared to the current experimental ccc̅c̅ candidates is given in Table <ref>. We have discussed different detection channels that could help to discriminate between different candidates, and analysed the best strategies to search for the rest of the predicted T_ψψ states. This work has been partially funded by EU Horizon 2020 research and innovation program, STRONG-2020 project, under grant agreement no. 824093 and Ministerio Español de Ciencia e Innovación, grant no. PID2019-105439GB-C22.
http://arxiv.org/abs/2307.02216v1
20230705114626
Chimera states in neural networks and power systems
[ "Shengfeng Deng", "Géza Ódor" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "cond-mat.dis-nn", "nlin.CD", "physics.comp-ph" ]
AIP/123-QED ]Chimera states in neural networks and power systems Institute of Technical Physics and Materials Science, Center for Energy Research, P.O. Box 49, H-1525 Budapest, Hungary odor.geza@ek-cer.hu Institute of Technical Physics and Materials Science, Center for Energy Research, P.O. Box 49, H-1525 Budapest, Hungary Partial, frustrated synchronization and chimera states are expected to occur in Kuramoto-like models if the spectral dimension of the underlying graph is low: d_s < 4. We provide numerical evidence that this really happens in case of the high-voltage power grid of Europe (d_s < 2) and in case of the largest, exactly known brain network corresponding to the fruit-fly (FF) connectome (d_s < 4), even though their graph dimensions are much higher, i.e.: d^EU_g≃ 2.6(1) and d^FF_g≃ 5.4(1), d^KKI113_g≃ 3.4(1). We provide local synchronization results of the first- and second-order (Shinomoto) Kuramoto models by numerical solutions on the the FF and the European power-grid graphs, respectively, and show the emergence of chimera-like patterns on the graph community level as well as by the local order parameters. [ Géza Ódor August 1, 2023 ================== We show that Kuramoto oscillator models on large neural connectome graph of the fruit-fly, a human brain, as well as on the power-grid of Europe produce chimera states. This is in agreement with the low spectral dimensions that we calculated by the eigenvalue spectra of the Laplacian of these networks. We compare these results with the topological dimension measurements and previous simulations, strengthening that frustrated synchronization should occur, which can generate slow relaxations, obtained in previous studies within the neighborhood of the synchronization transition point. § INTRODUCTION Synchronization phenomena are very widespread in nature and the understanding of their behavior is in the focus of interest. In neural systems, like the brain oscillatory behavior of building elements has been measured by different techniques, while in case of power grids, the alternating currents can also be described by coupled oscillators. Both systems are expected to operate close to the synchronization transition point. In case of the normal brain, self-tuning to the critical point is hypothesed <cit.> and confirmed by experiments <cit.> and theoretical considerations <cit.>. The advantage of criticality is the optimal computational performance, sensitivity as well as dynamically generated long-range memory and interactions <cit.>. In case of power grids the competition of supply and demands tune the system close to the synchronization transition point <cit.>. Synchronization models described by the first Kuramoto equation <cit.> have recently been investigated on complex networks and partial synchronization was found if the spectral dimension is below 4 even if generalizations of the Euclidean dimension, the graph and the Hausdorff dimension are high or diverge <cit.>. Partial synchronization is more probable in strongly connected modules or communities, which also happens both in biological and technical structures. Modular and most often hierarchical organization is known in general brain networks <cit.>, among others in case of the fruit-fly (FF) connectome <cit.>, as well as in power grids <cit.>. 
Thus synchronization occurs in the strongly coupled modules first, while in the loosely coupled parts, nodes may remain desynchronized for the same conditions, which was called frustrated synchronization <cit.>, reminiscent of the semi-critical Griffiths Phases (GP) of condensed matters <cit.>. Besides, in these phenomena, fluctuations of the global order parameters diverge in an extended control parameter space. Recently this was shown in case of Kuramoto models on brain connectomes <cit.> as well as in case of power-grids <cit.>. One can also relate such structures, emerging in these heterogeneous systems, to chimera states, in which subsets of an ensemble of identical, interacting oscillators exhibit distinct dynamical states, such as one group of synchronized oscillators and one group of desynchronized oscillators <cit.>. Firstly chimeras were defined in systems of identical oscillators <cit.>. In such a case, a non-zero phase lag term is essential for partial synchronization to occur. Realistic models, however, require oscillators to be heterogeneous and chimeras have been detected on complex <cit.>, brain-like networks <cit.>. The purpose of the present study is to provide numerical evidences of such structures by solving Kuramoto equations in seemingly different areas of complex systems, on the largest available brain connectome and on the European high voltage power-grid network. We show they are characterized by a low spectral dimension: d_s<4. In these models quenched heterogeneity is present structurally, by the topology of the graphs as well as by the different self-frequencies of nodes. § METHODS In this section, we detail the models and the methods we applied to describe synchronization on different networks. §.§ The first-order Kuramoto model Several oscillator models have been used in biology, the simplest possible one is the Hopf model <cit.>, which has been used frequently in neuroscience, as it can describe a critical point with scale-free avalanches, with sharpened frequency response and enhanced input sensitivity. The local dynamics of each brain area (node) is described by the normal form of a supercritical Hopf bifurcation, also called a Landau-Stuart oscillator, which is the canonical model for studying the transition from noisy to oscillatory dynamics. Another complex model, describing more non-linearity [ In the weak coupling limit an equivalence with the integrate-and-fire models <cit.> was shown.] is the Kuramoto model <cit.>, with phases θ_i(t), located at the N nodes of a network, according to the dynamical equation θ̇_̇i̇(t) = ω_i^0 + K ∑_j W_ijsin[ θ_j(t)- θ_i(t)] . The global coupling K is the control parameter of this model, by which we can tune the system between asynchronous and synchronous states. The summation is performed over the nearest neighboring nodes, with connections described by the weighted/unweighted adjacency matrix W_ij and ω_i^0 denotes the intrinsic frequency of the i-th oscillator. For simplicity, we used the Gaussian distribution with zero mean and unit variance for the self-frequency distribution g(ω_i^0) with respect to a rotating frame <cit.>. Using this model the resting state critical behavior on large human connectomes <cit.> was compared with that of the FF <cit.> on the global order parameter level and the topology dependence has been pointed out, which suggested extended fluctuation region and GP-like behavior in case of the human connectomes in contrast with the FF network. 
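A minimal numerical realisation of the first-order Kuramoto equation on a weighted graph is sketched below. For clarity it uses plain Euler stepping and a small random graph in place of the connectome adjacency matrix, whereas the production runs described later rely on an adaptive Bulirsch–Stoer stepper; the coupling K, the zero-mean unit-variance Gaussian self-frequencies, and the fully asynchronous initial phases follow the setup described in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def kuramoto_first_order(W, K, omega0, theta0, dt=0.01, t_max=50.0, sample_every=100):
    """Euler integration of theta_i' = omega_i^0 + K sum_j W_ij sin(theta_j - theta_i).

    Returns sampling times and the global order parameter R(t) = |<exp(i theta)>|.
    """
    theta = theta0.copy()
    times, R = [], []
    for step in range(int(t_max / dt)):
        # coupling_i = sum_j W_ij sin(theta_j - theta_i)
        coupling = np.einsum("ij,ij->i", W, np.sin(theta[None, :] - theta[:, None]))
        theta += dt * (omega0 + K * coupling)
        if step % sample_every == 0:
            times.append(step * dt)
            R.append(np.abs(np.exp(1j * theta).mean()))
    return np.array(times), np.array(R)

if __name__ == "__main__":
    N = 500                                          # small stand-in for the 21615-node connectome
    W = (rng.random((N, N)) < 0.02) * rng.random((N, N))
    W = np.triu(W, 1)
    W = W + W.T                                      # symmetric weighted random graph
    omega0 = rng.normal(0.0, 1.0, N)                 # Gaussian self-frequencies, zero mean, unit variance
    theta0 = rng.uniform(0.0, 2.0 * np.pi, N)        # fully asynchronous initial state
    t, R = kuramoto_first_order(W, K=1.6, omega0=omega0, theta0=theta0)
    print(R[-5:])                                    # partial synchronization near the transition
```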
Very recently we have also investigated an extension of Eq. (<ref>) to the Shinomoto–Kuramoto (SK) model, with periodically driven forces <cit.> to describe task phase of the brain models <cit.> θ̇_j(t) = ω_j^0+K∑_k W_jksin[θ_k(t)-θ_j(t)] + Fsin(θ_j(t)) + ϵη_j(t) . Here ϵ describes an excitation, with a zero centered, Gaussian random annealed noise η_j(t) and a site-dependent periodic force term, proportional to a coupling F, was also added. But in fact, a small η proved to be irrelevant, for the synchronization transition, caused by F, in the presence of the chaotic noise. One of the main conclusions of Ref. <cit.> was that community dependent values of the Hurst exponent H and the β exponent, measuring self-similarity of time series, varied more with F>0, than in the resting state of the brain, corresponding to F=0. Now we shall test this community dependence of R and Ω in the steady state. §.§ The second-order Kuramoto model The time evolution of power-grid synchronization is described by the swing equations <cit.>, set up for mechanical elements (e.g. rotors in generators and motors) with inertia. It is formally equivalent to the second-order Kuramoto equation <cit.>, for a network of N oscillators with phases θ_i(t): θ̇_̇i̇(t) = ω_i(t) ω̇_̇i̇(t) = ω_i^0 - αθ̇_̇i̇(t) + K ∑_j=1^N A_ijsin[ θ_j(t)- θ_i(t)] . Here α is the damping parameter, which describes the power dissipation, or an instantaneous feedback <cit.>, K is the global coupling, related to the maximum transmitted power between nodes; and A_ij, which is the adjacency matrix of the network, contains admittance elements. The quenched external drive, denoted by ω_i^0, which is proportional to the self-frequency of the i-th oscillator and carries a dimension of inverse squared time [1/s^2], describes the power in/out of a given node when Eq. (<ref>) is considered to be the swing equation of a coupled AC circuit, but here, similar to the first-order Kuramoto model, we have chosen it zero centered Gaussian random variable as rescaling invariance of the equation allows to transform it out within a rotating frame. For simplicity, one can assume that ω_i(0) is drawn from the same distribution as ω_i^0 and numerically set ω_i(0)=ω_i^0, amounting to taking [s]=1. In our present study the following parameter settings were used: the dissipation factor α, is chosen to be equal to 0.4 to meet expectations for power grids, with the [1/s] inverse time physical dimension assumption. To characterize the phase transition properties the phase order parameter R(t) has been studied for both the first- and second-order Kuramoto models. To ensure the relaxation to the steady states, we measured the Kuramoto phase order parameter z(t_k) = r(t_k) exp[i θ(t_k)] = 1 / N ∑_j exp[i θ_j(t_k)] , where 0 ≤ r(t_k) ≤ 1 gauges the overall coherence and θ(t_k) is the average phase at discrete sampling times t_k, which was chosen to follow an exponential growth: t_k = 1 + 1.08^k to spare memory space. The calculation of derivatives was done adaptively at small time steps via the Bulirsch-Stoer stepper <cit.>. The sets of equations (<ref>), (<ref>) and (<ref>) were solved numerically for 10^3 - 10^4 independent initial conditions in  <cit.>, initialized by different ω_i^0-s and different θ_i(0)-s if disordered initial phases were invoked. Then sample averages for the phases and the frequencies give rise to the Kuramoto order parameter R(t_k) = ⟨ r(t_k)⟩ , and the variance of the frequencies Ω(t) = 1/N∑_j=1^N (ω(t)-ω_j^2(t)) . 
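The second-order system can be integrated by rewriting it as a first-order system in (θ_i, ω_i). The sketch below uses scipy's `solve_ivp` on a small random graph rather than the adaptive Bulirsch–Stoer stepper and the power-grid adjacency matrix used in the actual runs, but it keeps α = 0.4, ω_i(0) = ω_i^0, and the exponentially growing sampling times t_k = 1 + 1.08^k, and it returns the phase order parameter r(t_k) together with the variance of the frequencies.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)

def second_order_kuramoto(t, y, A, K, omega0, alpha=0.4):
    """RHS of theta' = omega, omega' = omega0 - alpha*omega + K sum_j A_ij sin(theta_j - theta_i)."""
    n = omega0.size
    theta, omega = y[:n], y[n:]
    coupling = np.einsum("ij,ij->i", A, np.sin(theta[None, :] - theta[:, None]))
    return np.concatenate([omega, omega0 - alpha * omega + K * coupling])

def run(N=200, K=10.0, t_max=200.0):
    A = (rng.random((N, N)) < 0.02).astype(float)    # stand-in for the power-grid adjacency matrix
    A = np.triu(A, 1)
    A = A + A.T
    omega0 = rng.normal(0.0, 1.0, N)
    y0 = np.concatenate([rng.uniform(0.0, 2.0 * np.pi, N), omega0])  # omega_i(0) = omega_i^0
    # Exponentially growing sampling times t_k = 1 + 1.08^k, as in the text.
    t_eval = np.unique(np.minimum(1.0 + 1.08 ** np.arange(0, 90), t_max))
    sol = solve_ivp(second_order_kuramoto, (0.0, t_max), y0, t_eval=t_eval,
                    args=(A, K, omega0), rtol=1e-6, atol=1e-8)
    theta = sol.y[:N]
    R = np.abs(np.exp(1j * theta).mean(axis=0))      # Kuramoto phase order parameter r(t_k)
    Omega = sol.y[N:].var(axis=0)                    # variance of the instantaneous frequencies
    return sol.t, R, Omega

if __name__ == "__main__":
    t, R, Omega = run()
    print(R[-1], Omega[-1])
```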
We don't discuss the time-dependent behavior of the global order parameters as this has been investigated in detail in <cit.>. In the steady state, which we determined by visual inspection of R(t) and Ω(t), we measured their half values and the standard deviations σ(R(t)) and σ(Ω(t)) in order to locate the transition points. In the paper we used the σ(R), σ(Ω) values, obtained by sample and time averages in the steady state. §.§ Topological and spectral dimensions The effective graph (topological) dimension d_g is defined by N(r) ∼ r^d_g, where we counted the number of nodes N(r) with chemical distance r or less from randomly selected seeds and calculated averages over many trials <cit.>. In most cases, d_g as obtained from this cluster growing method can be served as an estimation for the more rigorously defined Hausdorff dimension d_H≈ d_g <cit.>. In Ref. <cit.> the graph dimension of the FF was estimated to be d_g^FF=5.4(1), while in <cit.> we provided a value for the unweighted European power-grid d_g^EU=2.6(1). For regular Euclidean lattices, it has been shown that a true transition to global synchronization under the thermodynamics limit is only possible for d>4 in both the first- and the second-order Kuramoto models <cit.>, while for d≤ 4, there is only a crossover from the asynchronous phase to a partial synchronous phase characterized by an increasingly broadened variance of R and a shifting crossover point K_c' as the system size increases <cit.>. The natural question is then if it is the topological dimension d_g defined in Eq. (<ref>), or equivalently the Hausdorff dimension d_H, that dictates the synchronization properties for general networks which may assume non-integer dimensions. Refs. <cit.> suggested that the synchronization properties of a general network should be related to the spectral dimension derived from the eigenvalue spectrum of the graph Laplacian matrix, and even more so for the so-called complex network manifolds studied therein, which are constructed out of finite-dimensional simplicies but are characterized by an infinite Hausdorff dimension due to their small-world properties <cit.>. Graph spectral properties of complex networks have been shown to be particularly relevant to the structures of networks <cit.>. Following Refs. <cit.>, we adopt the normalized Laplacian L with elements L_ij=δ_ij-A_ij/k_i for unweighted networks, where k_i denotes the degree of node i. Similarly, for weighted networks, the elements of the normalized Laplacian are given by L_ij=δ_ij-W_ij/k_i' , where k_i'=∑_j W_ji denotes the weighted in-degree of node i. The normalized Laplacian has real eigenvalues 0=λ_1 ≤λ_2 ≤…≤λ_N, the density of which scales as <cit.> ρ(λ)≃λ^d_s/2-1 for λ≪ 1, where d_s is the spectral dimension. The cumulative density is then given by ρ_c(λ) =∫_0^λ dλ'ρ(λ') ≃λ^d_s/2. Since Eqs. (<ref>) and (<ref>) hold for small λ values, for the fruit-fly connectome and the European high-voltage power grid network that are going to be studied in Sec. <ref>, with which N≫ 1, we will only extract the densities for the first 200 smallest eigenvalues for ease of eigenvalue computation without loss of generality. As illustrated in Fig. <ref>, Euclidean lattices in dimension d have spectral dimension d_s=d. Therefore in this case the spectral dimension is also equal to the Hausdorff dimension of the lattice, d_s = d_H. However, in general, networks can have non-integer spectral dimension d_s not equal to their Hausdorff dimension. Ref. 
<cit.> demonstrated that in lower spectral dimensions d_s<4, there is a parameter regime that exhibits frustrated synchronization with spatio-temporal fluctuations even in the stationary state. Then, similar to the emergence of rare regions in Griffiths phases <cit.>, one should expect to observe states with rare regions–usually called “chimera states”–in such frustrated synchronization as well, as we will demonstrate in what follows with the aid of the local order parameter of Kuramoto models. §.§ Local order parameters of the first- and second-order Kuramoto models To investigate the heterogeneity further, we measured the local Kuramoto order parameter, defined as the partial sum of phases for the neighbors of node i r_i(t)= 1/N_i.neigh|∑_j^N_i.neigh A_ij e^i θ_j(t)| . This local Kuramoto measure was firstly suggested by Restrepo et al. <cit.> to quantify the local synchronization of nodes, which allows us to visualize regions of synchronized/unsynchronized chimera-like behavior and which will be the main quantity of interest of this paper. § CHIMERA STATES §.§ Chimera states in the fruit-fly (FF) and in a human connectome First, we examine chimera states in the first-order Kuramoto model on the FF connectome. Connectomes are defined as structural networks of neural connections of the brain <cit.>. For the fruit fly, we used the hemibrain dataset (v1.0.1) from <cit.>, which has N_FF=21 662 nodes and E_FF=3 413 160 edges, out of which the largest single connected component contains N=21 615 and E=3 410 247 directed and weighted edges, with weights being the number of connections between a pair of nodes. The number of incoming edges varies between 1 and 2708. The weights are integer numbers, varying between 1 and 4299. The average node degree is ⟨ k⟩= 315.129 (for the in-degrees it is: 157.6), while the average weighted degree is ⟨ w⟩= 628. The adjacency matrix, visualized in <cit.>, shows a weak hierarchical modular structure, however, it is not random. For example, the degree distribution is much wider than that of a random graph and exhibits a fat tail. The analysis in <cit.> found a weight distribution p(W_ij) with a heavy tail, and assuming a power-law (PL) form, a decay exponent 2.9(2) could be fitted for the W_ij > 100 region. The modularity quotient of a network is defined by <cit.> Q=1/Nk∑_ij(A_ij- k_ik_j/Nk)δ(g_i,g_j) . The maximum of this value corresponds to the optimal community structure characterizes how modular a network is, where δ(g_i,g_j) is 1 when nodes i and j were found to be in the same community g, or 0 otherwise. Community detection algorithms based on modularity optimization get the closest to the actual modular properties of the network. The modularity was calculated using community structures detected by the Louvain method <cit.>, from which we obtained Q_FF≈ 0.631 <cit.>. The effective graph (topological) dimension, obtained by the breadth-first search algorithm is d_g^FF=5.4(5). To compute the spectra dimensions, we extracted the cumulative density distributions of the first 200 eigenvalues of both the Laplacian matrices of the unweighted and weighted FF connectomes and plot them in a log-log scale as shown in Fig. <ref>. For small enough λ values, the distributions indeed display a scaling regime, permitting estimation of spectral dimensions by Eq. (<ref>), as listed in the plot legend and Table <ref>. Now, even though d_g^FF>4, since d_s<4, one should expect frustrated synchronization in some parameter regime where chimera states may be observed. 
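The spectral-dimension estimates quoted above amount to computing the smallest eigenvalues of the normalized Laplacian and fitting the log–log slope of their cumulative density. A sketch is given below; it works on the symmetric normalized Laplacian I − D^{-1/2}WD^{-1/2}, which shares its spectrum with L_ij = δ_ij − W_ij/k_i' through a similarity transform, and it is demonstrated on a periodic 2D lattice, for which a value close to d_s = 2 should be recovered. For the connectome or the power grid one would load the (weighted) adjacency matrix instead.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def sym_normalized_laplacian(W):
    """I - D^{-1/2} W D^{-1/2}: same spectrum as L_ij = delta_ij - W_ij / k_i (similar matrices)."""
    W = sp.csr_matrix(W, dtype=float)
    k = np.asarray(W.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(k))
    return sp.identity(W.shape[0], format="csr") - d_inv_sqrt @ W @ d_inv_sqrt

def spectral_dimension(W, n_eig=200):
    """Estimate d_s from rho_c(lambda) ~ lambda^{d_s/2} over the smallest eigenvalues."""
    L = sym_normalized_laplacian(W)
    lam = eigsh(L, k=n_eig, sigma=-1e-3, which="LM", return_eigenvectors=False)
    lam = np.sort(lam[lam > 1e-10])                   # discard the trivial lambda_1 = 0 mode
    rho_c = np.arange(1, lam.size + 1) / L.shape[0]   # cumulative density of states
    slope, _ = np.polyfit(np.log(lam), np.log(rho_c), 1)
    return 2.0 * slope

def periodic_lattice_2d(n):
    """Adjacency matrix of an n x n square lattice with periodic boundaries."""
    ring = sp.diags([np.ones(n - 1), np.ones(n - 1), [1.0], [1.0]], [1, -1, n - 1, 1 - n])
    eye = sp.identity(n)
    return sp.kron(ring, eye) + sp.kron(eye, ring)

if __name__ == "__main__":
    W = periodic_lattice_2d(60)                       # 3600 nodes
    print(spectral_dimension(W, n_eig=150))           # should come out close to d_s = 2
```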
To that end, we solved Eq. (<ref>) and Eq. (<ref>) numerically with respect to different coupling strengths and F on the weighted FF connectome and observed the respective global phase order parameter R(t→∞) in the stationary state. The stationary state is typically reached after a few hundred time steps; see, for example, the first panel of Fig. <ref>. Practically, we followed the dynamics up to t=1000 to ensure stationarity. In Ref. <cit.>, we had estimated the critical coupling K_c≃ 1.6 from the peak of the variance of R(t→∞) with respect to K. By using this critical coupling, we further calculated the local order parameter Eq. (<ref>) for each node, averaged over 20 independent simulation runs. In Fig. <ref>, the local order parameters for three representative time steps are displayed by encoding the respective values to a color map. Since the simulations were started from a fully asynchronous state, we see that the system gradually evolves into a more synchronous state at larger times. However, even in the globally stationary state characterized by a constant R value, the local order parameters show rather inhomogeneous patterns, with some parts of the connectome are more synchronized (greener regions) while some other parts are less synchronized (redder regions), indicating the emergence of chimera states <cit.>. Note the disparity of the synchronization levels between different regions is quite large in this case, with greener regions almost fully synchronized and red regions fully unsynchronized. What is more, as partially shown by the second and the third panels at t=748 and t=1885, the distribution of the local order parameters can still evolve in the globally stationary state. Simulation results seemed to suggest quite random temporal behavior for the local order parameters (not shown here), but more careful studies for the long-time behavior are still needed to examine if it is periodic with a very long period. These results are thus suggestive of strong spatio-temporal fluctuations in chimera states, as it is typical for frustrated synchronization <cit.>. To provide more evidence of the Chimera states, we have calculated the order parameters in the steady state at K=1.6 in the nine largest communities, determined by the Louvain method <cit.>. However, the community dependence is rather weak in case of the Kuramoto model. So we enhanced the local synchronization by adding periodic forces within the framework of the SK model. The transition point shifts in the range F_c ∈ [ 0.05, 0.1 ]. Even more evident community dependence could be found in the frequency synchronization points estimated by the peaks of the variances of the order parameter Ω. As one can see in Fig. <ref>, frequency entrainment occurs in the range F'_c ∈ [ 0.025, 0.1 ] in different communities. That means that for certain forces some communities are in the super-, while others are in the sub-critical states locally, suggesting Chimeras. As it has already been shown in Refs. <cit.> the FF graph exhibits a weak modular structure. Much higher level of modularity can be observed in human connectomes, albeit on a coarse grained scale, describing the white matter. The large human connectomes obtained by DTI of MRI <cit.> has node numbers of order of a million. We have not been able to calculate the local order parameters and the spectral dimensions for such large systems. 
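For graphs of moderate size, the local order parameter defined above requires only a few matrix–vector operations; the memory footprint of million-node connectomes, not the formula itself, is the bottleneck mentioned above. The sketch below vectorises the computation, showing both the definition with binarised neighbourhoods and a weighted variant that is a natural choice for weighted connectomes; the random graph and phases are placeholders for the simulated data.

```python
import numpy as np

def local_order_parameter(theta, A):
    """r_i = |sum_j A_ij exp(i theta_j)| / N_i^neigh with binarised neighbourhoods."""
    A_bin = (A > 0).astype(float)
    z = A_bin @ np.exp(1j * theta)
    n_neigh = A_bin.sum(axis=1)
    with np.errstate(invalid="ignore", divide="ignore"):
        r = np.abs(z) / n_neigh
    return np.nan_to_num(r)                          # isolated nodes are assigned r_i = 0

def local_order_parameter_weighted(theta, W):
    """Weighted variant: each neighbour contributes with weight W_ij, normalised by sum_j W_ij."""
    z = W @ np.exp(1j * theta)
    s = W.sum(axis=1)
    with np.errstate(invalid="ignore", divide="ignore"):
        r = np.abs(z) / s
    return np.nan_to_num(r)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    N = 1000
    W = (rng.random((N, N)) < 0.01) * rng.random((N, N))
    W = np.triu(W, 1)
    W = W + W.T
    theta = rng.uniform(0.0, 2.0 * np.pi, N)         # random phases give low local order on average
    print(local_order_parameter(theta, W)[:5])
    print(local_order_parameter_weighted(theta, W)[:5])
```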
In <cit.> we calculated their graph dimensions, which proved to be above 3 of the embedding space, but lower than 4, due to the long fiber tracts, connecting distant regions. We show here, that synchronization of communities of the KKI-113 graph exhibits much more visible differences than that of the FF, suggesting strong Chimeras in case of the Kuramoto model running on them. This small-world graph contains 799 133 nodes, connected via 48 096 500 undirected and weighted edges and exhibit a hierarchical modular structure, because it was constructed from cerebral regions of the Desikan–Killany–Tourville parcellations, which is standard in neuroimaging <cit.>. The modularity quotient is much higher, than that of the FF: Q_KKI-113≈ 0.915 and the topological dimension is just d_g=3.4(1) <cit.>. As one can see on Fig. <ref> in certain communities the synchronization is high at K=3, while others are still practically unsynchronized at this coupling, suggesting Chimera states. §.§ Chimera states in the European high-voltage power grid network Unlike neural networks on which the oscillators are massless a power-grid network are massive and should be described by the second-order Kuramoto model. In this subsection, we attempt to show if chimera states can emerge in such systems. Power-grid networks are genuinely hierarchical modular networks if the detailed information for the medium- and low-voltage parts of the grids are also incorporated. Practically, it is almost impossible to infer the entire structure of large power-grid networks, but it is feasible to mimic it by adding medium- and low-voltage parts to the high-voltage (HV) skeleton, according to the empirical hierarchical distribution, as it was done in Ref. <cit.> We downloaded the European HV power grid from the “SciGRID Dataset” <cit.> encoding the 2016 status, deduced via processing google street-map. We have not supplemented this graph with lower-voltage parts, but it already contains 12 kV, 20 kV, ... links, which belong to the middle-voltage category, according to the definition of the 100 kV threshold for HV lines. This graph contains N=13478 nodes, interconnected via E=33844 links. After symmetrizing it, an average degree ⟨ k⟩ =2.51 was obtained. In Fig. 1 of Ref. <cit.> the degree distribution is shown. The tail of the degree distribution for k≥ 15 could be well fitted by a stretched exponential 8.25 × e^-0.53(5)k function, which renders this network at the threshold of robust/fragile: γ=3/2, as according to the definition in <cit.>, networks with a P(k>K)=C e^-k/γ cumulative degree distribution and γ < 3/2 are robust, based on a mean-field percolation theory under random node removals. The adjacency matrix, visualized on Fig. 2 of Ref. <cit.> proves that this is a highly modular graph, characterized by Q^EU=0.963. Furthermore, it is a small-world network according to the definition of the small-worldness coefficient <cit.>. By calculating the graph dimension using the breadth-first search algorithm as shown in the inset of Fig. 3 of Ref. <cit.>, d_g^EU=2.6(1) was obtained. Since the coupling between a pair of nodes of a power grid is proportional to the maximal power P_ij transmitted between them and inverse to the imaginary part X_ij of the impedance of the transmission line, weights computed from the normalized values of P_ij/X_ij had also been considered to construct the weighted network. 
By again extracting the cumulative density distributions of the first 200 eigenvalues of both the Laplacian matrices of the unweighted and weighted European power-grid networks, Fig. <ref> shows quite clean power laws. As listed in the plot legend and Table <ref>, the estimated spectral dimensions are both well below the critical dimension d_c=4. Hence, one can again expect to observe chimera states, although instead, the second-order Kuramoto model (<ref>) is going to be inspected in this case. We again tune the system to the verge of criticality. By solving Eq. (<ref>) to obtain the peak of the variance of R(t→∞), the critical couplings had been estimated to K_c=80 for the unweighted network <cit.> and K_c=7000 for the weighted network. The stationary states are typically reached after a few hundred time steps (see Ref. <cit.>), but we solved them up to t=20000 to ensure the stationarity of the system. The local order parameters, calculated in the stationary states after averaging over 100 samples, are then obtained with respect to these critical couplings. Fig. <ref> shows that inhomogeneous patterns, encoded again in a color map, indeed overwhelm the system. Due to the higher levels of synchronization, there are quite some proportions of oscillators with their local order parameters relatively closer to 1 as compared to the FF connectome case. Hence the color map shows the quantity 1-r_i instead. Since the differences in the local parameters in greener regions and redder regions are still quite apparent, we see that as suggested by the low spectral dimension, chimera states indeed can be observed in this case. Note that even though the weighted network is a bit less synchronous globally at K=7000 [R(t→∞)≃ 0.47] than the unweighted network at K=80 [R(t→∞)≃ 0.48], the weighted network still seems to be more synchronized locally in many parts of the network. This emphasizes the importance of incorporating edge weights to take into account more realistic couplings between the nodes. Comparing the local order parameter patterns in Fig. <ref> and Fig. <ref>, it is also interesting to note that less synchronous regions are typically also less clustered as compared to regions with higher levels of synchronization. This is in some sense in reminiscence of the analysis in Ref. <cit.>, in which it had been shown that chimera states can also be characterized by the order parameters of different moduli. To provide more evidence of the Chimera states, we have also calculated the steady-state R in the twelve largest communities, determined via the Louvain method with a modularity score close to the maximum Q ≈ 0.795, in the same way as in case of the FF <cit.>. As one can see in Fig.<ref>, synchronization occurs at different couplings in different communities, such that for small K-s the small communities are fully ordered, while the larger ones are still desynchronized. This is related to the size dependence of K_c in case of crossover, however here the the communities are not independent. Note, that the fully ordered communities have less than 100 nodes. § SUMMARY In this paper, we have demonstrated that chimera states can occur in Kuramoto-type models on large networks if the spectral dimension is low, i.e. d_s < 4, even if the graph dimension is not necessarily like that. That happened in case of the graph of the FF connectome, which exhibits d_g = 5.4(1). This is in agreement with the hypothesis, advanced for the first-order Kuramoto model in <cit.>. 
But as modularity is weak for FF, so do the Chimeras. We can show them by a community-level analysis with an applied periodic external field. In contrast, for a large human connectome possessing high modularity, we show strong community dependence of the local synchronization. Power grids can be described by the second-order Kuramoto model, which possesses inertia. We found that the European HV power grid has a graph dimension d_g = 2.6(1), but the spectral dimensions seem to be below d_s=2. Still, the occurrence of chimera-like patterns can be observed via order parameters and confirmed by a community-level synchronization study. We demonstrated the level of local synchronization by showing the local Kuramoto order parameter, but similar results have been found by calculating the local frequency spreads. We thank Kristóf Benedek and Bálint Hartmann for providing weight calculations of the European network, István Papp for exploring the communities, Jeffrey Kelling for developing the GPU solver code, and Róbert Juhász for the helpful discussions. This research was funded by ELKH grant SA-44/2021, and the Hungarian National Research, Development, and Innovation Office NKFIH grant K128989. Most of the numerical work was done on KIFÜ supercomputers of Hungary. § DATA AVAILABILITY STATEMENT Data are available on request from the corresponding author.
http://arxiv.org/abs/2307.00344v1
20230701134709
Sparse-Input Neural Network using Group Concave Regularization
[ "Bin Luo", "Susan Halabi" ]
stat.ML
[ "stat.ML", "cs.LG" ]
Sparse-Input Neural Network using Group Concave Regularization Bin Luo bin.luo2@duke.edu Department of Biostatistics and Bioinformatics Duke University Durham, NC 27708, USA Susan Halabi susan.halabi@duke.edu Department of Biostatistics and Bioinformatics Duke University Durham, NC 27708, USA August 1, 2023 =================================================================================================================================================================================================================================================================================================== Simultaneous feature selection and non-linear function estimation are challenging, especially in high-dimensional settings where the number of variables exceeds the available sample size in modeling. In this article, we investigate the problem of feature selection in neural networks. Although the group LASSO has been utilized to select variables for learning with neural networks, it tends to select unimportant variables into the model to compensate for its over-shrinkage. To overcome this limitation, we propose a framework of sparse-input neural networks using group concave regularization for feature selection in both low-dimensional and high-dimensional settings. The main idea is to apply a proper concave penalty to the l_2 norm of weights from all outgoing connections of each input node, and thus obtain a neural net that only uses a small subset of the original variables. In addition, we develop an effective algorithm based on backward path-wise optimization to yield stable solution paths, in order to tackle the challenge of complex optimization landscapes. Our extensive simulation studies and real data examples demonstrate satisfactory finite sample performances of the proposed estimator, in feature selection and prediction for modeling continuous, binary, and time-to-event outcomes. Neural networks, Feature selection, High dimensionality, LASSO, Concave penalty § INTRODUCTION In the past decade, advancements in molecular, imaging and other laboratory tests have led to a growing interest in high-dimensional data analysis (HDDA). High-dimensional data refers to a dataset that contains a large number of observed variables relative to the small sample size, which presents a significant challenge in building accurate and interpretable models. For example, in bioinformatics, hundreds of thousands of RNA expressions, Genome-Wide Association Study (GWAS) data, and microarray data are used to understand the biology of disease, with only hundreds of patients involved <cit.>. To address the curse of dimensionality, feature selection has become a critical step in HDDA. By identifying the most representative features to characterize the biology of the disease or the outcome, feature selection approaches can increase the model interpretability and improve the generalization of the model. There are various methods for feature selection, including filter methods <cit.>, wrapper methods <cit.>, and embedded methods <cit.>. Among them, penalized regression methods have become very popular in HDDA since the introduction of the least absolute shrinkage and selection operator (LASSO) <cit.>. Penalized regression method can perform simultaneous parameter estimation and feature selection by shrinking some of the parameter coefficients to exact zeros. 
While LASSO has been widely used to obtain sparse estimations in machine learning and statistics, it tends to select unimportant variables to compensate for the over-shrinkage for relevant variables <cit.>. To address the bias and inconsistent feature selection of LASSO, several methods have been proposed, including adaptive LASSO <cit.>, the minimax concave penalty (MCP) <cit.>, and the smoothly clipped absolute deviation (SCAD) <cit.>. However, most of these penalized methods assume linearity in the relationship between the variables and the outcomes, while the actual functional form of the relationship may not be available in many applications. Some additive non-parametric extensions have been proposed to resolve this problem <cit.>, but their models rely on sums of univariate or low-dimensional functions and may not be able to capture the complex interactions between multiple covariates. <cit.> propose the HSIC-LASSO approach that leverages kernel learning for feature selection while uncovering non-linear feature interactions. However, it suffers from quadratic scaling in computational complexity with respect to the number of observations. Neural networks are powerful tools for modeling complex relationships in a wide range of applications, from image <cit.> and speech recognition <cit.> to natural language processing <cit.> and financial forecasting <cit.>. Their state-of-the-art performance has been achieved through powerful computational resources and the use of large sample sizes. Despite that, high-dimensional data can still lead to overfitting and poor generalization performance for neural networks <cit.>. Recently, there have been novel developments in using regularized neural networks for feature selection or HDDA. A line of research focuses on utilizing the regularized neural networks, specifically employing the group LASSO technique to promote sparsity among input nodes <cit.>. These methods consider all outgoing connections from a single input neuron as a group and apply the LASSO penalty on the l_2 norm of weight vectors of each group. Other LASSO-regularized neural networks in feature selection can be found in the work of <cit.> and <cit.>. However, regularized neural networks incorporating LASSO suffer from a tendency to over-shrink the non-zero weight of relevant variables and include many false positives in the selected model. The adaptive LASSO was employed to alleviate this problem <cit.>, yet their results are limited to continuous outcomes and assume that the conditional mean function is exactly a neural network. The work in <cit.> bypassed the l_1 regularization by introducing stochastic gates to the input layer of neural networks. They considered l_0-like regularization based on a continuous relaxation of the Bernoulli distribution. Their method, however, requires a cutoff value for selecting variables with weak signals, and the stochastic gate is unable to completely exclude the non-selected variables during model training and prediction stages. In this paper, we propose a novel framework for sparse-input neural networks using group concave regularization to overcome the limitations of existing feature selection methods. Although concave penalties like MCP and SCAD have been shown to perform well in both theoretical and numerical settings for feature selection and prediction, they have not received the same level of attention as LASSO in the context of machine learning. 
Our proposed framework aims to draw attention to the underutilized potential of the concave penalty for feature selection in neural networks, by providing a comprehensive approach for simultaneous feature selection and function estimation in both low-dimensional and high-dimensional settings. In particular, our proposed method considers all outgoing connections from a single input neuron as a group and applies a proper concave penalty to the l_2 norm of weights for each group. By shrinking all the weights of certain groups to exact zeros, it obtains a neural net that uses only a small subset of variables. In addition, we develop an effective algorithm based on backward path-wise optimization that yields stable solution paths, to tackle the challenge of complex optimization landscapes. Our simulation studies and real data examples demonstrate the satisfactory finite sample performance of the group concave regularization, which outperforms existing methods in terms of feature selection and prediction accuracy for modeling continuous, binary, and time-to-event outcomes. The rest of this article is organized as follows. In Section 2, we formulate the problem of feature selection for a generic non-parametric model and introduce our proposed method. The implementation of the method, including the composite gradient descent algorithm and the backward path-wise optimization, is presented in Section 3. In Section 4, we conduct extensive simulation studies to demonstrate the performance of the proposed method. The application of the method to real-world datasets is presented in Section 5. Lastly, in Section 6, we discuss the results and their implications. § METHOD §.§ Problem setup Let X ∈ ℝ^d be a d-dimensional random vector and Y be a response variable. We assume the conditional distribution P_Y|X depends on a form of f(X_S) with a function f ∈ F and a subset of variables S ⊆ {1, ⋯, d}. We are interested in identifying the true set S of significant variables and estimating the function f so that we can predict Y based on the selected variables X_S. At the population level, we aim to minimize the loss min_{f ∈ F, S} 𝔼_{X,Y} ℓ(f(X_S), Y), where ℓ is a loss function tailored to a specific problem. In practical settings, the distribution of (X, Y) is often unknown, and instead only an independent and identically distributed (i.i.d.) random sample of size n is available, consisting of pairs of observations (X_i, Y_i)_{i=1}^n. Additionally, if the number of variables d is large, an exhaustive search over all possible subsets S becomes computationally infeasible. Furthermore, we do not assume any specific form of the unknown function f and aim to approximate f nonparametrically using neural networks. Thus, our goal is to develop an efficient method that can simultaneously select a variable subset S and approximate the solution f for any given class of functions using a sparse-input neural network. §.§ Proposed framework We consider function estimators based on feedforward neural networks. Let ℱ_n be a class of feedforward neural networks f_θ: ℝ^d ↦ ℝ with parameter θ. The architecture of a multi-layer perceptron (MLP) can be expressed as a composition of a series of functions f_θ(x) = L_D ∘ σ ∘ L_{D-1} ∘ σ ∘ ⋯ ∘ σ ∘ L_1 ∘ σ ∘ L_0(x), x ∈ ℝ^d, where ∘ denotes function composition and σ(x) is an activation function applied to each component of x.
Additionally, L_i(x) = W_i x + b_i, i = 0, 1, …, D, where W_i ∈ ℝ^{d_{i+1} × d_i} is a weight matrix, D is the number of hidden layers, d_i is the width defined as the number of neurons of the i-th layer with d_0 = d, and b_i ∈ ℝ^{d_{i+1}} is the bias vector in the i-th linear transformation L_i. Note that the vector θ ∈ ℝ^P is the column-vector concatenation of all parameters in {W_i, b_i: i = 0, 1, …, D}. We define the empirical loss of f_θ as ℒ_n(θ) = (1/n) ∑_{i=1}^n ℓ(f_θ(X_i), Y_i). The ideal scenario is to have a sparse-input neural network f_θ that only takes signals from the important variables, meaning that W_{0,j} = 0 for j ∉ S, where W_{0,j} denotes the j-th column vector of W_0. In order to minimize the empirical loss ℒ_n(θ) while inducing sparsity in W_0, we propose to train the neural network by minimizing the following group regularized empirical loss θ̂ = argmin_{θ ∈ ℝ^P} { ℒ_n(θ) + ∑_{j=1}^d ρ_λ(‖W_{0,j}‖_2) + α‖θ‖_2^2 }, where ‖·‖_2 denotes the Euclidean norm of a vector. The objective function in Eq. (<ref>) comprises three components: (1) ℒ_n(θ) is the empirical loss function, such as the mean squared error loss for regression tasks, the cross-entropy loss for classification tasks, and the negative log partial likelihood for proportional hazards models. Further details can be found in Appendix <ref>. (2) ρ_λ is a concave penalty function parameterized by λ ≥ 0. To simultaneously select variables and learn the neural network, we group the outgoing connections from each single input neuron that corresponds to each variable. The concave penalty function ρ_λ is designed to shrink the weight vectors of specific groups to exact zeros, resulting in a neural network that utilizes only a small subset of the original variables. (3) α‖θ‖_2^2, where α > 0, represents the ridge regularization term used to prevent overfitting in neural networks. Note that feature selection, employing ρ_λ, depends exclusively on the magnitudes of weights in the input layer. However, it is possible to diminish the influence of ρ_λ by reducing all weights in the input layer while simultaneously allowing larger weights in other layers, without affecting the network's output. The ridge regularization addresses this issue by promoting smaller, well-balanced weights, thereby improving model stability and mitigating overfitting. Note that when the number of hidden layers D = 0, the function f_θ reduces to a linear function, and the optimization problem in Eq. (<ref>) becomes the framework of the elastic net <cit.>, SCAD-L_2 <cit.>, and Mnet <cit.>, with the choice of ρ_λ to be LASSO, SCAD, and MCP, respectively. §.§ Concave regularization There are several commonly used penalty functions that encourage sparsity in the solution, such as LASSO <cit.>, SCAD <cit.>, and MCP <cit.>. When applied to the l_2-norm of the coefficients associated with each group of variables, these penalty functions give rise to group regularization methods, including group LASSO (GLASSO) <cit.>, group SCAD (GSCAD) <cit.>, and group MCP (GMCP) <cit.>. Specifically, LASSO, SCAD, and MCP are defined as follows.
* LASSO: ρ_λ(t) = λ|t|.
* SCAD: ρ_λ(t) = λ|t| for |t| ≤ λ; (2aλ|t| - t^2 - λ^2) / (2(a-1)) for λ < |t| ≤ aλ; (a+1)λ^2/2 for |t| > aλ, where a > 2 is fixed.
* MCP: ρ_λ(t) = λ ∫_0^{|t|} (1 - z/(aλ))_+ dz, where a > 0 is fixed.
It has been demonstrated, both theoretically and numerically, that the concave regularization methods of SCAD and MCP exhibit strong performance in terms of feature selection and prediction <cit.>; a small numerical sketch of these penalties is given below.
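The following minimal NumPy sketch (ours, not the authors' released code) evaluates the three penalties above. Since the group versions simply apply ρ_λ to the non-negative group norm ‖W_{0,j}‖_2, the functions are written for |t|, and the defaults a = 3.7 for SCAD and a = 3 for MCP are conventional choices assumed here, not values stated in this section.

import numpy as np

def lasso_penalty(t, lam):
    """LASSO: rho(t) = lam * |t|."""
    return lam * np.abs(t)

def scad_penalty(t, lam, a=3.7):
    """SCAD penalty at |t|: linear, then quadratic, then constant beyond a*lam."""
    t = np.abs(t)
    small = lam * t
    middle = (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))
    large = (a + 1) * lam**2 / 2
    return np.where(t <= lam, small, np.where(t <= a * lam, middle, large))

def mcp_penalty(t, lam, a=3.0):
    """MCP: lam * integral_0^|t| (1 - z/(a*lam))_+ dz, written in closed form."""
    t = np.abs(t)
    inside = lam * t - t**2 / (2 * a)   # for |t| <= a*lam
    outside = 0.5 * a * lam**2          # constant once the penalization rate hits zero
    return np.where(t <= a * lam, inside, outside)

# Example: penalty applied to the l2 norm of one input-layer column W_{0,j}.
w_col = np.array([0.3, -0.2, 0.1])
t = np.linalg.norm(w_col)
print(lasso_penalty(t, 0.5), scad_penalty(t, 0.5), mcp_penalty(t, 0.5))

All three agree with λ|t| for small |t|, while SCAD and MCP flatten out for large |t|, which is exactly the reduced-bias behavior discussed next.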
Unlike the convex penalty LASSO, which tends to over-regularize large terms and provide inconsistent feature selection, concave regularization can reduce LASSO's bias and improve model selection accuracy. The rationale behind the concave penalty lies in the behavior of its derivatives. Specifically, SCAD and MCP initially apply the same level of penalization as LASSO, but gradually reduce the penalization rate until it drops to zero when t > aλ. Given the benefits of concave penalization, we propose using group concave regularization in our framework for simultaneous feature selection and function estimation. § IMPLEMENTATION §.§ Composite gradient descent Note that the optimization in Eq. (<ref>) is not a convex optimization problem, since both the empirical loss function ℒ_n(θ) and the penalty function ρ_λ can be non-convex. To obtain a stationary point, we use the composite gradient descent algorithm <cit.>. This algorithm is also incorporated in <cit.> for sparse-input neural networks based on the LASSO regularization. Denote ℒ_{n,α}(θ) = ℒ_n(θ) + α‖θ‖_2^2 as the smooth component of the objective function in Eq. (<ref>). The composite gradient iteration for epoch t is given by θ^{t+1} = argmin_θ { (1/2)‖θ - θ̃^{t+1}‖_2^2 + ∑_{j=1}^d ρ_λ(‖W_{0,j}‖_2) }, where θ̃^{t+1} = θ^t - γ∇ℒ_{n,α}(θ^t) is the gradient update only for the smooth component ℒ_{n,α}(θ^t), which can be computed using the standard back-propagation algorithm. Here γ > 0 is the learning rate for the update and can be set as a fixed value or determined by employing the backtracking line search method, as described in <cit.>. Let A_j represent the index set of W_{0,j} within θ. We define A as the index set that includes all weights in the input layer, given by A = ⋃_{j=1}^d A_j. By solving Eq. (<ref>), we obtain the iteration form θ^{t+1}_{A^c} = θ̃^{t+1}_{A^c} and θ^{t+1}_{A_j} = h(θ̃^{t+1}_{A_j}, λ), for j = 1, ⋯, d. Here, A^c refers to the complement of the set A, and the function h represents the thresholding operator, which is determined by the specific penalty ρ_λ. By taking ρ_λ to be the LASSO, SCAD, and MCP penalty, it can be verified that the GLASSO, GSCAD, and GMCP solutions for the iteration in Eq. (<ref>) have the following form:
* GLASSO: h_GLASSO(z, λ) = S_g(z, λ).
* GSCAD: h_GSCAD(z, λ) = S_g(z, λ) if ‖z‖_2 ≤ 2λ; ((a-1)/(a-2)) S_g(z, aλ/(a-1)) if 2λ < ‖z‖_2 ≤ aλ; z if ‖z‖_2 > aλ.
* GMCP: h_GMCP(z, λ) = (a/(a-1)) S_g(z, λ) if ‖z‖_2 ≤ aλ; z if ‖z‖_2 > aλ,
where S_g(z; λ) is the group soft-thresholding operator defined as S_g(z; λ) = (1 - λ/‖z‖_2)_+ z. Therefore, we can efficiently implement the composite gradient descent by integrating an additional thresholding operation into the input layer. This operation follows the gradient descent step on the smooth component ℒ_{n,α}(θ). The calculation for epoch t can be summarized as follows: (1) compute the gradient of ℒ_{n,α}(θ^t) using back-propagation, (2) update θ̃^{t+1} ← θ^t - γ∇ℒ_{n,α}(θ^t), (3) update θ^{t+1}_{A^c} ← θ̃^{t+1}_{A^c} and θ^{t+1}_{A_j} ← h(θ̃^{t+1}_{A_j}, λ), for j = 1, ⋯, d. The final index set of the selected variables is Ŝ = {j: θ_{A_j} ≠ 0}. §.§ Backward path-wise optimization We are interested in learning neural networks not only for a specific value of λ, but also for a range of λ where the networks vary by the number of included variables. Specifically, we consider a range of λ from λ_min, where the networks include all or an excessively large number of variables, up to λ_max, where all variables are excluded and W_0 becomes a zero matrix. Since the objective function is not convex and has multiple local minima, the solution of Eq.
(<ref>) with random initialization may not vary continuously for λ∈ [λ_min, λ_max], resulting in a highly unstable path of solutions that are regularized by λ. To address this issue, we consider a path-wise optimization strategy by varying the regularization parameter along a path. In this approach, we use the solution of a particular value of λ as a warm start for the next problem. Regularized linear regression methods <cit.> typically adopt a forward path-wise optimization, starting from a null model with all variables excluded at λ_max and working forward with decreasing λs. However, our numerical studies for sparse-input neural networks showed that starting from a sparse solution as an initial model does not produce a larger model along the path until jumping to the full model at a sufficiently small λ. To tackle this problem, we implement a backward path-wise optimization approach, starting from a dense model at the minimum value of λ_min and solving toward sparse models up to λ_max with all variables excluded from the network. This dense-to-sparse warm start approach is also employed in <cit.> using LASSO regularization. To further illustrate the importance of using backward path-wise optimization in regularized neural networks, we investigate variables selection and function estimation of a regression model Y=f(X)+ϵ, where f(X)=log(|X_1|+0.1)+ X_1X_2+X_2 + exp(X_3+X_4) with 4 informative and 16 nuisance variables, and each X_i and ϵ follow the standard normal distribution. More details of the simulations are presented in Section 4. Figure <ref> shows the solution paths of GMCP and GLASSO based on different types of optimization. It is observed that non-pathwise optimization leads to fluctuations or variations in the solution path, whereas forward path-wise optimization tends to remain in the same sparse model until transitioning to a full model with a sufficiently small λ. In contrast, backward path-wise optimization using GMCP and GLASSO produces relatively smooth solution paths. It should be noted that GLASSO has a tendency to over-shrink the weight vectors of informative variables and include more variables in the model. In contrast, GMCP is designed to prevent over-shrinkage and offers a smooth transition from the full model to the null model. In addition to providing stable and smooth solution paths, backward path-wise optimization is advantageous computationally. In particular: * The consecutive estimates of weights in the path are close, which reduces the rounds of gradient descent needed for each iteration. Therefore, the bulk of the computational cost occurs at λ_min, and a lower number of iterations for the remaining λs results in low computational costs. * We observe that the excluded variables from previous solutions are rarely included in the following solutions. By pruning the inputs of the neural network along the solution path, further reduction in computation complexity can be achieved as the model becomes sparse. Since the computational cost scales with the number of input features, this approach can significantly speed up computation, particularly for high-dimensional data. §.§ Tuning Parameter Selection Two tuning parameters are required in our proposed framework: the group penalty coefficient λ and the ridge penalty coefficient α. The former controls the number of selected variables and yields sparser models for larger values of λ, while the latter imposes a penalty on the size of the network weights to prevent overfitting. 
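Before turning to how λ and α are tuned, the sketch below makes the backward path-wise composite gradient descent concrete. It is ours, not the authors' released GCRNN code, and it assumes PyTorch, a squared-error loss, a small ReLU MLP, a fixed learning rate, and a = 3 for the GMCP threshold; the helper names (fit_path, gmcp_threshold) are ours.

import torch
import torch.nn as nn

def group_soft_threshold(z, lam):
    """S_g(z; lam) = (1 - lam/||z||_2)_+ z, applied column-wise (one column per input variable)."""
    norms = z.norm(dim=0, keepdim=True).clamp_min(1e-12)
    return torch.clamp(1.0 - lam / norms, min=0.0) * z

def gmcp_threshold(z, lam, a=3.0):
    """Column-wise GMCP thresholding: firm shrinkage below a*lam, identity above."""
    norms = z.norm(dim=0, keepdim=True)
    shrunk = (a / (a - 1.0)) * group_soft_threshold(z, lam)
    return torch.where(norms <= a * lam, shrunk, z)

def make_mlp(d, hidden=(10, 5)):
    layers, d_in = [], d
    for h in hidden:
        layers += [nn.Linear(d_in, h), nn.ReLU()]
        d_in = h
    layers.append(nn.Linear(d_in, 1))
    return nn.Sequential(*layers)

def fit_path(X, y, lambdas, alpha=1e-3, gamma=1e-3, epochs=200):
    """Backward path-wise training: start dense at the smallest lambda, warm-start upward."""
    net = make_mlp(X.shape[1])
    path = []
    for lam in sorted(lambdas):                          # lambda_min -> lambda_max
        for _ in range(epochs):
            loss = ((net(X).squeeze(-1) - y) ** 2).mean()
            loss = loss + alpha * sum((p ** 2).sum() for p in net.parameters())
            net.zero_grad()
            loss.backward()
            with torch.no_grad():
                for p in net.parameters():               # gradient step on the smooth part
                    p -= gamma * p.grad
                W0 = net[0].weight                       # input layer: columns are the groups
                W0.copy_(gmcp_threshold(W0, lam, a=3.0)) # thresholding form h(., lam) from the text
        selected = (net[0].weight.norm(dim=0) > 0).nonzero().squeeze(-1).tolist()
        path.append((lam, selected))
    return path

Only the input-layer weight columns are thresholded, so excluded variables carry exactly zero incoming weights; in practice the models along this λ path would then be compared on a held-out validation set to pick λ (and similarly α), as described next.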
In all numerical studies presented in this paper, we adopted a 20% holdout validation set from the training data. The model was trained using the remaining data, and the optimal values for λ and α were selected from a fine grid of values based on their performance on the validation set. Python code and examples for the proposed group concave regularized neural networks are available at <https://github.com/r08in/GCRNN>. § SIMULATION STUDIES We assess the performance of the proposed regularized neural networks in feature selection and prediction through several simulation settings. The data are generated through the following function: f(X) = log(|X_1| + 0.1) + X_1 X_2 + X_2 + exp(X_3 + X_4), where each component of the covariate vector X = (X_1, ⋯, X_d)^T ∈ ℝ^d is generated from an independent standard normal distribution. Here d > 4 and the function f(X) is sparse in that only the first four variables are relevant to the outcome. We generate n i.i.d. random samples with continuous outcomes, binary outcomes, and time-to-event outcomes in the following three examples, respectively. (Regression Model) The continuous response Y is generated from a standard regression model with an additive error as follows: Y = f(X) + ϵ, where ϵ follows a standard normal distribution. (Classification Model) The binary response Y ∈ {0,1} is generated from a Bernoulli distribution with the following conditional probability: P(Y=1|X) = 1/(1 + exp(-f(X))). (Proportional Hazards Model) The survival time T follows the proportional hazards model with a hazard function of the form h(t|X) = h_0(t) exp(f(X)), where h_0(t) is the baseline hazard function. Thus, T = H_0^{-1}(-log(U) exp(-f(X))), where U is a uniform random variable in [0,1], and H_0 is the baseline cumulative hazard function defined as H_0(t) = ∫_0^t h_0(u) du. We considered a Weibull hazard function for H_0, with both the scale and shape parameters equal to 2. Among the n samples, a proportion equal to the censoring rate is randomly chosen as censored observations, with censoring indicator δ_i = 0; otherwise δ_i = 1 for event observations. The censoring rate is 0, 0.2, and 0.4 in our simulation. We define the observed time as Y_i = T_i if δ_i = 1 and Y_i = C_i if δ_i = 0, where the censoring time C_i is drawn from a uniform distribution on (0, T_i). For each example, we consider the low- and high-dimensional settings in the following scenarios: 1. Low dimension (LD): d = 20 and n = 300 and 500. 2. High dimension (HD): d = 1000 and n = 500. We perform 200 simulations for each scenario. The performance of the trained model in prediction and feature selection is evaluated on independently generated n random samples by the following measures: (1) Prediction score, which is defined as the R^2 score, classification accuracy, and C-index for the regression, classification, and proportional hazards model, respectively. (2) Model size (MS), the average number of selected covariates. (3) False positive rate (FPR), the percentage of selected but unimportant covariates: FPR = |Ŝ ⋂ S^c| / |S^c| × 100%. (4) False negative rate (FNR), the percentage of non-selected but important covariates: FNR = |Ŝ^c ⋂ S| / |S| × 100%. Recall that S represents the true index set of important variables and Ŝ = {j: ‖W_{0,j}‖_2 ≠ 0} denotes the index set of selected variables. In our numerical studies, we consider the concave regularizations GMCP and GSCAD for our proposed framework. We name the methods of regularized neural networks using GLASSO, GMCP, and GSCAD as GLASSONet, GMCPNet, and GSCADNet, respectively; a short sketch of the simulated data-generating process is given below.
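The following NumPy sketch (ours, not the authors' code) generates data for the three examples. Where the text is not explicit, it assumes the standard Weibull parameterization H_0(t) = (t/scale)^shape and the usual inversion formula for the survival time; both are assumptions.

import numpy as np

rng = np.random.default_rng(2023)

def f(X):
    """Sparse signal: only X1-X4 matter."""
    return (np.log(np.abs(X[:, 0]) + 0.1) + X[:, 0] * X[:, 1]
            + X[:, 1] + np.exp(X[:, 2] + X[:, 3]))

def simulate(n=500, d=1000, outcome="regression", censor_rate=0.2):
    X = rng.standard_normal((n, d))
    eta = f(X)
    if outcome == "regression":                      # Example 1
        return X, eta + rng.standard_normal(n)
    if outcome == "classification":                  # Example 2
        p = 1.0 / (1.0 + np.exp(-eta))
        return X, rng.binomial(1, p)
    # Example 3: proportional hazards with a Weibull baseline (scale = shape = 2),
    # so H0(t) = (t/2)^2 and H0^{-1}(u) = 2*sqrt(u)  (assumed parameterization).
    U = rng.uniform(size=n)
    T = 2.0 * np.sqrt(-np.log(U) * np.exp(-eta))
    delta = np.ones(n, dtype=int)
    censored = rng.choice(n, size=int(censor_rate * n), replace=False)
    delta[censored] = 0
    Y = T.copy()
    Y[censored] = rng.uniform(0.0, T[censored])      # C_i ~ Uniform(0, T_i)
    return X, (Y, delta)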
We compare the proposed group concave regularized estimator GMCPNet and GSCADNet with GLASSONet, neural network (NN) without feature selection (λ=0), random (survival) forest (RF), and the STG method proposed in <cit.>. We also include the oracle version of NN and RF (Oracle-NN and Oracle-RF) as benchmarks, where true relevant variables are known in advance and used directly in the model fitting process. See Appendix <ref> for the implementation details. §.§ Results Table <ref> presents a summary of the feature selection performance of the four approaches, namely STG, GLASSONet, GMCPNet, and GSCADNet, across all simulation scenarios. We exclude the results of the STG method for Example <ref> as it either selects all variables or none of them for the survival outcome. For both LD and HD settings, GMCPNet and GSCADNet consistently outperform the STG and GLASSONet in terms of feature selection. These models exhibit superior performance, achieving model sizes that closely matched the true model, along with low false positive rates (FPR) and false negative rates (FNR) for most scenarios. While STG performs well in certain LD settings, it tends to over select variables in HD scenarios with a large variability in the model size. On the other hand, GLASSONet is prone to selecting more variables, leading to larger model sizes in both LD and HD settings, which aligns with the inherent nature of the LASSO penalty. Figure <ref> displays the distribution of testing prediction scores for the regression, classification, and proportional hazards models (PHM) with a censoring rate of =0.2. The complete results of the PHM can be found in Appendix <ref>. GMCPNet and GSCADNet demonstrate comparable performance in both LD and HD settings, achieving similar results to the Oracle-NN and outperforming NN, RF, and even Oracle-RF in most of scenarios. STG performs similarly to Oracle-NN in the LD setting of the regression model, but its performance deteriorates in the HD setting and other models. Conversely, while GLASSONet outperforms or is comparable to the Oracle-RF method in the LD settings, it suffers from overfitting in the HD settings by including a large number of false positives in the final model. It is worth pointing out that the Oracle-NN outperforms the Oracle-RF in every scenario, indicating that neural network-based methods can serve as a viable alternative to tree-based methods when the sample size is sufficiently large relative to the number of predictors. Additionally, NN without feature selection performs the worst across all the simulation scenarios, highlighting the importance of feature selection, especially in the high-dimensional space. Overall, the simulation results demonstrate the superior performance of the concave penalty in terms of feature selection and prediction. The proposed GMCPNet and GSCADNet methods exhibit remarkable capabilities in selecting important variables with low FPR and low FNR, while achieving accurate predictions across various models. These methods show promise for tackling the challenges of feature selection and prediction in high-dimensional data. § REAL DATA EXAMPLE §.§ Survival Analysis on CALGB-90401 dataset We utilize the data from the CALGB-90401 study, a double-blinded phase III clinical trial that compares docetaxel and prednisone with or without bevacizumab in men with metastatic castration-resistant prostate cancer (mCRPC) to illustrate the performance of our proposed method. 
The CALGB-90401 data consists of 498,801 single-nucleotide polymorphisms (SNPs) that are processed from blood samples from patients. We assume a dominant model for SNPs and thus each of the SNPs is considered as a binary variable. Since most SNPs are irrelevant for predicting patient survival, we only consider 181 SNPs that are associated with DNA damage-repair genes, and 444 prioritized SNPs based on an updated literature search <cit.>. We also include the eight clinical variables that have been identified as prognostic markers of overall survival in patients with mCRPC <cit.>: opioid analgesic use (PAIN), ECOG performance status, albumin (ALB), disease site (defined as lymph node only, bone metastases with no visceral involvement, or any visceral metastases), LDH greater than the upper limit of normal (LDH.High), hemoglobin (HGB), PSA, and alkaline phosphatase (ALKPHOS). The final dataset has d = 635 variables with a number of patients n = 631 and censoring rate C = 6.8%. We consider the proportional hazard model in the form of Eq. (<ref>) for our proposed methods to identify clinical variables or SNPs that can predict the primary outcome of overall survival in these patients. To evaluate the feature selection and prediction performance of the methods, we randomly split the dataset 100 times into training sets (n=526) and testing sets (n=105) using a 5:1 allocation ratio. We apply the methods to each of the training sets and calculate the time-dependent area under the receiver operating characteristic curve (tAUC) on the corresponding testing sets. The tAUC assesses the discriminative ability of the predicted model and is computed using the Uno method <cit.>. The results of the 100 random splits are presented in Figure <ref>. Our proposed method, GSCADNet, outperforms the others in survival prediction (left panel). It is worth noting that the NN method, which lacks feature selection, tends to overfit in high-dimensional data and performs poorly. Although these three regularized methods of sparse-input neural networks perform similarly in survival prediction, GLASSONet has a tendency to over-select variables and the proposed GMCPNet and GSCADNet select a relatively smaller set of variables without compromising prediction performance (middle panel). The right panel of Figure <ref> demonstrates that GSCADNet successfully selects most of the significant clinical variables and detects some of the important SNPs in predicting overall survival. §.§ Classification on MNIST Dataset We aim to visualize the selection of variables by considering the classification problem on the MNIST dataset. The MNIST dataset is a well-known benchmark dataset in computer vision, consisting of grayscale images of handwritten digits from 0 to 9. In this study, we focus on the binary classification problem of distinguishing between 7s and 8s in the MNIST dataset. We evaluate our proposed methods GMCPNet and GSCADNet, along with existing methods GLASSONet, STG, NN, and RF, based on their feature selection and classification accuracy. The MNIST dataset consists of grayscale images with 28x28 pixels, which gives 784 variables. We randomly select 250 pictures of 7s and 8s from the MNIST dataset, respectively, to form a high-dimensional training dataset with d=784 and n=500. Note that the class labels depend only on the pixels of the central area of the images, and thus a good method for feature selection should identify the relevant pixels and classify the images of 7s and 8s. We also corrupt the images with i.i.d. 
random noise from a standard normal distribution so that the input features are not sparse. The trained models are evaluated on the testing dataset with 2002 images. We repeated the process of random sampling and model fitting 100 times, and the feature (pixel) selection and classification results are shown in Figure <ref>. We observe that GLASSONet, GMCPNet, GSCADNet all achieve median accuracies greater than 91% and outperform the other methods. While the heatmaps of feature selection show that GLASSONet, GMCPNet, GSCADNet consistently select relevant pixels in high frequencies, GLASSONet tends to over select variables and GMCPNet and GSCADNet choose irrelevant pixels in much lower frequencies (indicated by dark red colors). § DISCUSSION Among the plethora of feature selection methods, penalized regression has gained significant popularity. However, many of these methods rely on the assumption and application of linear theory, which may not capture the complex relationships between covariates and the outcome of interest. In biomedical research, for instance, researchers often normalize data and employ penalized techniques under a linear model for feature selection. However, relying solely on data transformation risks overlooking intricate biological relationships and fails to address the dynamic nature of on-treatment biomarkers. Moreover, advancements in molecular and imaging technologies have introduced challenges in understanding the non-linear relationships between high-dimensional biomarkers and clinical outcomes. Novel approaches are urgently needed to tackle these complexities, leading to an improved understanding of non-linear relationships and optimizing patient treatment and care. In this paper, we have proposed a novel framework that utilizes group concave regularization for feature selection and function estimation in complex modeling, specifically designed for sparse-input neural networks. Unlike the convex penalty LASSO, the concave regularization methods such as MCP and SCAD gradually reduce the penalization rate for large terms, preventing over-shrinkage and improving model selection accuracy. Our optimization algorithm, based on the composite gradient descent, is simple to implement, requiring only an additional thresholding operation after the regular gradient descent step on the smooth component. Furthermore, we incorporate backward path-wise optimization to efficiently navigate the optimization landscape across a fine grid of tuning parameters, generating a smooth solution path from dense to sparse models. This path-wise optimization approach improves stability and computational efficiency, potentially enhancing the applicability of our framework for sparse-input neural networks. The runtime of our proposed method over a solution path of λs (with a fixed α) can be comparable to or even shorter than training a single model with a fixed λ, such as the NN method without feature selection (λ=0). To illustrate this, we examine the algorithm complexity of the NN method, which can be approximated as (ndT), where T denotes the number of epochs for learning the neural network. In contrast, training our proposed method over a solution path of m λs has a complexity of (nd̅T'm), where d̅ represents the averaged number of inputs along the solution path with dimension pruning, and T' is the number of epochs for each λ in the path. In our simulation with the HD scenario (d=1000), we set T=5000, T'=200, and m=50. 
Assuming the number of inputs decreases equally along the solution path from the full model to the null model, we have d̅=d/2=500. Thus, ndT=nd̅T'm indicates that solving for an entire path of our proposed method requires a similar computation as training a single model. In real applications, especially in high-dimensional scenarios, the dimensionality usually drops quickly along the solution path. Therefore, d̅ can be much smaller than d/2, and thus solving for a whole solution path can be more computationally efficient. It is worth pointing out that we set T' to be small for the first parameter λ_min as well in the HD setting, to avoid overfitting of an initial dense model. In our numerical studies, the parameter tunings are limited to λ and α. However, in real-world applications, it may be necessary to tune additional hyperparameters, such as the learning rate, the number of layers, and the number of nodes in each layer. The computation cost associated with tuning these parameters can be reduced by leveraging parallel computing techniques. Furthermore, when the sample size is moderate and the important variables are sparse, we have observed that using a two- or three-layer neural network with a modest number of nodes per layer (e.g., 5 or 10 nodes per layer) is often sufficient for a wide range of datasets. One limitation of the proposed method arises in ultra-high dimensional scenarios where the number of variables reaches hundreds of millions. Directly applying the proposed sparse-input neural networks in such cases can lead to an exceedingly complex optimization landscape, making it computationally infeasible. To mitigate this limitation, one suggestion is to employ a pre-screening method to reduce the dimensionality to a more manageable size prior to applying the proposed approach. Another limitation pertains to the proposed group regularized method, which is primarily focused on individual feature selection. This limitation becomes particularly relevant when dealing with covariates exhibiting grouping structures, such as a group of indicator variables representing a multilevel categorical covariate, or scientifically meaningful groups based on prior knowledge. A potential future research direction could involve redefining the groups within the proposed framework. This could be achieved by considering all outgoing connections from a group of input neurons as a single group, enabling group selection and accommodating the presence of grouping structures. In conclusion, our study exhibits the advantages of employing group concave regularization for sparse-input neural networks. The findings highlight its efficacy in consistently selecting relevant variables and accurately modeling complex non-linear relationships between covariates and outcomes, across both low and high-dimensional settings. The proposed approach holds the promising potential to enhance modeling strategies and find wide-ranging applications, particularly in diseases characterized by non-linear biomarkers, such as oncology and infectious diseases. This research was supported in part by the National Institutes of Health Grants R01CA256157, R01CA249279, 1R21CA263950-01A1, the United States Army Medical Research Materiel Command grant Award Number HT9425-23-1-0393, and the Prostate Cancer Foundation Challenge Award. 
§ EMPIRICAL LOSS FUNCTION The empirical loss functions ℒ_n(θ) for the regression, classification, and survival models in Examples <ref>-<ref> are defined as follows:
* Mean squared error loss for regression tasks. This loss function measures the average squared difference between the true values Y_i and the predictions f_θ(X_i): ℒ_n(θ) = (1/n) ∑_{i=1}^n (Y_i - f_θ(X_i))^2.
* Cross-entropy loss for classification tasks. It is widely used in classification problems and quantifies the dissimilarity between the true labels Y_i and the predicted probabilities Ŷ_i of class 1. The predicted probability Ŷ_i is obtained by applying the sigmoid function to f_θ(X_i): ℒ_n(θ) = -(1/n) ∑_{i=1}^n [Y_i log(Ŷ_i) + (1 - Y_i) log(1 - Ŷ_i)].
* Negative log partial likelihood for proportional hazards models. It is derived from survival analysis and aims to maximize the likelihood of observing events while considering censoring information. It incorporates the event indicator δ_i, which is 1 if the event of interest occurs at time Y_i and 0 if the observation is right-censored. The negative log partial likelihood is defined as: ℒ_n(θ) = -(1/n) ∑_{i=1}^n δ_i { f_θ(X_i) - log ∑_{j ∈ R_i} exp(f_θ(X_j)) }. Here, R_i = {j: Y_j ≥ Y_i} represents the risk set just before time Y_i. The negative log partial likelihood is specifically used in the proportional hazards model.
§ COMPLETE RESULTS FOR SURVIVAL MODEL Figure <ref> shows that larger variations in C-index are associated with larger censoring rates overall. GMCPNet and GSCADNet achieve comparable results to Oracle-NN while surpassing all other methods, including Oracle-RSF. § SIMULATION WITH CORRELATED VARIABLES The simulation study in Section 4 focuses on independent covariates. However, in real-world applications, particularly in high-dimensional settings, the presence of correlations among covariates is common and presents a challenge for feature selection. In this section, we assess the effectiveness of the proposed method using simulated data that incorporates correlated variables. To be more specific, we extend the high-dimensional scenario described in Section 4 by generating a correlated covariate vector, denoted as X ∼ N(0, Σ). The correlation structure is defined using a power decay pattern, where Σ_ij = 0.5^|i-j|. This modification allows us to examine the performance of our method in the presence of correlation among the covariates. Comparing the results of feature selection for independent covariates in Table <ref> to the outcomes presented in Table <ref>, it becomes evident that STG and GLASSONet exhibit larger variations in selected model sizes, along with higher false negative rates (FNR) and false positive rates (FPR) in the regression model. This behavior can be attributed to the presence of correlated features. In contrast, the proposed GMCPNet and GSCADNet methods effectively identify relevant variables while maintaining relatively low false positive and negative rates across all models. Furthermore, Figure <ref> demonstrates that both GMCPNet and GSCADNet perform comparably to the Oracle-NN method in the regression and survival models, while outperforming other non-oracle approaches in the classification model. These findings indicate that the proposed methods exhibit robustness against correlations among covariates in terms of feature selection and model prediction. § IMPLEMENTATION DETAILS §.§ Simulation studies We employed Random Forest (RF) with 1000 decision trees for the model fitting process.
To ensure a fair comparison among all the neural-net-based methods, we adopted a ReLU-activated Multi-Layer Perceptron (MLP) with two hidden layers consisting of 10 and 5 units, respectively. The network weights were initialized by sampling from a Gaussian distribution with mean 0 and standard deviation 0.1, while the bias terms were set to 0, following the Xavier initialization technique <cit.>. The optimization of the neural networks was performed using the Adam optimizer. We implemented the STG method as described in <cit.>, where the learning rate (LR) and regularization parameter λ were optimized via Optuna with 500 trials, using 10% of the training set as a validation set. The number of epochs was 2000 for each trial. The parameter search ranges are displayed in Table <ref>. For all the methods falling within the framework of Equation (1) in the paper, we selected the optimal values of λ and α from a two-dimensional grid, with λ and α ranging over 50 and 10 evenly spaced values on a logarithmic scale, respectively. The selection was based on their performance on the validation set, which consisted of 20% of the training set. To deactivate feature selection, we set λ = 0 for NN and Oracle-NN. The learning rate γ was fixed at 0.001. For GLASSONet, GMCPNet, and GSCADNet, the number of epochs at λ_min was set to 2000 for the low-dimensional (LD) scenario and 200 for the high-dimensional (HD) scenario. For all other values of λ, the number of epochs was set to 200 for both LD and HD settings. The number of epochs for NN was consistently fixed at 5000. §.§ Real Data Example In the analysis of real data examples, the implementation details remain the same as in the high-dimensional (HD) scenario of the simulation studies, with the following modifications:
* For the survival analysis on the CALGB-90401 dataset, we utilized an MLP with two hidden layers, each consisting of 10 nodes. In hyperparameter tuning, we explored 100 values of λ ranging from 0.01 to 0.1 for GMCPNet and GSCADNet. Additionally, we increased the number of candidates for α to 50.
* In the classification task on the MNIST dataset, we adjusted the search range of α to [1e-3, 0.1].
The data from CALGB-90401 is available from the NCTN Data Archive at https://nctn-data-archive.nci.nih.gov/. The MNIST dataset is retrieved from its official source.
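As a concrete companion to these implementation details, the sketch below (ours, assuming PyTorch; the authors' released GCRNN code may differ) sets up the simulation MLP with the stated initialization and implements the negative log partial likelihood of Appendix A via a Breslow-style cumulative sum over risk sets. The helper names and the random data in the usage example are ours.

import torch
import torch.nn as nn

def make_experiment_mlp(d, hidden=(10, 5)):
    """ReLU MLP used in the simulations; weights ~ N(0, 0.1^2), biases = 0."""
    layers, d_in = [], d
    for h in hidden:
        layers += [nn.Linear(d_in, h), nn.ReLU()]
        d_in = h
    layers.append(nn.Linear(d_in, 1))
    net = nn.Sequential(*layers)
    for m in net:
        if isinstance(m, nn.Linear):
            nn.init.normal_(m.weight, mean=0.0, std=0.1)
            nn.init.zeros_(m.bias)
    return net

def neg_log_partial_likelihood(score, time, event):
    """-(1/n) sum_i delta_i [ f(X_i) - log sum_{j in R_i} exp(f(X_j)) ] (ties handled Breslow-style)."""
    order = torch.argsort(time, descending=True)     # descending times -> risk sets via cumsum
    score, event = score[order], event[order]
    log_risk = torch.logcumsumexp(score, dim=0)      # log sum over {j: Y_j >= Y_i}
    return -(event * (score - log_risk)).sum() / score.shape[0]

# Usage sketch with random data (illustrative only):
X = torch.randn(64, 20)
time = torch.rand(64)
event = torch.randint(0, 2, (64,)).float()
net = make_experiment_mlp(20)
loss = neg_log_partial_likelihood(net(X).squeeze(-1), time, event)
loss.backward()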
http://arxiv.org/abs/2307.02820v1
20230706072759
Evaluating raw waveforms with deep learning frameworks for speech emotion recognition
[ "Zeynep Hilal Kilimci", "Ulku Bayraktar", "Ayhan Kucukmanisa" ]
cs.SD
[ "cs.SD", "cs.AI", "eess.AS" ]
Evaluating raw waveforms with deep learning frameworks for speech emotion recognition
Zeynep Hilal Kilimci zeynep.kilimci@kocaeli.edu.tr Department of Information Systems Engineering, Kocaeli University, 41001, Kocaeli, Turkey
Ulkü Bayraktar ulkubayraktar@gmail.com Department of Electronics and Communication Engineering, Kocaeli University, 41001, Kocaeli, Turkey
Ayhan Küçükmanisa (corresponding author) ayhan.kucukmanisa@kocaeli.edu.tr Department of Electronics and Communication Engineering, Kocaeli University, 41001, Kocaeli, Turkey
August 1, 2023

Speech emotion recognition is a challenging task in the speech processing field. For this reason, the feature extraction process is of crucial importance for representing and processing speech signals. In this work, we present a model that feeds raw audio files directly into deep neural networks without any feature extraction stage for the recognition of emotions, utilizing six different data sets, namely the Berlin Database of Emotional Speech (EMO-DB), the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Toronto Emotional Speech Database (TESS), the Crowd-sourced Emotional Multimodal Actors dataset (CREMA), the Surrey Audio-Visual Expressed Emotion dataset (SAVEE), and TESS+RAVDESS. To demonstrate the contribution of the proposed model, traditional feature extraction techniques, namely the mel-scale spectrogram and mel-frequency cepstral coefficients, are blended with machine learning algorithms, ensemble learning methods, and deep and hybrid deep learning techniques. Support vector machine, decision tree, naive Bayes, and random forest models are evaluated as machine learning algorithms, while majority voting and stacking methods are assessed as ensemble learning techniques. Moreover, convolutional neural networks, long short-term memory networks, and a hybrid CNN-LSTM model are evaluated as deep learning techniques and compared with the machine learning and ensemble learning methods. To demonstrate the effectiveness of the proposed model, a comparison with state-of-the-art studies is carried out. Based on the experiment results, the CNN model surpasses existing approaches with 95.86% accuracy on the TESS+RAVDESS data set using raw audio files, thereby setting a new state of the art. The proposed model achieves 90.34% accuracy on EMO-DB with the CNN model, 90.42% on RAVDESS with the CNN model, 99.48% on TESS with the LSTM model, 69.72% on CREMA with the CNN model, and 85.76% on SAVEE with the CNN model in speaker-independent audio categorization problems.

Keywords: Speech emotion recognition, Raw audio files, Deep learning, LSTM, CNN, CNN-LSTM

§ INTRODUCTION Emotions are complex psychophysiological changes resulting from the interaction of individual moods with biochemical and environmental influences. These changes can be expressed in different ways, such as speech, facial expressions, body motions, and brain signals, conveying emotions like anger, sadness, happiness, fear, excitement, and surprise. Speech is a complex signal that includes significant information related to the content of the message and the features of the speaker (gender, emotion, language, accent, etc.).
That is why it has been explored by many disciplines and art forms. Speech emotion recognition (SER) is a common research field in the last decades. SER is utilized in different application areas such as human-machine interaction, education, management of multimedia contents, entertainment, automobile industry, text to speech conversion, medical diagnosis <cit.>. Emotion recognition from speech signal is reasonably hard since the styles of speaking (i.e. pronunciation), speech rates of the speakers is totally diverse from individual to individual and it modifies from location to location (i.e. distinct for native speakers and non-native speakers) <cit.>. Thence, it is more significant to pick up specific attributes of speech which are not influenced by the territory, culture, speaking genre of the talker. Various characteristics such as spectral, prosodic, and acoustic are employed by extracting features from speech signal for emotion recognition task in computer science <cit.>. Then, the procedure is proceeded by classifiers to determine the emotion of speech. Deep learning-based models are commonly employed by the researchers because of providing better predictions and results compared with traditional machine learning algorithms in different domains such as face recognition, voice recognition, image recognition <cit.>, <cit.>, <cit.>. The usage of deep learning architectures facilitates automatic feature selection process unlike gathering hand-crafted features. Lately, diverse deep learning-based models are also utilized for speech emotion recognition tasks in the state-of-the-art-studies. The literature works generally focus on to discover important attributes of speech signal using deep learning models <cit.> or demonstrate the performance of deep learning models on a specific feature extraction technique <cit.>. In this work, we obtain deep features from raw sound data and feeding them into different deep learning algorithms for speech emotion recognition task instead of employing conventional feature extraction techniques such as MFCC, MEL. In this work, evaluating raw waveforms is proposed without applying any feature extraction stage using traditional machine learning algorithms, ensemble learning approaches, and deep learning architectures for speech emotion recognition task. For this purpose, convolutional neural networks, long short-term memory networks, and hybrid CNN-LSTM models are evaluated deep learning techniques while support vector machine, decision tree, naive Bayes, random forests, majority voting, stacking models are evaluated as machine learning and ensemble algorithms. To demonstrate the contribution of the proposed framework, the performance of traditional feature extraction techniques and the results of state-of-the-art studies are compared with the proposed model on six different dataset. The utilization of raw audio files instead of employing feature extraction process for speech emotion recognition task shows remarkable results when compared to the literature studies. The remaining of the paper is organized as follows: Section <ref> presents literature review. Materials and methods used in the study are given in Section <ref>. Data acquisition and proposed framework are demonstrated in Section <ref>. In Section <ref>, experiment results and conclusions are represented. § LITERATURE REVIEW This section gives a brief summary of literature works for speech emotion recognition. 
<cit.> introduce convolutional neural network (CNN) architecture is introduced for speech emotion recognition task. Mel-frequency Cepstral Coefficients (MFCCs), Mel-scaled spectrogram, Chromagram, Spectral contrast feature, and Tonnetz representation are evaluated at the stage of feature extraction. After that, extracted features from sound files are fed into CNN to show the effectiveness of feature extraction models in RAVDESS, EMO-DB, and IEMOCAP data sets. The proposed model exhibits the best classification performance in EMO-DB data set with 86.1% of accuracy. <cit.> propose a new clustering based approach with the help of radial-based function network for SER. Determined key sequences are fed into Bidirectional long short-term memory network to obtain final state of the emotion. The proposed approach is assessed over IEMOCAP, EMO-DB, and RAVDESS data sets. Experiment results show that the proposed approach represents remarkable results in terms of classification accuracy when compared to the state-of-the-art studies. <cit.> presents a CNN-based framework for speech emotion recognition. To show the effectiveness of the model, IEMOCAP and RAVDESS data sets are evaluated. The authors report that the proposed model enhance the classification accuracy by 7.85% for IEMOCAP and 4.5% for RAVDESS. <cit.> evaluate 1D and 2D CNN-LSTM networks to recognize speech emotion. The performance of hybrid deep learning model is compared with deep belief network and CNN architecture over two data sets namely, IEMOCAP and EMO-DB. They report that the performance of hybrid deep learning model for SER is rather competitive when compared to the conventional techniques. <cit.> investigate the impact of feature extraction techniques to enhance the performance of speech emotion rate. For this purpose, Mel frequency cepstral coefficients, Discrete Wavelet Transform (DWT), pitch, energy and Zero crossing rate (ZCR) models are employed at the stage of feature extraction. To show the efficieny of feature extraction techniques, support vector machine, decision tree (DT) and LDA models are evaluated. The utilization of DT performs the best classification accuracy with nearly 85%. <cit.> propose a multi-learning strategy by providing end-to-end real time model for SER. The proposed dilated CNN (DCNN) model is evaluated on two benchmark data sets namely, IEMOCAP and EMO-DB. Authors report that the usage DCNN model exhibits significant accuracy results with 73% for IEMOCAP and 90 % for EMO-DB. <cit.> propose an ensemble-based framework for cross corpus multi-lingual recognition of speech emotion. The performance of an ensemble model, majority voting, is compared with conventional machine learning techniques. The utilization of ensemble model ensures enhancement in classification accuracy nearly 13% for Urdu data set, roughly 8% for German data set, 11% for Italian data set, and 5% for English data set. <cit.> concentrate on a lightweight CNN approach for speech emotion recognition task. To show the efficiency of proposed model, experiments are carried on IEMOCAP, and EMO-DB data sets. Experiment results indicate that a lightweight CNN model is capable to recognize emotion of speech with 77.01 % of accuracy for IEMOCAP data set, and 92.02% of accuracy for EMO-DB data set. <cit.> presents a novel statistical feature selection technique by taking into consideration average of each featue in the features set for SER. 
Recognition performance of the model is compared on EMO-DB, eNTERFACE05, EMOVO and SAVEE data sets by using SVM, MLP, and k-NN classifiers. Except eNTERFACE05 data set, SVM model outperforms other machine learning algorithms in terms of classification accuracy in all data sets. <cit.> propose DNN-decision tree SVM model by calculating the confusion degree of emotion with decision tree SVM and training with DNN architecture. To demonstrate the effectiveness of the proposed model, experiments are carried on Chinese Academy of Sciences Emotional data set. The proposed model achieves remarkable experiment results when compared to conventional SVM and DNN-SVM technique by ensuring nearly 6% and roughly 3% enhancement in the success of recognition rate, respectively.<cit.> focus on the multitask learning approach for speech emotion recognition by ensuring speech-to-text recognition and emotion categorization, simultaneously. The efficiency of the model is demonstrated on the IEMOCAP data set by achieving nearly 78% of accuracy. <cit.> present two-layer fuzzy multiple ensemble framework using fuzzy C-means algorithm and random forest model for SER. The proposed framework is capable to recognize emotions on CASIA and EMO-DB data sets by improving recognition accuracy when compared to back propagation and random forest models. <cit.> concentrate on the effect of feature selection model utilizing deep convolutional neural network (DCNN) for SER. After extracting features from pretrained models, the most discriminatory features are determined by a correlation-based feature selection technique. At the classification stage, support vector machines, random forests, the k-nearest neighbors algorithm, and neural network classifier are employed on EMO-DB, SAVEE, IEMOCAP, and RAVDESS data sets. The model achieves 95.10% for Emo-DB, 82.10% for SAVEE, 83.80% for IEMOCAP, and 81.30% for RAVDESS in terms of classification accuracies. <cit.> propose a novel deep multimodal model for spontaneous SER. The proposed model is based on the combination of three different audio inputs by feeding them into multi-CNN fusion model. The combination strategy performs promising classification results when compared with the sate-of-the-art results. <cit.> focus on SER by employing twine shuffle pattern and iterative neighborhood component analysis methods. The proposed model is based on feature generation and selection stages utilizing shuffle box and iterative neighborhood component analysis methodologies, respectively. To demonstrate the efficiency of the model, the experiments are carried out on EMO-DB, SAVEE, RAVDESS, EMOVO data sets. Proposed model achieves 87.43% for RAVDESS, 90.09% for EMO-DB, 84.79% for SAVEE, and 79.08% for EMOVO in terms of classification accuracy. <cit.> present domain invariant feature learning (DIFL) models to address speaker-independent speech emotion recognition. The experiments are performed on EMO-DB, eNTERFACE, and CASIA data set to demonstrate the contribution of the proposed model. Experiment results indicate that the utilization of proposed model exhibits remarkable results compared to the literature studies. To the best of our knowledge, our study is the first attempt to process raw audio files by blending them with machine learning and deep learning methods for the task of speech emotion recognition and differs from the aforementioned literature studies in this aspect. § DATASETS AND METHODOLOGY In this section, datasets used in the study, and proposed methodology are presented. 
Six different publicly available and widely applied datasets, The Berlin Database of Emotional Speech (EMO-DB), The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), Toronto Emotional Speech Database (TESS), Crowd-sourced Emotional Multimodal Actors dataset (CREMA), Surrey Audio-Visual Expressed Emotion (SAVEE), and TESS+RAVDESS are employed in the experiments. After that, methodology is introduced with feature extraction stage, models, and the proposed framework. The proposed speech emotion recognition methodology is demonstrated in Figure <ref>. §.§ Datasets Six different data sets (EMO-DB, RAVDESS, TESS, CREMA, SAVEE, TESS+RAVDESS) are utilized to enhance the generalization capacity of the consequences acquired in the proposed work. EMO-DB dataset <cit.> is composed of definition of different feelings (neutral, happiness, anger, disgust, sadness, boredom, and fear/anxiety) by ten different actors whom of 5 is female. The dataset includes 535 words in German and each of audio file has 16 kHz sampling frequency with 16 bit quantization. The waveform of each emotion is represented in Figure <ref>. RAVDESS dataset <cit.> contains English sentences voiced by 12 male and 12 female actors. RAVDESS is composed of 1,440 utterances with eight different emotion categories, namely, surprised, angry, happy, bored, sad, fearful, disgust, and neutral. TESS <cit.> comprises of audio records of 2 female speakers pronouncing English sentences. TESS contains 2,800 utterances of with anger, disgust, neutral, fear, happiness, sadness, bored, surprise emotional categories. CREMA <cit.> includes 7,442 original clips generated by 43 female and 48 male actors from various ethnicities such as Hispanic, Asian, African, American. Specified 12 sentence are vocalized capturing six different emotions. These are sad, happy, disgust, neutral, anger, fear. SAVEE dataset <cit.> covers 1,680 utterances vocalized by 14 male actors in seven different emotions. These are surprise, anger, happiness, disgust, neutral, fear, sadness. The sentences recorded by actors are picked up from the TIMIT Acoustic-Phonetic Continuous Speech corpus. The combination of TESS and RAVDESS data sets is called TESS+RAVDESS in this study. Because the emotional categories are common, there is no problem to consolidate them. The dataset consists of 4,240 utterances voiced by 14 female and 12 male actors. The distribution of the datasets used in the study is given in Table <ref>. §.§ Feature extraction and pre-processing Feature extraction stage plays an importance role at specifying the performance of any learning methodology. Eligible selection of feature could enable to a better trained method, while inconvenient features would crucially disrupt the training procedure <cit.>. In this work, we mainly focus on to detect the speech emotion from raw audio files without using hand-crafted features. The feature extraction stage is automatically carried out in the deep learning architectures by processing raw audio files, directly. To show the effectiveness of proposed model, the performance of the system is compared with the conventional feature extraction techniques, namely, Mel-scale Spectogram and Mel-Frequency Cepstral coefficients (MFCC). Mel-scaled spectrogram is widely applied in sound classification and speech emotion recognition tasks <cit.>. The features obtained with Mel-scaled spectrogram makes possible to imitate the sound frequency of human in a specific rank. 
The Mel-scaled spectrogram is known to perform well at recognition and at capturing timbre fluctuations in an audio file. On the other hand, it tends to be weak when a distinguishable representation of pitch classes and harmony is required <cit.>. The log-Mel spectrogram feature set consists of Mel-scaled spectrograms that characterize the emotional states. At the pre-processing stage, noise reduction, windowing, and framing are applied to the speech signal. A short-time Fourier transform of size 1024 and overlapping Hamming windows of 0.025 s with a 0.010 s hop are employed in the experiments. In addition, 128 equal-width log-energies are used. Finally, 168 features are obtained from the Mel-spectrogram processing. The MFCC <cit.> feature extraction technique is considered the feature extraction technique closest to the human auditory system. Initially, the original signal is translated from the time domain to the frequency domain using the discrete Fourier transform (DFT), and the power spectrum is computed. To reduce the spectral distortion caused by segmentation prior to the DFT, a Hamming window is applied. The frequencies are then warped from the hertz scale to the mel scale using a filter bank. Finally, feature vectors are extracted from the logarithm of the mel-scale power spectrum using the discrete cosine transform (DCT) <cit.>. At the pre-processing stage, noise reduction, windowing, and framing are applied to the speech signal in the same way as for the Mel-spectrogram. Forty features are obtained from the MFCC processing. The size of the MFCC feature matrix varies with the length of the input audio; for this reason, the first 2.5 seconds of each recording are used for feature extraction. Training acoustic models directly from raw waveform data is a challenging task in the speech recognition field. Traditional deep neural network-based acoustic methods are based on processing hand-crafted input features. In this work, inspired by <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, we propose a deep neural network-based acoustic method that is fed with raw multichannel waveforms as input and constructs high-level representative features without a separate feature extraction stage. As a pre-processing step, the raw sounds are first normalized to zero mean and unit variance. If the length of the audio data exceeds the specified upper limit (6 s), it is clipped; otherwise, the input array is zero-padded to that length (a code sketch of this pipeline is given below). 
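To make the pre-processing pipeline concrete, the following is a minimal sketch of the hand-crafted feature extraction and the raw-waveform preparation described above, written with librosa and NumPy. The STFT size (1024), the 0.025 s Hamming windows with a 0.010 s hop, the 128 mel bands, the 40 MFCCs, the 2.5 s MFCC clip and the 6 s raw-audio limit follow the text; the 16 kHz sampling rate, the function names and the exact aggregation into the stated 168-dimensional Mel feature vector are assumptions rather than the authors' exact implementation.

```python
import numpy as np
import librosa

SR = 16000             # sampling rate assumed from the 16 kHz recordings
N_FFT = 1024           # STFT size stated in the text
WIN = int(0.025 * SR)  # 25 ms Hamming window
HOP = int(0.010 * SR)  # 10 ms hop
N_MELS = 128           # 128 equal-width log-energies
N_MFCC = 40            # 40 MFCC features

def mel_features(y, sr=SR):
    """Log-Mel spectrogram features (hand-crafted baseline)."""
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=N_FFT, win_length=WIN, hop_length=HOP,
        n_mels=N_MELS, window="hamming")
    return librosa.power_to_db(mel)

def mfcc_features(y, sr=SR, max_sec=2.5):
    """First 2.5 s of audio -> 40 MFCCs (hand-crafted baseline)."""
    y = y[: int(max_sec * sr)]
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=N_MFCC, n_fft=N_FFT,
                                win_length=WIN, hop_length=HOP, window="hamming")

def raw_input(y, sr=SR, max_sec=6.0):
    """Raw-waveform input: zero mean, unit variance, clipped or zero-padded to 6 s."""
    y = (y - y.mean()) / (y.std() + 1e-8)
    n = int(max_sec * sr)
    if len(y) >= n:
        return y[:n]
    return np.pad(y, (0, n - len(y)))
```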
§.§ Machine learning based methods Machine learning, a subfield of artificial intelligence, is a way of teaching computers to do things that people do naturally, such as learning to recognize patterns in data. There are different machine learning algorithms, each suited to different types of problems. In this work, several popular machine learning algorithms and their ensemble versions are used to address the speech emotion recognition problem; a sketch of the resulting pipeline is given at the end of this subsection. SVM <cit.> is a machine learning algorithm used for classification (determining whether objects belong to a certain category) and regression (predicting future values). It is particularly useful for classification problems in which the objects in the data set can be cleanly divided into groups, or classes. The SVM algorithm divides the feature space into categories so that new data points can easily be assigned to the right category; this is done by creating a decision boundary that separates the different categories. The Support Vector Machine selects the points that are most helpful in constructing the separating hyperplane. These points are called "support vectors," which is why the algorithm is named as it is. The k-NN <cit.> algorithm assumes that similar things tend to be near each other. It is the simplest machine learning algorithm and is based on the premise that the class values of the nearest samples will be similar. Two quantities are used in this evaluation: the distance to the nearest samples of the point whose class value is to be predicted, and k (the neighbor count), the number of nearest neighbors over which the calculation is made. A decision tree <cit.> is an algorithm that models situations and makes decisions by dividing the data set into smaller subgroups according to certain rules (decision rules). It has a tree-like structure, with branches representing different possible outcomes. Trees are also useful for modeling resource costs and possible outcomes for decision making. The tree structure contains several elements: internal nodes represent the tests or attributes of each stage, each branch indicates an attribute outcome, and the path from the root to a leaf represents a classification rule. Naïve Bayes <cit.> classification is a method used to estimate the probability that a particular set of features belongs to a particular class. It uses Bayes' theorem to calculate the probability of each class, and then selects the class with the highest probability. This method is much faster than more complex methods and is often used to quickly determine which class a particular set of features belongs to. Ensemble methods combine the predictions of several different estimators to improve the accuracy and reliability of predictions. One of the best known and most widely used ensemble methods is majority voting. In this approach, different models make predictions and the prediction with the most votes is taken as the final decision. Another popular ensemble approach is stacking <cit.>. This approach includes a two-level learning process, base (level 0) and meta (level 1). The base classifiers run in parallel and their predictions are combined into a meta-dataset, which then becomes the input to the meta classifier. In essence, the stacking approach tries to learn the best way to combine the input predictions to obtain a more accurate output prediction. 
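As a rough illustration of how the classical baselines and their ensembles could be assembled, the sketch below uses scikit-learn with utterance-level MFCC statistics as input. The choice of estimators mirrors the algorithms described above, but the hyper-parameters, the feature summarization and the meta-classifier are illustrative assumptions, not the configuration used in the experiments.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier, VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def utterance_vector(mfcc):
    """Collapse an (n_mfcc, n_frames) MFCC matrix into a fixed-length vector."""
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Level-0 (base) classifiers, matching the algorithms described above
base = [
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
    ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))),
    ("tree", DecisionTreeClassifier()),
    ("nb", GaussianNB()),
]

# Majority voting over the base classifiers
voting = VotingClassifier(estimators=base, voting="hard")

# Stacking: level-0 predictions become the meta-data fed to a level-1 classifier
stacking = StackingClassifier(estimators=base,
                              final_estimator=RandomForestClassifier(),
                              cv=5)

# Usage (X: array of utterance vectors, y: emotion labels):
#   stacking.fit(X_train, y_train); y_pred = stacking.predict(X_test)
```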
§.§ Deep learning based methods In this work, three different deep learning approaches are used: a conventional CNN, an LSTM, and a CNN-LSTM. Deep learning is considered the cutting edge of artificial intelligence today. It is loosely inspired by the human learning system: with its hierarchical connections and multi-layered structure, it enables learning from the lowest-level features to the highest-level features, and the weights of the connections between layers are the central objects of learning. Convolutional Neural Networks (CNNs) were developed based on the specialization of neurons in the human visual and perceptual system. While two-dimensional filters are used on two-dimensional images, learning on one-dimensional data such as sound or time series is performed with one-dimensional filters. Recurrent Neural Networks (RNNs) are a type of deep learning structure used to predict the next step of a sequence. RNNs use the output of the previous step as the input of the current step, whereas classical feed-forward networks treat inputs independently of each other. As a result, each output an RNN generates depends on the previous steps, and the network effectively stores the results of previous steps in its memory. Although RNNs are successful at capturing short-term dependencies, they are not successful enough at capturing long-term dependencies. Because of these fundamental RNN problems, Long Short-Term Memory (LSTM) <cit.> networks were later proposed. The LSTM is a solution to the short-term memory problem. It solves this problem using a cell state and various gates. The cell state is a line that carries meaningful information across cells, and the gates use the sigmoid activation function to squash the data between 0 and 1: a value of 0 means that the information will be forgotten, and a value of 1 means that it will continue to be used as it is. An LSTM has three gates: the forget gate, the input gate, and the output gate. The forget gate determines how much of the information in memory will be forgotten or kept. The input gate updates the cell state based on the result of the sigmoid operation. The output gate determines the output (hidden state) passed to the next time step. The proposed CNN model, depicted in Figure <ref> and sketched in code at the end of this subsection, consists of one-dimensional convolutional layers combined with activation, batch normalization and dropout layers. The first layer consists of 256 filters with a kernel size of 1 × 5 and stride 1; its output is activated with a rectified linear unit (ReLU) and passed through a dropout layer with a ratio of 0.25. The second and third layers are also constructed with 256 filters, with the same stride and kernel size as the preceding layer. In these two layers, batch normalization is applied after the convolution and its output is sent to a dropout layer with a ratio of 0.25. After that, convolutional layers with 128 filters of size 1 × 5 are applied in the fourth and fifth layers, with ReLU activation and dropout used before the fifth convolutional layer. Then, two fully connected layers (FCN) of sizes 2432 and 8 follow the last convolutional layer. Finally, the output layer has 8 outputs for the emotions and uses the softmax function. The proposed LSTM network used in this work is shown in Figure <ref>. First, an LSTM block with 512 nodes is applied to the input data. Then batch normalization, a dropout layer with a ratio of 0.25, and an FCN of size 256 are applied, respectively. Next, batch normalization, a dropout layer with a ratio of 0.25, and an FCN of size 128 are constructed. After that, dropout and an FCN of size 64 are applied. Then, batch normalization is carried out and its output is sent to a dropout layer with a ratio of 0.25. Afterwards, two fully connected layers (FCN) of sizes 2000 and 8 follow the last LSTM with 50 nodes. Finally, the output layer has 8 outputs for the emotions and uses the softmax function. The CNN-LSTM model shown in Figure <ref> is applied to combine the feature extraction of CNN networks with the long-term dependency modeling of the LSTM. The first layer consists of 256 filters and the second layer of 128 filters, with a kernel size of 1 × 5 and stride 1. After that, batch normalization is carried out and its output is sent to a dropout layer with a ratio of 0.25. Then two further sets of convolutional layer, batch normalization and dropout are applied; the filter sizes of these convolutional layers are 128 and 64, respectively. Next, an LSTM block with 512 nodes is applied, followed by two fully connected layers (FCN) of sizes 2000 and 8. Finally, the output layer has 8 outputs for the emotions, as in the other two architectures. In the end-to-end setting, the high-level representative features are thus learned directly from the raw audio by the three deep and hybrid architectures, namely the CNN, the LSTM, and the CNN-LSTM, and the speech emotion recognition procedure is carried out on these learned deep features. 
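The following Keras sketch corresponds to the proposed one-dimensional CNN described above (five convolutional blocks with 256/256/256/128/128 filters of size 1 × 5 and stride 1, ReLU activations, batch normalization and 0.25 dropout, followed by fully connected layers and an eight-way softmax). The input length (6 s at 16 kHz), the use of "same" padding, the global average pooling before the dense layers, and the optimizer and loss settings are simplifying assumptions; the exact layer ordering in the authors' implementation may differ.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_len=96000, n_classes=8):
    """1-D CNN over raw waveforms (6 s at 16 kHz -> 96,000 samples assumed)."""
    inp = layers.Input(shape=(input_len, 1))
    # Block 1: 256 filters, kernel 5, ReLU, dropout 0.25
    x = layers.Conv1D(256, 5, strides=1, padding="same", activation="relu")(inp)
    x = layers.Dropout(0.25)(x)
    # Blocks 2-3: 256 filters with batch normalization and dropout
    for _ in range(2):
        x = layers.Conv1D(256, 5, strides=1, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
        x = layers.Dropout(0.25)(x)
    # Blocks 4-5: 128 filters, dropout before the final convolution
    x = layers.Conv1D(128, 5, strides=1, padding="same", activation="relu")(x)
    x = layers.Dropout(0.25)(x)
    x = layers.Conv1D(128, 5, strides=1, padding="same", activation="relu")(x)
    x = layers.GlobalAveragePooling1D()(x)   # pooling choice is an assumption
    # Fully connected layers of sizes 2432 and 8
    x = layers.Dense(2432, activation="relu")(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```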
§ EXPERIMENTAL RESULTS In this study, six different datasets are used to demonstrate the performance and generalization capability of the proposed method. Details of these datasets are given in Section 3.1. All datasets are divided into 80% training and 20% test data. The performance of the proposed methods is calculated using the accuracy formula given in (1). In (1), each class is denoted C_x (happy, neutral, etc.). TP (True Positive) denotes audio belonging to C_x that is correctly classified as C_x. FP (False Positive) represents all non-C_x samples classified as C_x. TN (True Negative) is all non-C_x samples not classified as C_x. FN (False Negative) represents all C_x samples not classified as C_x. Accuracy = (TP+TN)/(TP+TN+FP+FN) Evaluation results of the machine learning based methods are given in Table <ref> and Table <ref>. Table <ref> shows the results where MFCC is used for feature extraction in the machine learning-based methods, and Table <ref> shows the results where the Mel-spectrogram is used for feature extraction. In Table <ref>, SVM can be considered the more successful model, as it gives the best result on 4 of the 6 datasets. According to Table <ref>, Random Forest has the best results on all datasets. When Table <ref> and Table <ref> are evaluated together, the joint use of the Mel-spectrogram and Random Forest gives the more successful results. Considering that the machine learning-based methods did not show sufficient performance, the MFCC and Mel feature extraction methods are used together with the deep learning architectures given in Section 3.4. The MFCC and Mel-spectrogram features of the raw audio signals are extracted and given as input to the deep networks. Table <ref> and Table <ref> show the results using MFCC features and Mel-spectrogram features as inputs, respectively. It can be seen that using MFCC and the CNN together gives the best results on all datasets. In addition, it is clear that this approach is superior to the machine learning-based methods. Instead of extracting distinctive features of the audio signal using feature extraction methods such as MFCC or the Mel-spectrogram, the use of raw audio as input to the deep learning models is then analyzed. Here, the most important aim is to eliminate human intervention and to determine the most characteristic features automatically with the deep learning approach. Table <ref> shows the end-to-end deep learning results of the models given in Section 3.4. The training parameters of the deep architectures are given in Table <ref>. As can be seen in Table <ref>, the CNN-based deep learning approach gives the best results on 5 out of 6 datasets. Intriguingly, these results suggest that long-term dependency features play a minor role in speech emotion recognition. 
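A minimal sketch of the evaluation protocol used in the experiments above (the 80%/20% split, the accuracy of Equation (1) and a per-class confusion matrix) is given below using scikit-learn; the stratified split and the variable names are illustrative assumptions.

```python
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

def evaluate(model, X, y, test_size=0.20, seed=0):
    """80/20 split, overall accuracy as in Eq. (1) and a confusion matrix."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, stratify=y, random_state=seed)
    model.fit(X_tr, y_tr)
    y_pred = model.predict(X_te)
    acc = accuracy_score(y_te, y_pred)    # fraction of correctly classified samples
    cm = confusion_matrix(y_te, y_pred)   # rows: true class, columns: predicted class
    return acc, cm
```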
Table <ref> shows the comparison of recent methods and the proposed method for speech emotion recognition in the literature. The comparison is carried out on the EMO-DB, RAVDESS, SAVEE, TESS, CREMA, and TESS+RAVDESS datasets. The literature results used in the comparison in Table <ref> are those reported in the respective papers. In this table, the proposed method refers to the CNN method with raw audio input. As seen in Table <ref>, the proposed method achieves superior performance on 5 out of 6 datasets and the second-best result on the remaining one. The confusion matrices obtained using our proposed method with the CNN on the various datasets are shown in Fig. <ref>. The confusion matrices show how many times an observation's predicted class (the column in the table) matches its true class (the row in the table). Thus, it is possible to examine the degree of confusion between classes in detail. As seen in the confusion matrices, the proposed method does not show a definite and consistent confusion tendency between particular classes. § CONCLUSION In this work, an end-to-end deep learning based approach is proposed for speech emotion recognition. In the proposed method, raw audio signals are employed as input. Three different deep learning architectures, namely CNN, LSTM, and CNN-LSTM, are constructed. The feature extraction capability and performance of the proposed architectures are compared with MFCC and Mel-spectrogram features. In addition, traditional machine learning-based approaches are combined with MFCC and Mel-spectrogram features and their performance is compared with the proposed method. Experimental results show that the CNN-based deep learning approach yields remarkable results, with a maximum accuracy of 99.46%. In future work, transfer learning techniques will be used to improve performance. § COMPETING INTERESTS The authors declare that they have no conflict of interest.
http://arxiv.org/abs/2307.01088v1
20230703150828
Empirically Validating Conformal Prediction on Modern Vision Architectures Under Distribution Shift and Long-tailed Data
[ "Kevin Kasa", "Graham W. Taylor" ]
cs.LG
[ "cs.LG", "cs.CV", "stat.ML" ]
Empirically Validating Conformal Prediction on Modern Vision Architectures Under Distribution Shift and Long-tailed Data. Kevin Kasa and Graham W. Taylor (School of Engineering, University of Guelph, Guelph, Canada; Vector Institute for Artificial Intelligence, Toronto, Canada). Correspondence: Kevin Kasa, kkasa@uoguelph.ca. Keywords: conformal prediction, uncertainty estimation, out-of-distribution. Conformal prediction has emerged as a rigorous means of providing deep learning models with reliable uncertainty estimates and safety guarantees. Yet, its performance is known to degrade under distribution shift and long-tailed class distributions, which are often present in real world applications. Here, we characterize the performance of several post-hoc and training-based conformal prediction methods under these settings, providing the first empirical evaluation on large-scale datasets and models. We show that across numerous conformal methods and neural network families, performance greatly degrades under distribution shifts, violating safety guarantees. Similarly, we show that in long-tailed settings the guarantees are frequently violated on many classes. Understanding the limitations of these methods is necessary for deployment in real world and safety-critical applications. § INTRODUCTION Deep learning models have shown the ability to complete a diverse range of tasks with exceedingly high performance <cit.>. However, high performance metrics (e.g., accuracy) alone are insufficient for deployment in safety-critical applications, where uncertainty measures and safety guarantees that experts can trust are required <cit.>. Conformal prediction (CP) <cit.> is a promising method for addressing these limitations. Conformal prediction turns heuristic notions of uncertainty into reliable ones through a post-training adjustment, which can then be used to predict confidence sets that are guaranteed to contain the true class with some user specified error rate. Various conformal prediction methods <cit.> perform well on a number of complex tasks such as image classification and object detection <cit.>. However, these results thus far are largely limited to in-distribution and class-balanced data regimes. This is problematic since data encountered in real-world settings is often imbalanced <cit.> or subject to distribution shift <cit.>, and robustness to these settings is necessary for the safe deployment of ML <cit.>. Despite the importance of understanding performance in these real-world settings, there has thus far been no comprehensive investigation of the performance of popular conformal prediction methods under distribution shift and long-tailed data. Since conformal prediction assumes identically distributed data and guarantees provided are based on micro- rather than macro-averages, it is unsurprising that performance would degrade under shifted and long-tailed distributions. This phenomenon has been observed in small-scale datasets <cit.>. Nonetheless, the recent adoption of conformal prediction into deep learning and safety-critical domains <cit.> warrants specific investigation of these methods using modern neural network architectures and large-scale datasets that are more characteristic of data found “in the wild”. In this study, we evaluate four different conformal prediction methods on numerous distribution-shifted and long-tailed datasets and thoroughly characterize their performance under these conditions. 
We investigate across three deep learning model families, while also controlling for model size. Our primary findings are: * Safety guarantees in terms of coverage (Eq. <ref>) are violated even under small distribution shifts. * Class-conditional coverage is frequently violated in long-tailed distributions. * The size of the confidence sets, with smaller being more desirable, increases under both these settings. * The above results hold across all CP methods and model architectures. § METHODS In this study, four conformal prediction methods were evaluated across five distribution-shifted datasets and one long-tailed dataset, for image classification tasks. Three neural architecture families were used as the base classifier, to determine their effect on CP performance, which was evaluated using several metrics. §.§ Conformal Prediction Methods The common classification paradigm involves training a model π_θ(x) to predict a single label Y ∈ [K] := {1, ..., K}. In contrast, conformal prediction is a statistical method that can be used to predict confidence sets for machine learning models <cit.>. Formally, it aims to construct a confidence set 𝒞⊆ [K] such that the true class is included with some user specified error rate α: ℙ(Y_test∈𝒞(X_test)) ≥ 1 - α. This is done through a two-step post-processing procedure. In the calibration step, a score function s(x,y) is used on held-out data to transform a provisional uncertainty measure (e.g., softmax values) into conformity scores. The 1-α quantile of the conformity scores is then used to determine a threshold τ. In the prediction step, sets 𝒞(X) are constructed on new unseen data by including all the labels whose conformity scores fall within the threshold, guaranteeing 1-α coverage. Importantly, this guarantee is known as marginal coverage, since it holds in expectation unconditionally across all data points rather than per-class. The returned confidence sets can also be used as an uncertainty estimate, with larger confidence sets | 𝒞(X) | suggesting greater uncertainty in the predictions. The threshold conformal prediction (THR) method <cit.> generally produces the smallest average set sizes. Here, the confidence sets are constructed as: 𝒞(x; τ) := {k ∈ [K]: s(x,k) > τ} The score function is defined as s(x, y) = π_θ(x)_y, and the threshold τ is computed as the α(1 + 1/N_cal) quantile of the calibrated conformity scores. During calibration, the softmax value corresponding to the true class y of the input x is used in the conformity scores. At test time, this method includes in the set those classes whose softmax score is greater than the calibrated threshold. Although THR produces small set sizes, it may lead to “uneven” coverage, with difficult classes achieving worse coverage. Adaptive prediction sets (APS) <cit.> were developed to improve conditional coverage, with the trade-off of larger set sizes. In the APS method, the conformity scores are calculated by accumulating softmax values: s(x,y) = ∑_j=1^y π̂_θ (x)_j, where π̂_θ (x) denotes the softmax values for input x sorted from greatest to smallest. Subsequently, sets are constructed by including values less than the threshold τ: 𝒞(x;τ) := {k ∈ [K]: s(x,k) < τ}. Similarly to THR, the conformity scores with respect to the true class y_i are used for calibration, and the (1-α)(1 + 1/N_cal) quantile is used to find the value τ that ensures marginal coverage on test examples. 
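To illustrate the calibration and prediction steps described above, the following NumPy sketch implements the THR and APS procedures for a held-out calibration set of softmax outputs. The quantile conventions follow the formulas in the text, while the array layout, the finite-sample quantile interpolation and the function names are illustrative assumptions.

```python
import numpy as np

def thr_calibrate(cal_probs, cal_labels, alpha=0.10):
    """THR: tau is the alpha*(1 + 1/n) quantile of the true-class softmax scores."""
    n = len(cal_labels)
    scores = cal_probs[np.arange(n), cal_labels]        # s(x, y) = pi(x)_y
    return np.quantile(scores, alpha * (1 + 1 / n), method="lower")

def thr_sets(test_probs, tau):
    """Include every class whose softmax score exceeds the threshold."""
    return test_probs > tau                             # boolean (n_test, K) mask

def aps_calibrate(cal_probs, cal_labels, alpha=0.10):
    """APS: tau is the (1-alpha)*(1 + 1/n) quantile of cumulative sorted scores."""
    n = len(cal_labels)
    order = np.argsort(-cal_probs, axis=1)              # classes sorted high -> low
    sorted_p = np.take_along_axis(cal_probs, order, axis=1)
    cumsum = np.cumsum(sorted_p, axis=1)
    rank_of_true = np.argmax(order == cal_labels[:, None], axis=1)
    scores = cumsum[np.arange(n), rank_of_true]         # cumulative mass up to y
    level = min((1 - alpha) * (1 + 1 / n), 1.0)
    return np.quantile(scores, level, method="higher")

def aps_sets(test_probs, tau):
    """Include classes whose cumulative sorted probability is below tau."""
    order = np.argsort(-test_probs, axis=1)
    sorted_p = np.take_along_axis(test_probs, order, axis=1)
    keep_sorted = np.cumsum(sorted_p, axis=1) < tau
    sets = np.zeros_like(test_probs, dtype=bool)
    np.put_along_axis(sets, order, keep_sorted, axis=1)
    return sets
```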
Regularized adaptive prediction sets (RAPS) <cit.> build on APS by modifying the conformity scores to include a penalty λ for classes beyond some specified threshold k_reg. Specifically, the score function is defined as: s(x,y) := ∑_j=1^o_x(y)π̂_θ (x)_j + λ· (o_x(y) - k_reg)^+, where o_x(y) is the ranking of label y among the sorted probabilities, and (·)^+ indicates the positive part of the expression. The confidence sets are then defined the same as in Equation <ref>. The regularization helps to exclude labels deep in the tail of the probability distribution that would otherwise have been included, since labels now require a greater score to be included in the set. This helps to produce smaller prediction sets than APS (albeit not as small as THR), and has been shown to work well on large datasets like ImageNet <cit.>. In our experiments, convolution-based networks use values of λ=0.01 and k_reg=5, and transformer-based networks use λ=0.1 and k_reg=2. The CP methods described thus far are implemented after a model is trained, which does not directly optimize the underlying model to produce high-performing confidence sets. Conformal training (ConfTr) <cit.> was proposed to address this, by simulating the conformal prediction process during training. This is done by splitting each training batch B into a calibration B_cal and prediction B_pred subset. Just like in regular CP, B_cal is used to calibrate the threshold τ, and confidence sets are formed on B_pred. To perform the thresholding step, differentiable sorting <cit.> is used to find the quantiles of the conformity scores in a way that can be back-propagated during training. The size of the confidence sets is then used as the loss function to be minimized during training: ℒ_size = max(0, ∑_k=1^K E_θ,k(x;τ) -κ). In Equation <ref>, E_θ,k(x;τ) is a “smooth” assignment of class k to the confidence set, calculated as E_θ,k(x;τ) := σ((s(x,k) - τ)/T), where σ(·) is the sigmoid function and T ∈ [0,1] is a temperature parameter controlling the smoothness. This penalizes the set sizes, and the hyper-parameter κ∈{0,1} determines whether or not sets of size one are penalized (i.e., κ=1 means that singleton sets will incur no loss). An additional classification loss can be included to ensure the true label is included in the confidence sets: ℒ_class = ∑_k=1^K[(1-E_θ, k(x;τ)) ·1[y=k] ]. A weighted combination ℒ = ℒ_class + λℒ_size can then be used to train the model. For this method, a ResNet-50 pre-trained on ImageNet <cit.> was used as the base model. The training methodology and hyper-parameters closely follow those used by the original authors on the CIFAR-100 dataset <cit.>. This included re-initializing the final fully connected layer, and training one baseline model using cross-entropy loss and one with the combined ℒ_size and ℒ_class losses, defined in Equation <ref> and Equation <ref>. Any CP method can be used to predict the confidence sets during training; however, in practice, THR has been shown to produce better results, so that is used in this study for the ConfTr experiments. Because ConfTr relies on smooth sorting / assignment operations, post-training conformal prediction is still performed to ensure the formal guarantees are maintained. §.§ Evaluation Metrics The primary metrics used for evaluation are coverage and inefficiency. Coverage measures the fraction of true labels that are actually included in the confidence set: Cover := 1/N_test∑_i = 1^N_test1 [y_i ∈𝒞(x_i)]. 
The conformal prediction process guarantees that ℙ(Y_test∈𝒞(X_test)) ≥ 1 - α, thus the Cover metric should be ≥ 1-α on average. However, conformal prediction does not guarantee class conditional coverage: ℙ(Y_test∈𝒞(X_test) | Y_test = y) ≥ 1 - α. We can capture conditional performance using a “macro” coverage metric. First, we consider Cover(k) to be the coverage computed only on test points from class k ∈ [K]. The macro coverage is then: Macro Cover := 1/K∑_k=1^K Cover(k). The non-conditional guarantees of conformal prediction mean that although across an entire dataset the desired coverage may be maintained, there may be classes which violate the desired coverage level. This is especially pertinent for long-tailed datasets. Thus, the number of classes that violate the coverage level is found: Cover Violation := ∑_k=1^K 1[ Cover(k) < 1-α]. Inefficiency is a measure of the size of the confidence sets. The prediction sets must both provide adequate coverage (contain the right class), and be informative; very large prediction sets are of little use. Inefficiency is measured as: Ineff := 1/N_test∑_i=1^N_test| 𝒞(x_i) |. The macro inefficiency is also calculated, to determine if some classes tend to return particularly large sets. Similarly to Equation <ref>, we define Ineff(k) as the inefficiency on class k, and the macro inefficiency as: Macro Ineff := 1/K∑_k=1^K Ineff(k). The macro coverage and inefficiency metrics will be used to characterize performance on the long-tailed datasets. §.§ Datasets Distribution Shift. We use the ImageNet <cit.> dataset to train our neural networks and calibrate the CP classifiers. Following previous works on conformal prediction <cit.>, we reserve 50% of the ImageNet validation set to find the threshold τ. This same threshold is used to form prediction sets on the remaining ImageNet validation set, as well as the following distribution-shifted datasets: * ImageNetV2 <cit.> is a new ImageNet test set collected by closely following the same format and collection process as ImageNet, with the goal of mimicking the original data distribution.[It is difficult to conclude whether this dataset represents a true distribution shift in the absence of convincing generalization error bounds for ImageNet-scale DNNs; however, we adopt the hypothesis that it indeed represents a small shift.] * ImageNet-C <cit.> applies common visual corruptions to the ImageNet validation set. In this study, the Gaussian noise, motion blur, brightness, and contrast corruptions are investigated, representative of the four main categories — noise, blur, weather, and digital, respectively. * ImageNet-A <cit.> contains naturally adversarial images that a ResNet-50 incorrectly classifies, but can be correctly classified by humans. * ImageNet-R <cit.> consists of rendered versions of ImageNet classes, such as drawings, cartoons, etc. The details of these datasets are summarized in Table <ref>. Metrics are reported as the average across ten trials, to account for variation in the calibration split. Long-tailed labels. Conformal prediction performance on long-tailed data distributions was evaluated on the PlantNet-300k dataset <cit.>. This is a highly imbalanced dataset, with 80% of classes accounting for only 11% of the total number of images. In addition to the 243,916 training examples, PlantNet-300k has defined validation and test sets, each with 31,118 examples and at least one image of each class in each set. 
The validation set is used to calibrate the conformal prediction methods and find the threshold, and the test set is used to form confidence sets and evaluate performance. Here, all three data splits (train, validation, and test) are long-tailed, meaning that the conformal calibration process is conducted on highly imbalanced data. §.§ Deep Learning Models To account for differences in model architecture and training algorithms, three distinct model families were evaluated: * ResNets <cit.> are prototypical convolutional neural networks. * Vision Transformers (ViT) <cit.> are transformer-based architectures that are pre-trained on ImageNet-21k <cit.>, before being fine-tuned on ImageNet-1k. * Data efficient image Transformers (DeiT) <cit.> are also transformer networks, however they are trained only on ImageNet-1k following a carefully designed training procedure. § EXPERIMENTS AND RESULTS §.§ Distribution Shift Our results on alternate ImageNet test sets are summarized in Figure <ref>. We can see that the desired coverage is consistently violated across all models. Distribution shift also leads to increased inefficiency — a proxy for the increased uncertainty of the underlying model. The coverage target is violated even on small distribution shifts, such as ImageNet-V2, which was purposefully and carefully constructed to match the original ImageNet distribution as closely as possible. The inability of these methods to maintain coverage even on minor distribution shifts highlights the risks of deployment in real world situations, without additional safety features. Smaller models exhibit worse inefficiency, and often lower coverage rates. The larger ViT / DeiT models perform best overall with the smallest degradation under distribution shift. These results highlight the value of combining conformal prediction with modern, high-performing deep learning models. It affirms that efforts to improve the performance of the base model may improve the performance of conformal prediction methods under distribution shift. Refer to Appendix <ref> for detailed results on these datasets, as well as ImageNet-C results at each corruption level. Further, Appendix <ref> shows the relationship between model accuracy and CP coverage, and Appendix <ref> includes results on the recent ImageNet-W <cit.> dataset. Table <ref> shows the results of the conformal training method. As expected, the ConfTr method leads to smaller sets on the in-distribution data, however, this does not translate to improved coverage on distribution-shifted data. §.§ Long-tailed Label Distributions Table <ref> shows the results on the long-tailed PlantNet-300k dataset. Although the target coverage of 0.90 is maintained marginally across the entire dataset, it is frequently violated on a class-conditional basis. Indeed, there are often hundreds of classes with violated coverage levels, leading to a violation of coverage on up to 70% of the classes in the worst case. This is consistent across all models and methods, and highlights the difficulty of applying conformal prediction methods to long-tailed data distributions.The ineffectiveness of approximating class-conditional coverage on PlantNet-300k is further demonstrated in the Appendix (see Table <ref>). The Appendix also includes the results of experiments on the iNaturalist-2018 <cit.> and -2019 <cit.> datasets (see Table <ref>). 
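For completeness, the sketch below shows how the coverage, inefficiency and class-conditional (macro) metrics defined in the Evaluation Metrics subsection can be computed from boolean prediction-set masks; the variable names and array layout are illustrative.

```python
import numpy as np

def cp_metrics(sets, labels, alpha=0.10):
    """Coverage, inefficiency and their class-conditional (macro) counterparts.

    sets:   boolean array of shape (n_test, K), True if class k is in C(x_i)
    labels: integer array of true classes, shape (n_test,)
    """
    n, K = sets.shape
    covered = sets[np.arange(n), labels]                 # 1[y_i in C(x_i)]
    cover = covered.mean()                               # "Cover"
    ineff = sets.sum(axis=1).mean()                      # "Ineff"

    present = [k for k in range(K) if np.any(labels == k)]
    per_class_cover = np.array([covered[labels == k].mean() for k in present])
    per_class_ineff = np.array([sets[labels == k].sum(axis=1).mean() for k in present])

    return {
        "cover": cover,
        "ineff": ineff,
        "macro_cover": per_class_cover.mean(),           # "Macro Cover"
        "macro_ineff": per_class_ineff.mean(),           # "Macro Ineff"
        "cover_violations": int((per_class_cover < 1 - alpha).sum()),  # "Cover Violation"
    }
```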
§ CONCLUSION In this paper, we studied the performance of conformal prediction methods under distribution shift and long-tailed data, on large-scale datasets and modern neural architectures. We show that performance degrades in these regimes, and coverage guarantees are frequently violated. We also observed increased inefficiency, the average size of the conformal sets. While violation of coverage guarantees is undesirable, inefficiency indicates model uncertainty. A good model should exhibit heightened uncertainty with OOD examples. There have been several recent methods developed in dealing with distribution shift <cit.> and class-conditional coverage <cit.>. However, these have thus far been developed mostly on small-scale datasets, and it remains to be seen how they translate to the large-scale datasets studied here. This is something future works may tackle, and we hope that our results will serve as baselines upon which new conformal prediction methods and novel algorithms and architectures for deep learning can improve. Ultimately, this work highlights the challenges that conformal prediction methods may face in real world applications, where class imbalance is common and data distributions are ever-shifting. Developing and empirically evaluating conformal prediction methods that are more robust to these admittedly difficult settings is a key requirement to their adoption in safety-critical environments. icml2023 45 urlstyle [Amodei et al.(2016)Amodei, Olah, Steinhardt, Christiano, Schulman, and Mané]amodei2016concrete Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., and Mané, D. Concrete problems in ai safety, 2016. [Amoukou & Brunel(2023)Amoukou and Brunel]amoukou_adaptive_2023 Amoukou, S. I. and Brunel, N. J. B. Adaptive Conformal Prediction by Reweighting Nonconformity Score, March 2023. URL <http://arxiv.org/abs/2303.12695>. arXiv:2303.12695 [cs, stat]. [Angelopoulos et al.(2022a)Angelopoulos, Bates, Malik, and Jordan]angelopoulos_uncertainty_2022 Angelopoulos, A., Bates, S., Malik, J., and Jordan, M. I. Uncertainty Sets for Image Classifiers using Conformal Prediction, September 2022a. URL <http://arxiv.org/abs/2009.14193>. arXiv:2009.14193 [cs, math, stat]. [Angelopoulos et al.(2022b)Angelopoulos, Bates, Candès, Jordan, and Lei]angelopoulos_learn_2022 Angelopoulos, A. N., Bates, S., Candès, E. J., Jordan, M. I., and Lei, L. Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control, September 2022b. URL <http://arxiv.org/abs/2110.01052>. arXiv:2110.01052 [cs, stat]. [Barber et al.(2023)Barber, Candes, Ramdas, and Tibshirani]barber_conformal_2023 Barber, R. F., Candes, E. J., Ramdas, A., and Tibshirani, R. J. Conformal prediction beyond exchangeability, February 2023. URL <http://arxiv.org/abs/2202.13415>. arXiv:2202.13415 [stat]. [Bhatnagar et al.(2023)Bhatnagar, Wang, Xiong, and Bai]bhatnagar_improved_2023 Bhatnagar, A., Wang, H., Xiong, C., and Bai, Y. Improved Online Conformal Prediction via Strongly Adaptive Online Learning, February 2023. URL <http://arxiv.org/abs/2302.07869>. arXiv:2302.07869 [cs, math, stat]. [Blondel et al.(2020)Blondel, Teboul, Berthet, and Djolonga]blondel2020fast Blondel, M., Teboul, O., Berthet, Q., and Djolonga, J. Fast differentiable sorting and ranking, 2020. 
[Brown et al.(2020)Brown, Mann, Ryder, Subbiah, Kaplan, Dhariwal, Neelakantan, Shyam, Sastry, Askell, Agarwal, Herbert-Voss, Krueger, Henighan, Child, Ramesh, Ziegler, Wu, Winter, Hesse, Chen, Sigler, Litwin, Gray, Chess, Clark, Berner, McCandlish, Radford, Sutskever, and Amodei]LLM Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language models are few-shot learners. CoRR, abs/2005.14165, 2020. URL <https://arxiv.org/abs/2005.14165>. [Castro et al.(2020)Castro, Walker, and Glocker]Castro_2020 Castro, D. C., Walker, I., and Glocker, B. Causality matters in medical imaging. Nature Communications, 110 (1), jul 2020. 10.1038/s41467-020-17478-w. URL <https://doi.org/10.10382Fs41467-020-17478-w>. [Cauchois et al.()Cauchois, Gupta, and Duchi]cauchois_knowing_nodate Cauchois, M., Gupta, S., and Duchi, J. C. Knowing what You Know: valid and validated confidence sets in multiclass and multilabel prediction. [Deng et al.(2009)Deng, Dong, Socher, Li, Li, and Fei-Fei]deng2009imagenet Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Ieee, 2009. [Deng et al.(2023)Deng, Ardeshir, and Hsu]deng_group_2023 Deng, S., Ardeshir, N., and Hsu, D. Group conditional validity via multi-group learning, March 2023. URL <http://arxiv.org/abs/2303.03995>. arXiv:2303.03995 [cs, math, stat] version: 1. [Dosovitskiy et al.(2020a)Dosovitskiy, Beyer, Kolesnikov, Weissenborn, Zhai, Unterthiner, Dehghani, Minderer, Heigold, Gelly, Uszkoreit, and Houlsby]Dosovitskiy2020AnII Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. An image is worth 16x16 words: Transformers for image recognition at scale. ArXiv, abs/2010.11929, 2020a. [Dosovitskiy et al.(2020b)Dosovitskiy, Beyer, Kolesnikov, Weissenborn, Zhai, Unterthiner, Dehghani, Minderer, Heigold, Gelly, Uszkoreit, and Houlsby]ViT Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. An image is worth 16x16 words: Transformers for image recognition at scale. CoRR, abs/2010.11929, 2020b. URL <https://arxiv.org/abs/2010.11929>. [Dunn et al.(2022)Dunn, Wasserman, and Ramdas]dunn_distribution-free_2022 Dunn, R., Wasserman, L., and Ramdas, A. Distribution-Free Prediction Sets for Two-Layer Hierarchical Models, February 2022. URL <http://arxiv.org/abs/1809.07441>. arXiv:1809.07441 [math, stat]. [Fisch et al.(2021)Fisch, Schuster, Jaakkola, and Barzilay]fisch_few-shot_2021 Fisch, A., Schuster, T., Jaakkola, T., and Barzilay, R. Few-shot Conformal Prediction with Auxiliary Tasks, July 2021. URL <http://arxiv.org/abs/2102.08898>. arXiv:2102.08898 [cs]. [Garcin et al.(2021)Garcin, Joly, Bonnet, Affouard, Lombardo, Chouet, Servajean, Lorieul, and Salmon]plantnet-300k Garcin, C., Joly, A., Bonnet, P., Affouard, A., Lombardo, J., Chouet, M., Servajean, M., Lorieul, T., and Salmon, J. 
Pl@ntNet-300K: a plant image dataset with high label ambiguity and a long-tailed distribution. In NeurIPS Datasets and Benchmarks 2021, 2021. [Gendler et al.(2022)Gendler, Weng, Daniel, and Romano]gendler_adversarially_2022 Gendler, A., Weng, T.-W., Daniel, L., and Romano, Y. ADVERSARIALLY ROBUST CONFORMAL PREDICTION. 2022. [Gibbs & Candès(2022)Gibbs and Candès]gibbs_conformal_2022 Gibbs, I. and Candès, E. Conformal Inference for Online Prediction with Arbitrary Distribution Shifts, October 2022. URL <http://arxiv.org/abs/2208.08401>. arXiv:2208.08401 [cs, stat]. [Gibbs et al.(2023)Gibbs, Cherian, and Candès]gibbs2023conformal Gibbs, I., Cherian, J. J., and Candès, E. J. Conformal prediction with conditional guarantees, 2023. [He et al.(2015)He, Zhang, Ren, and Sun]He2015DeepRL He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2015. [Hendrycks & Dietterich(2018)Hendrycks and Dietterich]Hendrycks2018BenchmarkingNN Hendrycks, D. and Dietterich, T. G. Benchmarking neural network robustness to common corruptions and perturbations. ArXiv, abs/1903.12261, 2018. [Hendrycks et al.(2021a)Hendrycks, Basart, Mu, Kadavath, Wang, Dorundo, Desai, Zhu, Parajuli, Guo, Song, Steinhardt, and Gilmer]hendrycks2021many Hendrycks, D., Basart, S., Mu, N., Kadavath, S., Wang, F., Dorundo, E., Desai, R., Zhu, T., Parajuli, S., Guo, M., Song, D., Steinhardt, J., and Gilmer, J. The many faces of robustness: A critical analysis of out-of-distribution generalization. ICCV, 2021a. [Hendrycks et al.(2021b)Hendrycks, Zhao, Basart, Steinhardt, and Song]hendrycks2021nae Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., and Song, D. Natural adversarial examples. CVPR, 2021b. [iNaturalist 2018 competition dataset()]inaturalist18 iNaturalist 2018 competition dataset. iNaturalist 2018 competition dataset.  <https://github.com/visipedia/inat_comp/tree/master/2018>, 2018. [iNaturalist 2019 competition dataset()]inaturalist19 iNaturalist 2019 competition dataset. iNaturalist 2019 competition dataset.  <https://github.com/visipedia/inat_comp/tree/master/2019>, 2019. [Jung et al.(2022)Jung, Noarov, Ramalingam, and Roth]jung_batch_2022 Jung, C., Noarov, G., Ramalingam, R., and Roth, A. Batch Multivalid Conformal Prediction, September 2022. URL <http://arxiv.org/abs/2209.15145>. arXiv:2209.15145 [cs, math, stat]. [Krawczyk(2016)]Krawczyk2016LearningFI Krawczyk, B. Learning from imbalanced data: open challenges and future directions. Progress in Artificial Intelligence, 5:0 221 – 232, 2016. [Li et al.(2023)Li, Evtimov, Gordo, Hazirbas, Hassner, Ferrer, Xu, and Ibrahim]li_2023_whac_a_mole Li, Z., Evtimov, I., Gordo, A., Hazirbas, C., Hassner, T., Ferrer, C. C., Xu, C., and Ibrahim, M. A whac-a-mole dilemma: Shortcuts come in multiples where mitigating one amplifies others. June 2023. URL <https://arxiv.org/abs/2212.04825>. [Lu et al.(2022)Lu, Angelopoulos, and Pomerantz]10.1007/978-3-031-16452-1_52 Lu, C., Angelopoulos, A. N., and Pomerantz, S. Improving trustworthiness of ai disease severity rating in medical imaging with ordinal conformal prediction sets. In Wang, L., Dou, Q., Fletcher, P. T., Speidel, S., and Li, S. (eds.), Medical Image Computing and Computer Assisted Intervention – MICCAI 2022, pp. 545–554, Cham, 2022. Springer Nature Switzerland. ISBN 978-3-031-16452-1. 
[Muthali et al.(2023)Muthali, Shen, Deglurkar, Lim, Roelofs, Faust, and Tomlin]muthali2023multiagent Muthali, A., Shen, H., Deglurkar, S., Lim, M. H., Roelofs, R., Faust, A., and Tomlin, C. Multi-agent reachability calibration with conformal prediction, 2023. [Ovadia et al.(2019)Ovadia, Fertig, Ren, Nado, Sculley, Nowozin, Dillon, Lakshminarayanan, and Snoek]ovadia2019trust Ovadia, Y., Fertig, E., Ren, J., Nado, Z., Sculley, D., Nowozin, S., Dillon, J. V., Lakshminarayanan, B., and Snoek, J. Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift, 2019. [Recht et al.(2019)Recht, Roelofs, Schmidt, and Shankar]Recht2019DoIC Recht, B., Roelofs, R., Schmidt, L., and Shankar, V. Do imagenet classifiers generalize to imagenet? In International Conference on Machine Learning, 2019. [Ridnik et al.(2021)Ridnik, Ben-Baruch, Noy, and Zelnik-Manor]ridnik2021imagenet21k Ridnik, T., Ben-Baruch, E., Noy, A., and Zelnik-Manor, L. Imagenet-21k pretraining for the masses, 2021. [Romano et al.(2020)Romano, Sesia, and Candès]romano_classification_2020 Romano, Y., Sesia, M., and Candès, E. J. Classification with Valid and Adaptive Coverage, June 2020. URL <http://arxiv.org/abs/2006.02544>. arXiv:2006.02544 [stat]. [Sadinle et al.(2019)Sadinle, Lei, and Wasserman]sadinle_least_2019 Sadinle, M., Lei, J., and Wasserman, L. Least Ambiguous Set-Valued Classifiers with Bounded Error Levels. Journal of the American Statistical Association, 1140 (525):0 223–234, January 2019. ISSN 0162-1459, 1537-274X. 10.1080/01621459.2017.1395341. URL <http://arxiv.org/abs/1609.00451>. arXiv:1609.00451 [cs, stat]. [Silver et al.(2017)Silver, Schrittwieser, Simonyan, Antonoglou, Huang, Guez, Hubert, Baker, Lai, Bolton, Chen, Lillicrap, Hui, Sifre, van den Driessche, Graepel, and Hassabis]silver2017mastering Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., Chen, Y., Lillicrap, T., Hui, F., Sifre, L., van den Driessche, G., Graepel, T., and Hassabis, D. Mastering the game of go without human knowledge. Nature, 550:0 354–, October 2017. URL <http://dx.doi.org/10.1038/nature24270>. [Stutz et al.(2022)Stutz, Krishnamurthy, Dvijotham, Cemgil, and Doucet]stutz_learning_2022 Stutz, D., Krishnamurthy, Dvijotham, Cemgil, A. T., and Doucet, A. Learning Optimal Conformal Classifiers, May 2022. URL <http://arxiv.org/abs/2110.09192>. arXiv:2110.09192 [cs, stat]. [Teng et al.(2023)Teng, Wen, Zhang, Bengio, Gao, and Yuan]teng2023predictive Teng, J., Wen, C., Zhang, D., Bengio, Y., Gao, Y., and Yuan, Y. Predictive inference with feature conformal prediction, 2023. [Tibshirani et al.(2020)Tibshirani, Barber, Candes, and Ramdas]tibshirani2020conformal Tibshirani, R. J., Barber, R. F., Candes, E. J., and Ramdas, A. Conformal prediction under covariate shift, 2020. [Touvron et al.(2022)Touvron, Cord, and J'egou]Touvron2022DeiTIR Touvron, H., Cord, M., and J'egou, H. Deit iii: Revenge of the vit. In European Conference on Computer Vision, 2022. [Vazquez & Facelli(2022)Vazquez and Facelli]vazquez_facelli_2022 Vazquez, J. and Facelli, J. C. Conformal prediction in clinical medical sciences. Journal of Healthcare Informatics Research, 60 (3):0 241–252, 2022. 10.1007/s41666-021-00113-8. [Vovk(2012)]vovk2012conditional Vovk, V. Conditional validity of inductive conformal predictors, 2012. [Vovk et al.(2005)Vovk, Gammerman, and Shafer]conformal_prediction_vovk Vovk, V., Gammerman, A., and Shafer, G. Algorithmic Learning in a Random World. 01 2005. 
10.1007/b106715. [Wightman(2019)]rw2019timm Wightman, R. Pytorch image models. <https://github.com/rwightman/pytorch-image-models>, 2019. § DETAILED RESULTS ON IMAGENET DISTRIBUTION SHIFT The detailed results on alternate ImageNet test sets are reported in Table <ref>. As mentioned, the target coverage level of 0.90 is violated in nearly all circumstances. We see that THR indeed provides the smallest set sizes, while the adaptability of RAPS generally results in better, but imperfect, coverage. Further, the basic APS method often leads to impractically large set sizes. In addition to degrading coverage, the inefficiency also increases on these datasets; a proxy for the increased uncertainty of the underlying model. Table <ref> further highlights the brittleness of conformal prediction methods. Here, we can see that even minor corruption levels frequently lead to a violation of the target coverage. This is especially noticeable in the combination of smaller networks such as ResNet-50 and the THR method, where the smallest corruption level leads to coverage violations across all corruption types. We can also see that some corruption types lead to a greater degradation than others: motion blur tends to perform worse on average and brightness the best. In spite of the frequent degradation, the combination of DeiT-B / ViT-B and the RAPS algorithm performs consistently better across many settings, maintaining coverage levels only a few percent below the target up to corruption level 3 on most datasets. § INEFFECTIVENESS OF CLASS-BALANCED CP ON PLANTNET-300K As demonstrated in Table <ref>, performing conformal prediction on long-tailed data leads to large violations in class-conditional coverage. Class-conditional coverage can be approached through class balanced conformal prediction, which aims to ensure that the specified error rates are guaranteed for every class. <cit.> propose a method that calibrates a threshold for each class, then including classes in the confidence set based on their class-specific thresholds: C(x;τ) = {k ∈ [K] : s(x,k) <τ^(k)}. This method can be used in conjunction with the other post-hoc CP methods described in Appendix <ref>. We investigated class-conditional conformal prediction on PlantNet-300k, and summarize the results in Table <ref>. It results in better macro-coverage and fewer coverage violations than the regular conformal prediction, yet it still leads to class-conditional coverage violations. This is partly because CP coverage holds in expectation across an infinite test set. Where ample data per class is available, like on ImageNet, this can be simulated by repeated random data splits. PlantNet-300k has fixed calibration / test sets and as noted, some classes have very little representation. Further, coverage follows a Beta distribution with α and β terms reliant on the validation set size <cit.>, thus a smaller calibration set leads to greater variance in coverage across the (infinite) test set. Thus, when class-balanced conformal prediction is performed on PlantNet-300k, both the calibration and test sets for each class are very small due to the long-tailed label distribution. This leads to a high class-conditional variance in coverage and thus does not resolve the coverage violations. Although this is a challenging setting, it is nonetheless reflective of possible scenarios that can be encountered in the real world. 
One may imagine many data-constrained environments such as medicine where gathering a large number of examples for rare (yet still important) classes is a challenging feat. If conformal prediction is to be deployed in these settings, this is a hurdle that must be addressed. § INATURALIST RESULTS The iNaturalist-2018 <cit.> and iNaturalist-2019 <cit.> datasets both feature long-tailed training sets and class-balanced test sets. They are comprised of 8,142 and 1,010 classes, respectively. Here, 50% of the test set is used to calibrate the conformal threshold, and the remainder is used to predict confidence sets. Unlike the PlantNet-300k dataset, the conformal calibration process is conducted on a class-balanced dataset. We can see in Table <ref> that this results in a considerably lower percentage of classes with violated coverage. § THE RELATIONSHIP BETWEEN ACCURACY AND COVERAGE Figure <ref> plots the relation between coverage / inefficiency performance and the accuracy of the underlying model, on on the different distribution-shifted datasets. We can observe that coverage generally increases along with accuracy. Inefficiency also improves, albeit the THR method seems to have larger inefficiency improvements. Similarly, Figure <ref> plots the coverage / inefficiency relation with accuracy for various corruption levels and types. There is a marked improvement in coverage when the underlying model is more accurate, which seems especially pronounced for greater levels of corruption. Interestingly, the relation between the accuracy of the underlying neural network and coverage / inefficiency appears to vary with the CP method used. For example, RAPS generally demonstrates a near linear increase in coverage with increased accuracy, however inefficiency gains seem to diminish. Conversely, the inefficiency of the THR algorithm consistently improves with accuracy, and coverage gains are less pronounced. § RESULTS ON IMAGENET-W Recent work <cit.> has found a reliance on translucent watermarks as a shortcut in current vision models, and the addition of a watermark on the ImageNet validation set leads to large decreases in performance. We investigate this dataset, called ImageNet-W, in the conformal prediction setting and similarly find a general decrease in coverage across most models and methods. As seen in Table <ref>, the APS method combined with vision transformers is able to maintain coverage on this dataset, at the expense of larger set sizes. This reemphasizes both the brittleness of some conformal prediction methods; a simple watermark is sufficient in violating coverage guarantees, as well as the potential for improvement using better deep learning models and different CP methods.
http://arxiv.org/abs/2307.02659v1
20230705212318
Modelling Spontaneous Firing Activity of the Motor Cortex in a Spiking Neural Network with Random and Local Connectivity
[ "Lysea Haggie", "Thor Besier", "Angus McMorland" ]
q-bio.NC
[ "q-bio.NC" ]
Computational models of cortical activity can provide insight into the mechanisms of higher-order processing in the human brain, including planning, perception and the control of movement. Activity in the cortex is ongoing even in the absence of sensory input or discernible movements and is thought to be linked to the topology of the underlying cortical circuitry <cit.>. However, the connectivity and its functional role in the generation of spatio-temporal firing patterns and cortical computations are still unknown. Movement of the body is a key function of the brain, with the motor cortex being the main cortical area implicated in the generation of movement. We built a spiking neural network model of the motor cortex which incorporates a laminar structure and circuitry based on a previous cortical model by <cit.>. A local connectivity scheme was implemented to introduce more physiological plausibility to the cortex model, and the effect on the rates, distributions and irregularity of neuronal firing was compared to the original random connectivity method and experimental data. Local connectivity broadened the distribution of firing rates and increased the overall rate of neuronal firing. It also resulted in firing irregularity more similar to that observed in experimental measurements, and in a reduction in the variability of power spectrum measures. The larger variability in the dynamical behaviour of the local connectivity model suggests that the topological structure of the connections in a neuronal population plays a significant role in firing patterns during spontaneous activity. This model took steps towards replicating the macroscopic network of the motor cortex, reproducing realistic spatiotemporal firing to shed light on information coding in the cortex. Large-scale computational models such as this one can capture how structure and function relate to observable neuronal firing behaviour, and can be used to investigate the underlying computational mechanisms of the brain. § INTRODUCTION The motor cortex is critical for producing movement. However, neurons in the motor cortex show patterns of ongoing activity even during resting state periods with no discernible movement <cit.>. Baseline spontaneous activity, which is observed in awake and conscious experimental subjects, is differentiable from task-induced states in recordings of neuron spiking behaviour <cit.>. Spontaneous activity of the brain at rest has spatially structured patterns of activity and is considered to play an important role in neural encoding for brain processes <cit.>. Previously, activity in the cortex was thought to be driven by external sensory input, but it has recently been suggested that spontaneous activity arises from the dynamics of the neuronal circuitry and that inputs modulate and modify the dynamics of the network rather than drive it <cit.>. Thus, there is an interest in understanding the ongoing behaviour of neural activity, how it reflects the cortical circuitry and how it shapes the variable functions of the cortex <cit.>. Spontaneous activity in the motor cortex, similar to other cortical areas, is characterised by irregular firing of individual neurons, with low average firing rates (<10 Hz) but a wide frequency range (0 - 100 Hz) <cit.>. Firing rates in cortical networks measured in vivo are reported to have highly positively skewed, long-tail distributions <cit.>. 
Neuron populations in the cortex also display oscillatory firing activity covering a broad frequency spectrum ranging from less than one hertz to hundreds of hertz <cit.>. In the motor cortex, beta oscillations (13–30 Hz) are consistently observed in electroencephalography (EEG) measurements during resting states or prior to movement, and disappear during movement <cit.>. The origin of the variability in the firing of cortical neurons is unknown and could be due to morphology of dendrites resulting in non-linear integration of inputs, or the properties of the network and synaptic coupling <cit.>. The variability in neuronal activity is also believed to play a significant role in information encoding in the cortex <cit.>. Recently, the generation of neocortical activity has been addressed through computational models of somatosensory cortical circuits based on anatomical and physiological data, which generate realistic spontaneous dynamics <cit.>. However, specific model structure or features have not been clearly linked to cortical firing patterns. The cortex is thought to have a consistent laminar organisation and patterns of connectivity across different regions <cit.>. A canonical circuit which generally defines recurrent connections in each layer, layer 4 and 6 as input layers, and layers 2/3 and 5 as output layers, was based originally on <cit.>'s work on the cat visual cortex and a similar pattern observed in other cortical areas in primate studies including the auditory cortex, somatosensory cortex and motor cortex <cit.>. <cit.> quantified connectivity of the cortical circuit through in vivo intracellular recordings and morphometry of cell types and laminar distribution. This and other photostimulation and optogenetic studies of the mouse motor cortex have shown a dominant connection of layer 2/3 neurons to layer 5 neurons <cit.>. Experiments involving in vivo extracellular injections of neuronal tracers in the cortex observe 'patchy' projections, in which synapses are highly localised <cit.> (also see <cit.> for review). Recent photostimulation experiments and digital image reconstruction suggest a distance-based connectivity model with strong connectivity within 0.2 mm of the soma and connectivity decreasing as a function of distance <cit.>. Diffusion tensor imaging also shows small-world, patchy connections in the cortex and greater local connectivity than "random" connections <cit.>. Localised connections are also thought to be optimal in regards to minimising wiring but maximising connectivity between nodes in a network. Local connections with some long range connections may be an efficient method of information transfer in the brain <cit.>. Structural variations in network topology influence the spatio-temporal activity of the cortex, with changes in connectivity patterns resulting in different dynamics <cit.>. Large-scale neural network models can be used to explore how network structure influences the generation of cortical activity. <cit.> modelled the horizontal connectivity in the neocortex and showed complex activity patterns arise in structured, spatially clustered synapses. There recently has also been the development of larger-scale cortical models which contain tens of thousands of spiking neuron models <cit.>. However, previous models of the motor cortex have been limited in replicating physiological detail and mainly been focused on generating movement dynamics or responses to stimulation <cit.>. 
The role of network connectivity in generating spontaneous cortical activity in physiologically-based spiking neural network models has not been explored. Computational models can aid in the understanding of the electrophysiological mechanisms which underlie the generation of spontaneous activity. In this study, a large-scale spiking neural network containing over 38,000 neurons and 150 million synapses, based on previous modelling work by <cit.>, was implemented in the Python-based neural network simulator, Brian2 <cit.>. The model was developed to replicate the spontaneous firing behaviour of the motor cortex and explore the effect of local connectivity on the network behaviour. The aim of this work was to provide insights into resting state motor cortex dynamics using a spiking neural network model and investigate connectivity as a potential source of variability and efficient information transmission in spontaneous firing behaviour. § METHODS This model was based on previous work by <cit.> and implemented in Brian2 with reference to source code from <cit.>. The original cortical circuit model was adapted to represent a 1 mm^2 surface area of the motor cortex. The model follows a laminar structure, grouping the neurons into four layers, 2/3, 4, 5, and 6. Each layer was further divided into excitatory and inhibitory cell groups. Layer 1 was ignored due to its low density of neuronal cell bodies <cit.>. Connection weights in the circuit are shown in figure <ref> below. Individual neuron dynamics were governed by leaky-integrate-and-fire equations simulated using the linear state updater with a time step of 0.1 ms. Equation <ref> describes the membrane potential (V) of each neuron, where τ_m is the time constant, C_m is the membrane capacitance, V_r is the reset value for the membrane potential following a spike and I_syn is the total input current described by equation <ref>. Action potentials were fired whenever V(t) became more positive than the threshold (θ). On the firing of a presynaptic neuron, the synaptic current of the postsynaptic neuron (I_syn^post) was changed by the value determined by the conductance (g) multiplied by the weight (w) after a delay (d) which accounts for the finite time interval of an action potential propagation in a presynaptic neuron (equation <ref>). The delay between the spiking activity of a presynaptic neuron and postsynaptic neuron was drawn from a normal distribution with a mean of 1.5 ms for excitatory neurons and 0.8 ms for inhibitory neurons; the standard deviation of delay times was half of the mean value. A shorter inhibitory delay was used in the model as inhibitory neurons generally make more local connections over smaller distances <cit.>. In unmyelinated axons, each millimetre could introduce a conduction delay of at least 2 ms <cit.>. Cortical neurons exhibit a range of myelination, though inhibitory interneurons also show more myelination, thus also potentially increasing conduction speed <cit.>. Our model only incorporated 1 mm of cortical area and so delays were defined as less than 2 ms with shorter delays for inhibitory neurons, and were also independent of layer as in <cit.>. Following a spike being fired, the membrane voltage was reset to -65 mV. The refractory period of the neuron was 2 ms meaning another spike could not be fired within that period of time following a previous spike. 
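The scheme just described maps almost directly onto Brian2. Below is a minimal, self-contained sketch of two stand-in populations that uses only the values quoted in the text (reset, threshold, refractory period, delay statistics, the 0.1 ms time step) together with the Gaussian distance-dependent connection rule of the local scheme, which is written out formally further below. All other numbers (time constants, capacitance, current increments, radius, group sizes, drive) are placeholders rather than the model's actual table entries, and this is not the published implementation, only a compressed illustration of its ingredients; note also that the full model fixes the total synapse count per projection, whereas this sketch draws synapses probabilistically.

```python
from brian2 import *

defaultclock.dt = 0.1*ms          # simulation time step quoted in the text

# Only reset, threshold, refractory period and delay statistics come from the
# text; the remaining parameter values are placeholders.
tau_m, C_m, tau_syn = 10*ms, 250*pF, 0.5*ms
V_r, theta = -65*mV, -50*mV
g_exc = 20*pA                      # synaptic current increment (assumed)
g_inh = -4*g_exc                   # inhibition assumed 4x stronger, as in the circuit model

eqs = '''
dV/dt     = -(V - V_r)/tau_m + I_syn/C_m : volt (unless refractory)
dI_syn/dt = -I_syn/tau_syn               : amp
x : meter (constant)
y : meter (constant)
'''

# Two populations standing in for the eight layer-specific groups.
exc = NeuronGroup(800, eqs, threshold='V > theta', reset='V = V_r',
                  refractory=2*ms, method='exact')
inh = NeuronGroup(200, eqs, threshold='V > theta', reset='V = V_r',
                  refractory=2*ms, method='exact')
for grp in (exc, inh):
    grp.V = V_r
    grp.x = 'rand()*1.0*mm'        # somata scattered over a 1 mm x 1 mm sheet
    grp.y = 'rand()*1.0*mm'

# Local connectivity: Gaussian fall-off of connection probability with
# lateral distance (the radius value here is a placeholder).
radius = 0.2*mm
S_ee = Synapses(exc, exc, on_pre='I_syn_post += g_exc')
S_ee.connect(condition='i != j',
             p='exp(-((x_pre - x_post)**2 + (y_pre - y_post)**2)/(2*radius**2))')
S_ee.delay = 'clip(1.5 + 0.75*randn(), 0.1, 5)*ms'   # exc delay: mean 1.5 ms, sd half the mean

S_ie = Synapses(inh, exc, on_pre='I_syn_post += g_inh')
S_ie.connect(p='exp(-((x_pre - x_post)**2 + (y_pre - y_post)**2)/(2*radius**2))')
S_ie.delay = 'clip(0.8 + 0.4*randn(), 0.1, 5)*ms'    # inh delay: mean 0.8 ms, sd half the mean

# Layer-independent external drive (Poisson spike trains, as in the Methods).
drive_e = PoissonInput(exc, 'I_syn', N=2000, rate=8*Hz, weight=g_exc)
drive_i = PoissonInput(inh, 'I_syn', N=1850, rate=8*Hz, weight=g_exc)

spikes = SpikeMonitor(exc)
run(500*ms)
```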
As in the original <cit.> model, the conductance value in the layer 4E to layer 2/3E connection was doubled compared to the other excitatory connections. Parameter descriptions and values are given in table <ref>. dV(t)/dt = -(V(t) - V_r)/τ_m + I_syn(t)/C_m dI_syn/dt = -I_syn/τ_syn I_syn^post(t + d ) = I_syn^post(t + d ) + g · w when presynaptic neuron V(t)≥θ Neurons were distributed over eight populations, according to the numbers in table <ref>. Neuron numbers were adjusted from the original <cit.> model, switching the proportion of layer 4 and 5 cells based on the motor cortex having a dominant layer 5 connectivity and sparser cell distributions than other layers of the cortex (ie. the somatosensory and visual cortices on which the original model was based) <cit.>. Previously, the motor cortex has thought to lack a distinct layer 4 but recent studies support a functional, albeit small, layer 4 in the motor cortex with similar connections to layer 4 in the somatosensory cortex, notably as an input pathway from the thalamus <cit.>. The superficial layers of 2/3 and 4 contain approximately 40% of the neurons, layer 5 has 35% of the neurons and layer 6 has 25% of the neurons, which is similar to reported physiological experimental data <cit.>. The E/I ratio in this model is on average 24%. With inhibitory neurons making up 28.2% 24.8%, 22.4%, 20.5% for layers 2/3, 4, 5 and 6, respectively. The proportion of GABA cells in motor area has been reported as 24.2% with slightly lower percentage of GABA in deeper layers <cit.>. A sensitivity analysis was carried out to investigate the effect of the total number of neurons in the network, to select a minimal number of neurons at which the rate and irregularity of firing activity converges. <cit.> original connectivity map integrated the findings of multiple anatomical and electrophysiological studies to estimate the probabilities of connection and number of synapses formed between each neuron group. Combining these data from different species and areas, including rat visual and somatosensory areas and cat visual striate cortex, builds on the theoretical framework suggesting an equivalency across cortical areas. Comparative studies of primary motor cortex also show functional preservation across mammals <cit.>. The connectivity profile was based on a modified version of Peter's rule which proposes that the number of synapses is dependent on the number of neurons collocated in the presynaptic and postsynaptic layers and a probability value of connection between layers, derived from experimental data <cit.>. Based on Peter's rule and probabilities derived from anatomical and physiological studies, the number of connections (K) was calculated for neuron groups using equation <ref> (Equation 3 in <cit.>) where C_a is the connection probability defined in table <ref>, and N_pre and N_post are the sizes of the presynaptic and postsynaptic populations, respectively. The C_a values in the connectivity matrix (Table <ref>) describing the probabilities of connections between groups were adapted from the original <cit.> study in this motor cortex model based on changes to the number of neurons in each layer, while maintaining the average relative number of connections in each neuron target group as the original model. The resulting connectivity was similar to experimental investigations of layer specific wiring in the motor cortex <cit.>, notably the dominant layer 2/3 to layer 5 connection pathway (Figure <ref>). 
K = log(1 - C_a)/log(1 - 1/(N_preN_post)) The original <cit.> cortical model implemented a random connectivity scheme. This was reimplemented and adapted here to model the motor cortex, as described above, and a new local connectivity scheme was developed. In the random and local connectivity schemes, the total number of synapses in both models was kept constant for fair comparison. Neurons were given spatial parameters (X, Y, and Z coordinates) to be distributed over a 1 mm^2 surface and a depth range based on the group's layer. The likelihood of a neuron connecting with another neuron was determined by a Gaussian distribution over distance, where x and y are a neuron's spatial location and the radius parameter defines the extent of the spread (Equation <ref>). This connectivity is compatible with the synapse definition of the original random model described earlier but the added spatial description of neurons allowed constraining the synapses to a localised connectivity. The spatial connectivity description was also adapted from previous cortical modelling work <cit.>. Intralaminar connectivity had a greater radius of connectivity than interlaminar connectivity and inhibitory connections had a lower connectivity radius than excitatory connections as supported by physiological data <cit.>. The radius value for different connection types is given in table <ref>. Multiple connections could be established between two neurons in both the random and local connectivity schemes and self connections in the local connectivity model were not allowed. Figure <ref> shows the postsynaptic connections of a single neuron (red) on the random connectivity model and the connectivity of a single neuron in the distance based connectivity model. Probability = exp(-(x_pre - x_post)^2 + (y_pre - y_post)^2/2*radius^2) External input was applied to each neuron group as Poisson spike trains at a frequency of 8 Hz. The number of these inputs for each neuron was constant at 2000 inputs for excitatory neuron groups and 1850 inputs for inhibitory neuron groups. This followed the layer-independent input protocol from the original <cit.> model. A sensitivity analysis on the effect of the input on the resulting firing rates was also performed by manipulating the number of inputs and frequency. To characterise the activity of the network, spike trains and firing rates of neurons and neuron groups were monitored. A spike was defined by the neuron reaching the threshold value of -50 mV. Firing rates were calculated by counting the total number of spikes in each time step of the simulation (0.1 ms). Simulations were run for 500 ms and the first 50 ms was ignored to allow the model to reach steady state. The interspike intervals (ISIs) for each neuron were calculated as the time between spikes. Irregularity was characterised by the coefficient of variation (CV) of the ISI distribution. CVs were calculated as the ratio of the standard deviation to the mean ISIs (Equation <ref>) for each individual neuron in the population, as defined previously <cit.>. Firing rates and CVs reported are the mean across neurons in the neuron group, and over time. Notably, firing rates and CVs closely matched values reported by the original model when the settling period was included, consistent with the analyses conducted by <cit.>, but due to the inconsistency in the spiking behaviour in the first 50 ms, this settling period was excluded for the analyses carried out in this study. 
CV = σ_ISI/μ_ISI § RESULTS The motor cortex receives connections from multiple brain regions such as the thalamus, premotor cortex and somatosensory cortex; this complexity makes selection of an appropriate surrogate input challenging. The effect of the input number and frequency to the model was explored while keeping the ratio of excitatory and inhibitory input and relative input between the different neuron groups constant (ie. 2000 for excitatory neurons, 1850 for inhibitory neurons). A single input frequency was used across the model though physiologically there may be variation in the number of inputs and range of input frequencies. Networks with local connectivity show greater sensitivity to the frequency and number of inputs, with a greater range of resultant average firing rates. Figure <ref> shows a heat map of the effect of input number and frequency on the average firing rates of neurons in the model, note the change in scale in each colour map. An analysis of the effect of the number of neurons in the model showed that consistent frequencies and CVs were produced by a random network containing 35,000 or more neurons. Neuron counts by isotropic fractionator and flow fractionator methods of the motor cortex in old and new world primate species lie in the estimated range of 50,000–90,000 neurons per mm^2 <cit.>. Figure <ref> shows the firing rates and CVs in the network over a range of 0–80,000 neurons for both random and local connectivity, with standard deviation measures over ten simulations. The firing rates and CVs were more stable in the local connectivity model at lower number of neurons. In both models, layer 6E neuron CVs were not able to be calculated or were highly variable, probably due to very low firing rates in the neuron group, and were thus excluded from the figures. The firing rates of the model were within an appropriate range of baseline physiological recordings of a number of studies, with lower firing rates in layer 2/3 and layer 6 of the model than layer 5 <cit.>. <cit.> presented microelectrode measurements of the motor cortex in awake rabbits, with mean discharge rates of 0.6 Hz in corticocortical neurons, prevalent in layer 2/3, 5.7 Hz in layer 5 neurons and 0.4 Hz in layer 6 neurons. <cit.> published similar values from whole cell patch clamp recordings in mice. <cit.> recorded firing rates in two awake monkeys and firing rates in a resting state were 6.3 Hz and 7.81 Hz and CV measures of 0.83 and 0.79 which lies closer to the irregularity of a random poisson distribution (CV = 1). These were recorded using a 100 electrode Utah Array in the hand motor area of macaque monkeys, at a depth of 1.5 mm, which would correspond to deep layer 3 or superficial layer 5 <cit.>. <cit.> reported a similar mean value of 7.1 Hz firing in layer 5 motor cortex cells at rest, from micro-electrode recordings in rats. CVs of spiking activity in cortical neurons have been consistently reported in previous literature as being close to 1 <cit.>. Based on these experimentally reported values, the firing rates of the neural network model with random connectivity were closer to the physiological range, though with slightly higher firing in layer 2/3 and lower in 6. The local connectivity model showed an increase in mean firing rates over all the neuron populations and a wider range in mean CV. The mean CV values in the local connectivity model more closely matched experimental recordings in the middle-deep layers, notably in 5E which is the dominant neuron group. 
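The per-neuron measures reported above (mean firing rates and CVs of the ISI distribution, with the settling period discarded) reduce to a few lines of NumPy. The sketch below is a hedged illustration only: the surrogate Poisson spike trains stand in for the trains exported from a SpikeMonitor, and the analysis window is arbitrary.

```python
import numpy as np

def cv_isi(spike_times):
    """Coefficient of variation of one neuron's inter-spike intervals."""
    isi = np.diff(np.sort(spike_times))
    if isi.size < 2:
        return np.nan                      # undefined for very sparse trains (cf. layer 6E)
    return isi.std() / isi.mean()

def mean_rate(spike_times, t_start, t_stop):
    """Mean firing rate of one neuron over the analysis window (Hz)."""
    mask = (spike_times >= t_start) & (spike_times < t_stop)
    return mask.sum() / (t_stop - t_start)

# Surrogate Poisson trains as stand-in data (times in seconds).
rng = np.random.default_rng(1)
spike_trains = [np.sort(rng.uniform(0.05, 0.5, rng.poisson(10))) for _ in range(1000)]

rates = np.array([mean_rate(st, 0.05, 0.5) for st in spike_trains])
cvs = np.array([cv_isi(st) for st in spike_trains])
print(f'mean rate {rates.mean():.1f} Hz, mean CV {np.nanmean(cvs):.2f}')
```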
Firing frequencies were also observed to be higher in inhibitory neuron groups. Table <ref> reports the mean values and standard distributions of firing rates and irregularity (CV) in the random connectivity model and local connectivity model. The activity of neurons in the model showed a different pattern of activity in the local connectivity model compared to the random connectivity. Figure <ref> shows a raster plot of 10% of the neurons in the model and a plot of the frequencies across the neuron groups. Firing rates of the neuron groups generally remained <100 Hz but the random connectivity model showed a greater tendency for sudden high frequency firing activity across the network. This synchrony of firing activity could be neuronal avalanches, bursts of firing, which are a noted feature of brain dynamics and indicator of scale-free critical dynamics in complex systems <cit.>. The range of firing rates in recorded neurons also varies with long-tailed distributions of firing rates <cit.>. The range of firing is reported to be 0–40 Hz in measurements of awake mice and cats <cit.>, while recordings in monkeys ranged from 0–100 Hz <cit.>. The firing rates of the model also showed long tailed distributions (Figure <ref>), with longer tailed distributions, closer to physiological data, in the local connectivity model. A power spectrum analysis shows a peak between 10–30 Hz which lies within the beta oscillation range (13–30 Hz). Experimentally, beta waves are observed by EEG, magnetoencephalography (MEG) or electrocorticography (ECoG), though the origin and role of this rhythm is still unclear <cit.>. This model suggests that oscillatory rhythms could be emergent properties of the cortical network since the input to this model is non-oscillatory. The standard deviation, taken over ten simulations, was slightly reduced in the local connectivity model suggesting a potential regularisation of the power spectrum and beta peak (Figure <ref>). § DISCUSSION This study builds on previous cortical modelling work, and investigates the effects of a physiologically-realistic, spatially defined local connectivity scheme on neuron activity, while focusing on replicating the spontaneous firing in the motor cortex. We replicated spontaneous firing behaviour in regards to neuron firing rates, irregularity, and power spectrum peaks, as recorded by previous experimental data. Local and random connectivity was compared in the model to provide insight to the topological network properties that influence population-based firing behaviour. Local connectivity showed qualitatively different temporal patterns of firing and more realistic irregularity (CV). It also resulted in a wider tailed distribution of firing and a narrower standard deviation in the power spectrum. Spontaneous activity in the cortex has been characterised by low firing rates and asynchronous activity. The model with random connectivity replicated previously measured values with lower firing rates in layers 2/3 and 6 and higher firing rates in layer 5. The increase in firing rates in the local connectivity model may be due to more coordinated inputs within a local cluster resulting in a greater likelihood to reach threshold. This could be broadly explained by the theory of neurons as coincidence detectors, which states that information is encoded and propagated by the timing of action potentials <cit.>. 
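The beta-range peak discussed above can be extracted from the binned population rate with a standard Welch estimate. In the sketch below only the 0.1 ms bin width is taken from the text; the segment length, window and surrogate spike data are arbitrary choices, and the surrogate noise will of course show no genuine beta peak.

```python
import numpy as np
from scipy.signal import welch

dt = 1e-4                                  # 0.1 ms bins, matching the simulation time step
t_start, t_stop = 0.05, 0.5
edges = np.arange(t_start, t_stop + dt, dt)

# Concatenated spike times (s) of one population; surrogate data here.
rng = np.random.default_rng(2)
all_spikes = rng.uniform(t_start, t_stop, 20000)

counts, _ = np.histogram(all_spikes, bins=edges)
rate = counts / (1000 * dt)                # population rate in Hz per neuron (1000 neurons assumed)

f, psd = welch(rate - rate.mean(), fs=1.0/dt, nperseg=4096)
band = (f >= 13) & (f <= 30)               # beta band
print(f'peak beta-band frequency: {f[band][np.argmax(psd[band])]:.1f} Hz')
```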
Though local connectivity increased the firing rates of the network, similar proportions of firing rate values between layers were maintained, which suggests that additional tuning of the input or of the overall connectivity density might resolve the differences in absolute firing rate and bring firing down to match experimental results. The local connectivity model replicated experimentally derived CV values in the deeper layers of 5 and 6, with similar CV values to the random connectivity model in the superficial layers 2/3 and 4. As such, the topological structure played a key role in the variability of firing activity. The local connectivity model also exhibited a lower standard deviation in the power spectrum, compared to the random connectivity model. The narrower range in the power spectrum suggests a more reliable occurrence of beta wave frequencies that are consistently observed in EEG recordings. The more realistic, local connectivity scheme could therefore reflect a topological pattern in the cortex which contributes to the occurrence of the beta wave. Compared to the random connectivity model, the local connectivity scheme showed more sensitivity and instability in response to input activity. Within the range of changes in the input, the local connectivity model covered a larger range of frequencies in the firing behaviour of the network. The greater sensitivity to input could be a key factor in the highly variable activity in the cortex which might be necessary for phase transitions or changes in state, a notion described by criticality <cit.>. This critical state is hypothesised to be functionally beneficial for efficient information transmission in the cortex <cit.>. In the motor cortex, being in the critical state may play a role in the generation of a wide range of voluntary muscle movements. The E/I ratio is critical in determining the model dynamics and emergent behaviour <cit.>. The balance of E/I activity is linked to scale-free dynamics and operating near a critical point of activity, which does not extinguish or explode into seizure <cit.>. The overall level of excitation and inhibition, as well as the detailed dynamics of excitatory and inhibitory synaptic conductance, has a large effect on circuit activity. Our model contained 24% inhibitory neurons, with the post-synaptic conductance of inhibitory connections modelled as 4 times stronger than that of excitatory connections, resulting in a stable E/I balance. However, this E/I ratio could be an area of further exploration, as a recent study by <cit.> of mouse, marmoset and human cell types found GABAergic proportions of 16% in mouse primary motor cortex, 23% in marmoset and 33% in human primary motor cortex. The random connectivity model has been used in previous studies of cortical dynamics <cit.>. Random networks are less clustered in their connections, have longer range connections, and are considered to be inefficient with regards to information transfer <cit.>. <cit.> implemented a locally connected random network (LCRN) with distance-based connectivity in a large-scale neural network model containing 100,000 neurons. <cit.> also used an LCRN to investigate the propagation of synchronous and asynchronous activity. Typically, these models have two populations, excitatory and inhibitory neuron groups with recurrent intra- and inter-group connectivity. Previous investigations of connectivity have been limited to 2D networks which have not aimed to replicate physiological behaviour <cit.>. 
However, to further investigate the spatio-temporal activity of the cortex across a cortical network, particularly longer-range connections, a larger spatial surface area is still needed, and this will require increased computing power. Whether the connections of neurons in the cortex are specific or random has not yet been fully resolved, though recent studies have suggested that neuron morphology and patterns of recorded activity support a notion of specific connections <cit.>. Axons do not simply connect to neurons based on spatially overlapped locations but selectively target specific neuron types or groups, for example in layer-specific connectivity patterns <cit.>. There also appear to be both local and long-range connections in the cortex, which is suggested to be efficient in regards to wiring cost and information transfer <cit.>. A combination of short and long-range connections was included within the local connectivity model, with horizontal, intralaminar connections spanning the surface area of the model, while vertical, interlaminar connections were narrower in radius. Over 70 years ago, <cit.> made experimental recordings of neurons in the cat somatosensory cortex and proposed an organisational unit of a `vertical group of cells extending through all cortical layers' known as a cortical column. Columns are groups of neurons, in the range of 0.5 mm in diameter, tuned to a specific stimulus or attribute, with the arrangement of columns patterned like a 'mosaic' <cit.>. However, the role of this columnar organisation remains unclear and there is still no agreed-upon function or definition <cit.>. In the motor cortex, recordings of neurons show directionally tuned activity during upper limb movements <cit.>. Multi-unit recordings also support functional clusters which exhibit directional tuning, with widths of the tuning field reported in the range of 50-250 μm <cit.>. Individual neurons can make connections outside of columns, with horizontal connections spanning up to 1.5 mm <cit.>. Although this model only represented 1 mm^2 of cortical surface area, future development could explore an area representation with and without an added constraint to model cortical columns. The topographic structure of the motor cortex and its relation to muscle movement parameters is still undefined, and with this model we can begin to explore these intricate connections in the motor cortex. The effect of neuron types on cortical dynamics might also be an interesting area for future investigation using this model. Spontaneous activity may physiologically reflect a wider range of neuron types or synaptic dynamics, such as bursting or faster and slower transients, than the simplified excitatory and inhibitory neurons captured in this model <cit.>. The diversity of inhibitory interneurons also plays a significant role in the modulation of cortical dynamics, though a clear consensus on the classification of neurons is still lacking <cit.>. Though our model incorporated simplified neuron dynamics and input, we showed that more realistic patterns of spontaneous activity can be replicated in a cortical circuit with local connectivity. Incorporation of different cell types would certainly have dynamical consequences at the network level. 
Our approach increases the biological plausibility of previous cortical modelling work and contrasts with previous models of the motor cortex which have used continuous-value recurrent neural networks <cit.>. We recognise that we have not considered other properties such as neuron types, dendritic processing and synaptic plasticity that are likely to play a role in firing dynamics. Biological learning paradigms such as reinforcement learning with spike-timing dependent plasticity (STDP) have previously been incorporated in spiking neuronal models of motor control, though not specifically replicating motor cortex activity <cit.>. Concurrently, experimental data will also be vital for continued model development and validation, a synergy of computational and experimental techniques will be required to elucidate the complex connectivity of cortical circuits and how it contributes to the generation of dynamic activity. This work is the beginning of a larger scale exploration of neural control in the motor system with scope to extend the model. The proposed model could be incorporated into feedforward and feedback circuits in the neuromusculoskeletal system involving the spinal cord and alpha-motoneuron pools. Motor-unit and muscle recruitment might be task-specific <cit.>, however, the role of the motor cortex in executing movement commands has not yet fully been elucidated. Our model could be used to explore the generation of muscle activity <cit.>. The model could also fit into a thalamocortical circuit framework to explore mechanisms of movement generation <cit.>. Dynamical motifs in the activity could be further explored through dynamical systems frameworks, incorporating dimensionality reduction techniques to look at patterns of ‘trajectories’ in neural population activity which may be task-specific <cit.>. In summary, the implementation of a local connectivity scheme in a spiking neural network model has shown that the topology of the network plays a critical role in the resulting cortical dynamics. Our results support theories of structured cortical circuitry and local, patchy connectivity in the generation of spontaneous activity patterns in the motor cortex. Random networks appear to be more regular or synchronous in their behaviour, while the local connectivity model showed more realistic irregularity, particularly in the large pyramidal, output neurons of layer 5. Our model builds on previous work incorporating local connectivity in a more complex laminar structure and cortical circuit, and reproduces firing patterns comparable to those measured in vivo in the motor cortex. The output from our model indicates the importance of including physiologically-based local network topology, which resulted in an increased range and irregularity in firing as well as a slight regularisation in the power spectrum, compared to a random network. § CONCLUSION In developing this model, we have aimed to build on previous modelling work and keep parameters as physiologically realistic as possible to replicate the spontaneous firing activity in the motor cortex. To our knowledge, this is the first physiologically based spiking neural-network model to explore the effects of 3D spatially realistic connectivity on spontaneous neuron firing activity in the motor cortex. The pattern of connectivity is shown to play an important role in the generation of irregular firing dynamics and long-tailed firing distributions of cortical neurons, which could have an impact on criticality and information transmission. 
With the implementation of local connectivity in a structured cortical circuit, this model links neuroscientific theories of structure to functional network dynamics. § CODE AVAILABILITY & LICENSE Code and data are available at <https://github.com/MunozatABI/MotorCortex>. This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (<https://creativecommons.org/licenses/by-nc/4.0/>). § ACKNOWLEDGEMENTS The authors thank Gonzalo Maso Talou and Gurleen Singh for their valuable discussions and Mark Sagar for initial project funding and administration. § FUNDING L.H. was supported by funding from Callaghan Innovation. The funders had no role in the study design or decision to publish. § CONFLICT OF INTEREST The authors declare no conflicts of interest.
http://arxiv.org/abs/2307.01376v2
20230703221132
Particle acceleration by magnetic reconnection in geospace
[ "Mitsuo Oka", "Joachim Birn", "Jan Egedal", "Fan Guo", "Robert E. Ergun", "Drew L. Turner", "Yuri Khotyaintsev", "Kyoung-Joo Hwang", "Ian J. Cohen", "James F. Drake" ]
physics.space-ph
[ "physics.space-ph", "astro-ph.EP", "physics.plasm-ph" ]
Particle acceleration by magnetic reconnection in geospace]Particle acceleration by magnetic reconnection in geospace [1]Mitsuo Okamoka@berkeley.edu 2,3]Joachim Birn 4]Jan Egedal 3]Fan Guo 5,6]Robert E. Ergun 7]Drew L. Turner 8]Yuri Khotyaintsev 9]Kyoung-Joo Hwang 7]Ian J. Cohen 10]James F. Drake *[1]Space Sciences Laboratory, University of California Berkeley, 7 Gauss Way, Berkeley, 94720, CA, USA [2]Center for Space Plasma Physics, Space Science Institute, 4765 Walnut Street, Boulder, 80301, CO, USA [3]Los Alamos National Laboratory, Los Alamos, 87545, NM, USA [4]Department of Physics, University of Wisconsin-Madison, 1150 University Avenue, Madison, 53706, WI, USA [5]Laboratory for Atmospheric and Space Physics, University of Colorado, 1234 Innovation Drive, Boulder, 80303, CO, USA [6]Department of Astrophysical and Planetary Sciences, University of Colorado, 2000 Colorado Avenue, Boulder, 80309, CO, USA [7]The Johns Hopkins Applied Physics Laboratory, 11100 Johns Hopkins Road, Laurel, 20723, MD, USA [8]Swedish Institute of Space Physics, Uppsala, 75121, Sweden [9]Southwest Research Institute, 6220 Culebra Road, San Antonio, 78238, TX, USA [10]Department of Physics, the Institute for Physical Science and Technology and the Joint Space Science Institute, University of Maryland, College Park, 20742, MD, USA Particles are accelerated to very high, non-thermal energies during explosive energy-release phenomena in space, solar, and astrophysical plasma environments. While it has been established that magnetic reconnection plays an important role in the dynamics of Earth’s magnetosphere, it remains unclear how magnetic reconnection can further explain particle acceleration to non-thermal energies. Here we review recent progress in our understanding of particle acceleration by magnetic reconnection in Earth’s magnetosphere. With improved resolutions, recent spacecraft missions have enabled detailed studies of particle acceleration at various structures such as the diffusion region, separatrix, jets, magnetic islands (flux ropes), and dipolarization front. With the guiding-center approximation of particle motion, many studies have discussed the relative importance of the parallel electric field as well as the Fermi and betatron effects. However, in order to fully understand the particle acceleration mechanism and further compare with particle acceleration in solar and astrophysical plasma environments, there is a need for further investigation of, for example, energy partition and the precise role of turbulence. [ [ August 1, 2023 ================== § INTRODUCTION §.§ Motivation and Structure Particles are accelerated to very high, non-thermal energies during explosive energy-release phenomena in space, solar, and astrophysical plasma environments. Unlike remote-sensing measurements of distant astrophysical objects that are often difficult to resolve spatially, in-situ measurements of Earth's magnetosphere provide unique opportunities to directly study particle acceleration and its spatial and temporal variations down to the kinetic scale. In fact, through decades of study, it is now established that magnetic reconnection — a plasma process that converts magnetic energy into particle energy — plays an important role in the dynamics of the energy-release process in the magnetotail <cit.>. 
However, it remains unclear how magnetic reconnection can further explain particle acceleration to non-thermal energies (typically ≳ 10 keV) during explosive energy-release phenomena in Earth's magnetosphere, although significant progress has been made in the past decades with spacecraft missions such as Geotail, WIND, Cluster, THEMIS/ARTEMIS, and MMS, combined with theories and simulations. Thus, the main purpose of this paper is to review the most recent advances in our understanding of particle acceleration by magnetic reconnection in geospace which includes the magnetotail and the dayside magnetosphere. Many observational reports of particle acceleration come from the magnetotail probably because the environmental parameter m_iV_A^2 can be much larger in the magnetotail, where m_i is the ion mass and V_A is the Alfvén speed and therefore the energization, both heating and acceleration to non-thermal energies, becomes significant <cit.>. It should be noted that the term `particle acceleration' typically refers to the process of energizing particles to non-thermal energies and does not include the meaning of heating, an increase of the plasma temperature. Therefore, a discussion of particle acceleration usually involves a power-law form of energy spectrum. However, in some cases, the term `acceleration' is used in its literal sense, as shown in the equation of motion, ma=F where m, a, and F represent the particle mass, acceleration, and force, respectively. This usage does not differentiate between thermal and non-thermal components. For example, Fermi acceleration in the guiding-center approximation (which will be discussed in the following section) applies to both thermal and non-thermal particles. In this paper, we have attempted to use the phrase `acceleration to non-thermal energies' when the discussion pertains to the non-thermal component. Also, we sometimes used the term `energization' when we do not differentiate thermal and non-thermal components. There are already relevant review articles on particle acceleration in geospace that focus on theories <cit.> and specific topics such as power-law index <cit.> and dipolarization front <cit.>. However, this paper will provide a more general overview of observations and simulations of particle acceleration to non-thermal energies both near the `reconnection region' (highlighted in yellow in Fig. <ref>), which is referred to as X-line in simplified (e.g., two-dimensional or north-south symmetric) geometry, and at large scale where the intrinsic dipole field of the magnetosphere becomes important (i.e., the `collapsing region' as highlighted in blue in Fig. <ref>). For an up-to-date overview of the relevant context of magnetic reconnection at global scales and its associated cross-scale aspects, readers are referred to <cit.> and <cit.> in this collection, respectively. The paper is structured as follows: <ref>. Introduction <ref>: Motivation and structure <ref>: Key theories <ref>: Example observations and challenges <ref>. Particle acceleration near the X-line <ref>: Active vs quiet <ref>: Fermi vs betatron <ref>: Parallel electric field <ref>: Waves and turbulence <ref>. Particle acceleration at large scales <ref>: Overview <ref>: Anisotropies in dipolarization events <ref>: Acceleration mechanisms <ref>: Sources and seeding <ref>: Diamagnetic cavities <ref>. Outstanding problems <ref>: Energy partition <ref>: Precise role of turbulence <ref>. 
Summary and conclusion §.§ Key theories §.§.§ Particle acceleration mechanisms in the guiding-center approximation Observations <cit.> and simulations have shown that the thickness of a current sheet ought to be less than a typical ion gyroradius or inertial length to enable reconnection or other activity. Thus, ions are expected to be non-adiabatic near the reconnection sites. In contrast, electrons may show adiabatic behavior much closer to an X-line, such that a guiding center approach seems more reasonable. In the guiding-center approximation, the main acceleration mechanisms are Fermi acceleration, betatron acceleration, and the direct acceleration by the parallel electric field <cit.>. Fermi acceleration occurs when a particle encounters a dynamically evolving, curved magnetic field. The betatron acceleration describes the process where the increasing magnetic field leads to the energy gain in the perpendicular direction due to the conservation of the first adiabatic invariant, whereas during the direct acceleration particles stream along the magnetic field and gain energy if a significant parallel electric field exists. Fig. <ref>a shows several patterns where these acceleration mechanisms may happen. The main energy gain of a single particle under the guiding-center limit is: dε/dt = q E_∥·v_∥ + μ/γ(∂ B/∂ t + 𝐮_𝐄·∇ B) + γ m_e v^2_∥ (𝐮_𝐄·κ) Here, q is the particle charge, m_e is the electron mass, 𝐯_∥ and 𝐯_⊥ are the parallel and perpendicular component of the particle velocity, μ is the magnetic moment, γ is the Lorentz factor, and 𝐮_𝐄 = E×B/B^2 is the electric drift velocity, κ is the curvature of magnetic field lines. The first term on the right is the parallel electric field acceleration, the second term corresponds to the betatron acceleration, and the third term is associated with the Fermi acceleration, corresponding to the curvature drift acceleration. In Fig. <ref>a, the Fermi acceleration is assumed to be driven by the curved magnetic field that drifts at the Alfvén speed u_A. It is to be noted that Fermi acceleration is traditionally viewed as bouncing between converging magnetic mirror points (1st order Fermi acceleration of type A) <cit.> when the energization can be inferred from conservation of the second adiabatic invariant. However, parallel, i.e., Fermi acceleration may also result from an encounter with a strongly curved, moving magnetic field structure, akin to a slingshot effect (1st order Fermi acceleration of type B) <cit.>. Thus, Fermi `reflections' may occur when a particle encounters a sudden change in magnetic topology and/or field strength, resulting in a sudden change in pitch angle and potentially a reflection. If the sudden change in topology is associated with a moving magnetic structure, then the particle gains energy corresponding to the speed of the structure in the particle’s rest frame (i.e., the rate of energy gain is nearly proportional to the particle’s energy). For electrons, a single encounter produces only a relatively small energy gain, because the speed of the structure is small compared to the electron thermal speed. However, multiple encounters can add up to substantial energy gains. Such Fermi reflections can occur in collapsing closed field regions (Fig. <ref>) <cit.> or in contracting or merging islands <cit.>. It is also useful to note that betatron and first-order Fermi acceleration can be viewed as 𝐄×𝐁 drift toward increasing magnetic field strength or in the direction of a magnetic field curvature vector, respectively (Eq. 
(<ref>)), but equivalently also as grad B drift or curvature drift, respectively, in the direction of an electric field (opposite for electrons) <cit.>. The former indicates that the relative role of the two mechanisms depend on the magnetic field geometry, while the latter indicates a pitch angle dependence (as the curvature drift speed depends on the parallel particle energy and grad B drift depends on the perpendicular one) . One can statistically evaluate the importance of the mechanisms by ensemble averaging particle motions, where the collective perpendicular particle current density for each species s is: J_s⊥ = p_s∥B× (B·∇) B/B^4 + p_s⊥B×∇B/B^3 - [∇×p_s⊥B/B^2 ] + ρ_su_E - n_sm_s dv_s/dt×B/B^2 where p_s∥ and p_s⊥ are parallel and perpendicular pressures to the local magnetic field, respectively, ρ_s is the charge density, n_s is particle number density, m_s is particle mass, 𝐯_𝐬 is the species flow velocity, and d/dt ≡∂_t + 𝐯_𝐬·∇. The terms on the right shows the current due to curvature drift, grad B drift, the perpendicular magnetization, electric drift, and the polarization drift. The total energization can be shown with J·E. Another equivalent expression for the J·E, after a rearrangement, is J_s⊥·E_⊥ = ∇· (p_s⊥u_E) - p_s∇·u_E - (p_s ∥ - p_s ⊥) b_ib_j σ_ij where σ_ij = 0.5 (∂_i u_Ej + ∂_j u_Ei - (2∇·u_E δ_ij/3)) is the shear tensor of 𝐮_𝐄 flow, p_s ≡ (p_s∥ + 2p_s⊥)/3 is the effective scalar pressure, and we have ignored the effect of the polarization drift <cit.>. This expression shows the role of fluid compression and velocity shear in the energy gain <cit.>. These can be connected with the recent work of pressure-strain terms for gaining insight in turbulent plasmas <cit.>. Over the past decade, particle-in-cell simulations and test-particle simulations have been widely used to evaluate these acceleration mechanisms. Several particle kinetic simulations <cit.>, modeling the formation of multiple magnetic islands or flux ropes and their merging, indicated an overall dominance of Fermi acceleration over betatron acceleration. In contrast, a test-particle simulation of electron drifts in a collapsing magnetic arcade with strong guide field indicated a dominance of betatron acceleration <cit.>. This confirms that the relative role of the two mechanisms depends on the field geometry, for instance, betatron acceleration may be expected to be important particularly inside of reconnection jet fronts and collapsing magnetic traps. In addition, parallel electric fields are shown to contribute to particle energization and modify the distribution functions <cit.>. As the guide field increases, the Fermi acceleration becomes less efficient, but acceleration by the parallel electric field is not very sensitive to the guide field <cit.>. §.§.§ Formation of nonthermal power-law energy spectra in reconnection acceleration Power-law energy spectra are a main feature of nonthermal acceleration and are of great interest to reconnection studies <cit.>. There have been debates and some confusion about the formation of nonthermal power-law energy spectra during particle acceleration in magnetic reconnection; therefore this issue is worth clarifying. As shown by Fig. 
<ref>b, we illustrate a simple case where the main acceleration term is a Fermi-like relation ε̇ = αε (α is the acceleration rate) in an energy continuity equation <cit.>, which represents the case when the first-order Fermi acceleration dominates the acceleration process: ∂ f/∂ t + ∂/∂ε (ε̇ f) = f_inj/τ_inj - f/τ_esc As reconnection proceeds, the ambient plasma is continuously injected into the reconnection layer through an inflow speed u_in. τ_inj is the timescale for the injection of low-energy particles f_inj, and τ_esc is the escape timescale. For illustration purposes, we assume that the upstream distribution is a Maxwellian distribution f_inj = (2N_inj/√(π))√(ε_0)exp(-ε_0), where ε_0 = ε/ε_th is energy normalized by the thermal energy. With these assumptions, the solution to Eq. (<ref>) can be written as f(ε, t) = 2 N_inj/√(π)(ατ_injε_0^1+β)[Γ_3/2+β(ε_0 e^-α t) - Γ(3/2+β)(ε_0)], where β = 1/(ατ_esc) and Γ_s(x) is the upper incomplete Gamma function. Fig. <ref>c illustrates this simple solution for a few different escape time and ατ_inj. As reconnection proceeds, new particles are injected and accelerated in the reconnection, and a power-law distribution can form when ατ_inj is large. Note that the derivation also shows that power-law distribution can still form even for the case with no escape term as shown in Fig. <ref> (red curve). However, if the population of particles is initially in the current sheet, it can be shown the distribution remains a Maxwellian <cit.>. It is often argued that some loss mechanism is needed to form a power-law distribution, but the simple analytical solution does not support it. Here the main physics for forming a power-law is due to the continuous injection and Fermi acceleration. Meanwhile, it is still important to understand the escape term, as it can strongly change the shape of the distribution. Other acceleration can, in principle, form a power-law, and the steady-state solution has the spectral index p = 1 + 1/ατ_esc + ∂lnα/∂lnε . This equation includes the case where the acceleration rate has an energy dependence. When the product of ατ_esc does not depend on energy and α has a power-law dependence on energy ε over a certain energy range, p is a constant across this range (power-law energy spectra). In the context of magnetic reconnection, magnetic islands (or flux ropes in 3D) can play an important role in particle acceleration <cit.>. Using a more formal, particle transport equation that captures the essential physics of particle acceleration in multi-island region, the possibility of compressible flux-rope contraction and merging in a turbulent media was considered in <cit.>. It was shown theoretically that both curvature drift and betatron acceleration, due to an increasing flux-rope magnetic field strength, contribute to kinetic energy gain, and the particle acceleration is a first-order Fermi acceleration process when the particle distribution is isotropic or nearly isotropic. §.§.§ Beyond guiding-center approximation Although numerical simulations have shown that the guiding-center approximation can well describe the acceleration of particles in the reconnection region, the particle dynamics in the reconnection region can be more complicated. Close to the X-line, it is well known that the gyrotropic approximation is not valid. Although the X-line may not be the region with the strongest acceleration, they may support local acceleration that is of interest to in situ observations. 
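Returning to the injection-acceleration model above, the build-up of a power law from continuous injection plus a Fermi-like rate can be verified with a direct finite-difference integration of the energy continuity equation. The sketch below uses a first-order upwind scheme; the values of α, τ_inj and τ_esc are arbitrary, so only the trend (not the specific numbers) is meaningful.

```python
import numpy as np

eps = np.logspace(-2, 4, 400)                 # energy in units of the injected thermal energy
d_eps = np.gradient(eps)                      # approximate cell widths

alpha, tau_inj, tau_esc = 1.0, 0.2, 5.0       # Fermi rate and injection/escape times (arbitrary)
f_inj = (2.0/np.sqrt(np.pi)) * np.sqrt(eps) * np.exp(-eps)   # Maxwellian injection spectrum

f = np.zeros_like(eps)
dt = 0.2 * np.min(d_eps / (alpha * eps))      # explicit, CFL-limited time step
t = 0.0
while t < 15.0 / alpha:                       # run long enough for the fitted range to settle
    flux = alpha * eps * f                    # energy-space flux, eps_dot * f
    div = np.empty_like(f)
    div[1:] = (flux[1:] - flux[:-1]) / d_eps[1:]   # first-order upwind (flow is toward higher energy)
    div[0] = flux[0] / d_eps[0]
    f += dt * (-div + f_inj / tau_inj - f / tau_esc)
    t += dt

sel = (eps > 10.0) & (eps < 1e3)              # fit the tail well above the injection energy
p_fit = -np.polyfit(np.log(eps[sel]), np.log(f[sel] + 1e-300), 1)[0]
print(f'fitted index p ~ {p_fit:.2f}; steady-state prediction '
      f'1 + 1/(alpha*tau_esc) = {1 + 1/(alpha*tau_esc):.2f}')
```

For these parameters the fitted index should approach the steady-state prediction p = 1 + 1/(ατ_esc), with the first-order scheme softening the high-energy cutoff slightly.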
The X-line region with a weak guide field can support chaotic orbits of particles <cit.>. During particle motion, waves and turbulence can modify the particle distribution. However, as long as the particle distribution is gyrotropic, Eq. (<ref>) can still statistically describe the acceleration, even if significant pitch-angle scattering occurs <cit.>. Therefore, some caution is needed when interpreting the results of the analysis. In addition, waves and turbulence may lead to stochastic heating and acceleration <cit.>. The guiding-center description does not distinguish electrons and ions, meaning multiple species can be accelerated <cit.>. However, in the context of the magnetosphere, ions may not be well described by the guiding center approximation, as their gyroradii can be fairly large compared to characteristic scales at which the fields evolve. In particular, close to the center of the magnetotail current sheet, the gyroradii can approach the curvature radius of the magnetic field, and the ions experience strong scattering when crossing the current sheet <cit.>. Furthermore, instabilities, waves, and turbulence that are generated during magnetic reconnection may lead to more efficient acceleration of particles <cit.>. §.§ Example observations and challenges The possible limitation of the guiding-center theory and the importance of turbulence may be glimpsed in recent examples of magnetotail reconnection. Fig. <ref> shows two cases of particle acceleration during magnetotail reconnection, obtained by Magnetospheric MultiScale (MMS). The 2017 July 11 event (left column) is a case with less-enhanced heating and turbulence and has been studied by many authors <cit.>. On the other hand, the 2017 July 26 event (right column) is a case with significantly enhanced heating and turbulence and has also been studied intensively <cit.>. A puzzle is that, some properties of magnetic reconnection (e.g., heating and turbulence) appear differently in these two cases, and yet particles (both ions and electrons) are accelerated to non-thermal energies in both cases. For electrons, the non-thermal, power-law tail may even be softer in the significantly heated and turbulent case <cit.> but it remains unclear how the observed power-law index can be explained. For ions, the energy spectrum could be more complicated. While the bulk flow component could peak around 1 keV, the higher-energy end of the spectrum may be influenced by the physical size of the energization region, as often argued in the shock physics <cit.>. In the Earth's magnetotail, the gyroradii of ions with energies greater than ∼100 keV may exceed several ion skin depths. In any case, the similarities and differences of these two cases lead to questions such as `What is the precise condition of particle acceleration?', 'What is the precise role of turbulence?', `How particle energies are partitioned between thermal and non-thermal energies?', and ultimately `How are particles heated and accelerated to non-thermal energies?'. By reviewing recent progress in more detail in this paper, we hope to clarify what we know so far, what ideas have been discussed, and what we need to work on in the near future. § PARTICLE ACCELERATION NEAR THE X-LINE §.§ Active vs. Quiet Early magnetotail studies showed that, in the plasma sheet, the energy spectra become non-thermal above ∼10 keV for ions and ∼1 keV for electrons and are often represented by the kappa distribution <cit.>. The power-law index κ is often in the range of κ≳ 4, as shown in Fig. <ref>a. 
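Since the quoted κ values map directly onto the hardness of the suprathermal tail, a short numerical check is useful. The functional form below is only one of several conventions in use for the kappa distribution, with the core energy set to unity; it is meant purely to illustrate how a single index controls the tail.

```python
import numpy as np

def kappa_spectrum(E, kappa, E0=1.0):
    """One common kappa form (conventions differ between papers): a
    Maxwellian-like core joining a power-law tail whose hardness is set by kappa."""
    return np.sqrt(E) * (1.0 + E/(kappa*E0))**(-(kappa + 1.0))

E = np.logspace(-1, 3, 200)                   # energy in units of the core energy E0
for kappa in (4.0, 6.0, 10.0):
    slope = np.gradient(np.log(kappa_spectrum(E, kappa)), np.log(E))
    print(f'kappa = {kappa:4.1f}: high-energy log-slope -> {slope[-1]:6.2f} '
          f'(asymptotically {-(kappa + 0.5):.2f} for this form)')
```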
<cit.> showed that these energetic particles may be the result of energization occurring during reconnection in the tail. They found that the hardest particle spectrum (i.e., the most energetic particles) was observed near the center of an ion diffusion region traversed by the Wind spacecraft in the deep (60 R_E) terrestrial magnetotail. This work was followed up by <cit.>, who explored whether a similar result would be found for a set of six electron diffusion regions discovered by MMS. Comparing the results to a statistical dataset of 133 quiet-time (i.e., AE^* 300 nT and no fast (|v_ion avg| ≥ 100 km/s) flows) plasma sheet crossings (PSC), the authors found that the electron diffusion region (EDR) events did in fact have harder spectra (i.e., more energetic particles) (Fig. <ref>b). The result suggests that these energetic electrons are coming from a local source associated with active reconnection <cit.>. In fact, an observational study reported significant heating within the EDR, followed by an appearance of the non-thermal tail in the immediate downstream of the EDR <cit.>. MMS observations also reported a significantly enhanced flux of energetic electrons within the EDR, although the non-thermal, power-law tail was soft with the power-law index of ∼8 as measured in the phase space density <cit.>. Interestingly, <cit.> reported coherent gyrophase bunching of > 50 keV electrons in the immediate downstream of the EDR and argued that it can be caused by the first-order Fermi acceleration Type B off of the outflowing exhaust structure, evidencing electron acceleration at the reconnection site and possibly also in the outflowing exhaust jets of the active reconnection. Despite the possible importance of magnetic reconnection, it has also been reported that the non-thermal component is significant even during periods of low geomagnetic activity (AE 100 nT) <cit.>. This is also illustrated by the overlap of the PSC and EDR histograms in Fig. <ref>. <cit.> argued that such energetic particles may be sourced by remote down-tail reconnection sites or processes not directly related to reconnection at all. This is consistent with an earlier report of significant non-thermal tail during low geomagnetic activity <cit.>. Similarly, <cit.> examined the spatial variation across the reconnection region and reported that the non-thermal power-law tail can exist even outside the reconnection region (Hall region) where there is no significant plasma flows and turbulence. Therefore, the relationship between the production of energetic electrons and the geomagnetic activity remains unclear, let alone the importance of the EDR. §.§ Fermi vs betatron As reviewed in Section <ref>, the main acceleration mechanisms in the guiding-center approximation are Fermi acceleration, betatron acceleration, and the direct acceleration by the parallel electric field. While the parallel electric field might be important for heating (as separately reviewed in Section <ref>) or for acceleration to non-thermal energies in some cases <cit.>, many studies argue that Fermi and betatron acceleration are predominantly important during magnetic reconnection. In observational studies, a pitch angle anisotropy has been the key feature for diagnosing Fermi and betatron acceleration <cit.>. Particles experiencing Fermi and betatron acceleration tend to exhibit parallel and perpendicular anisotropy, respectively. 
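In practice, the relative importance of the three channels is often quantified with the fluid (pressure-weighted) counterparts of the guiding-centre terms introduced in the Key theories subsection, evaluated from particle moments, fields, and multi-spacecraft estimates of the gradient and curvature. The helper below simply encodes those expressions; the numbers in the example are illustrative magnetotail-like values and are not taken from any particular event.

```python
import numpy as np

def energization_terms(B, u_E, grad_B, curvature, dB_dt, p_par, p_perp, j_par, E_par):
    """Fluid (pressure-weighted) counterparts of the three guiding-centre
    channels for one species, neglecting the bulk parallel-flow contribution.

    B [T], dB_dt [T/s]          : field strength and its time derivative
    u_E [m/s], grad_B [T/m],
    curvature [1/m]             : 3-vectors (gradients/curvature typically from
                                  a four-spacecraft estimate)
    p_par, p_perp [Pa]          : parallel/perpendicular pressures
    j_par [A/m^2], E_par [V/m]  : field-aligned current density and electric field
    Returns (W_parallel, W_betatron, W_Fermi) in W/m^3.
    """
    W_parallel = j_par * E_par
    W_betatron = (p_perp / B) * (dB_dt + np.dot(u_E, grad_B))
    W_fermi = p_par * np.dot(u_E, curvature)
    return W_parallel, W_betatron, W_fermi

# Illustrative magnetotail-like numbers, not data from any particular event:
W = energization_terms(B=10e-9, u_E=np.array([5e5, 0.0, 0.0]),
                       grad_B=np.array([1.5e-15, 0.0, 0.0]),
                       curvature=np.array([1e-6, 0.0, 0.0]), dB_dt=0.0,
                       p_par=5e-11, p_perp=5e-11, j_par=2e-9, E_par=1e-4)
print(['%.2f pW/m^3' % (w * 1e12) for w in W])
```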
However, with the launch of MMS in 2015, electron data with the time resolution of ∼100 times higher than its predecessors became available. Such data sets, combined with the multi-spacecraft approach which is necessary to estimate the magnetic field curvature, have enabled us to evaluate each term in Eq. (<ref>), providing a more direct diagnostics of the acceleration mechanism, i.e., Fermi acceleration, betatron acceleration, and the direct acceleration by the parallel electric field. Significant progress has been made with such analysis and will be reviewed below. It is to be noted that most of the discussion in this subsection is focused on electron acceleration, although there have been some studies of ion acceleration by simulations <cit.> and observation <cit.>. §.§.§ Outflows near the X-line Early studies argued that, in the outflow region immediately downstream of the X-line, magnetic field magnitude increases and that electrons are accelerated by the gradient B and/or curvature drift <cit.>. However, it was also argued that, above a few keV, the κ value of electrons can approach ∼ 1 where κ^2 is the ratio of the magnetic field curvature and the particle gyro-radius. In such a condition, a non-adiabatic behavior or scattering becomes important. <cit.> proposed that electrons are first pre-energized at the X-line, accelerated non-adiabatically in the pileup region in the immediate downstream region, and then further accelerated adiabatically in association with burst bulk flows (BBFs) in the outflow region. While energetic electron events tend to be rare tailward of the X-line in the tail, <cit.> reported three cases of outflow jets on the tailward side and argued based on anisotropy that electrons were accelerated adiabatically by both Fermi and betatron effects. The observations were made on the tailward (or `unconfined') side where the effect of the intrinsic, dipole magnetic field can be neglected, but the outflow speeds were increasing in time (or `growing') leading to the compression or strengthening of the magnetic field. More recently, MMS has enabled to study the acceleration mechanism with the guiding-center approximation described by Eq. (<ref>). A case study of tailward outflows reported that the dominant mechanism, both on average and the peak values, was Fermi acceleration with a peak power density of about +200 pW/m^3 <cit.>. During the most intense Fermi acceleration, the magnetic field curvature was comparable to the electron gyro-radius (i.e., κ∼ 1), suggesting electrons were being scattered efficiently. Fig. <ref> shows the schematic illustration of their interpretation. In the current sheet center, the power-density of electron acceleration due to Fermi acceleration, W_Fermi, and betatron acceleration, W_betatron, are positive because the magnetic field magnitude increases with the increasing distance from the X-line. At the edges of the current sheet, however, incoming electrons experience decreasing magnetic field and hence negative values of W_betatron. Interestingly, some of these findings (such as moderately non-adiabatic behaviors, energy loss at the edges, etc.) are consistent with earlier simulation results <cit.>. §.§.§ Flux ropes A magnetic flux rope is one of the key structures associated with magnetic reconnection. It is often referred to as a magnetic island especially in 2D theoretical pictures <cit.>. A distinction is typically made based on the absence or presence of a magnetic field component along the center of the island or rope structure. 
Many observations indicate that electrons are accelerated to non-thermal energies within the flux ropes both in the magnetotail <cit.> and in the magnetopause <cit.>. Also, multi-island coalescence may be a key process for the energy conversion during reconnection and associated acceleration of particles <cit.>. The standard theory for electron acceleration in flux ropes is the contracting island mechanism, whereby particles receive a Fermi-type energization kick at each end of an actively contracting magnetic island <cit.> <cit.>. The process requires an escape process in order to explain the observed spectral indices of energetic particles. Observations also indicate the importance of Fermi acceleration in addition to betatron acceleration <cit.>. <cit.> studied electron acceleration within ion-scale flux ropes by evaluating the equation for adiabatic electrons with the guiding center approximation (GCA, Eq. (<ref>) in Section <ref>). Their analysis indicated that the lower energy (<10 keV), field-aligned electrons experienced predominantly Fermi acceleration in a contracting flux rope, while the higher energy (>10 keV) electrons with perpendicular anisotropy gained energy mainly from betatron acceleration. They argued that the dominance of betatron acceleration at high energies could be a consequence of the 3D nature of the flux rope. The field-aligned electrons that can experience Fermi acceleration would quickly escape along the axis of the flux rope. Because of the successful application of the GCA theory, the study was positively commented by <cit.>. In another case study of a pair of tailward traveling flux ropes, <cit.> reported that, while Fermi and parallel potential is strong near the X-lines between the flux rope pair, betatron is strong on flux rope boundaries. For electron acceleration at the magnetopause, <cit.> reported an interaction of two filamentary currents (FCs) within a flux rope and argued that the electrons were mainly accelerated by the betatron mechanism in the compressed region caused by the FC interaction. However, electron acceleration in flux ropes might not always be adiabatic <cit.>, and the parallel electric fields might become important, particularly for small-scale, secondary flux ropes that form at and around the primary X-line with intensified current <cit.>. There can also be intense wave activities, turbulence, and current filaments inside flux ropes <cit.> that can lead to stochastic acceleration. Recent 3D simulations have demonstrated that such turbulence and associated induced electric field can result in strong heating of electrons <cit.>. §.§ Parallel electric field In the earlier years of magnetic reconnection studies, it was proposed that electrons are accelerated directly in the reconnection electric field along the magnetic X-lines <cit.>. However, recent studies of magnetic reconnection have revealed a new adiabatic picture in which the parallel electric field plays an important role, as reviewed in this subsection. Here, it is worth noting that, while the rate of energy gain is roughly proportional to the particle’s energy for the cases of Fermi and Betatron acceleration, the rate of energy gain scales only with the particle speed v for the case of direct acceleration by parallel electric field (Section <ref>). Nevertheless, the acceleration by parallel electric field can boost thermal particles by orders of magnitude in energy and hereby provide a preenergized seed populations subject to further Fermi and Betatron energization <cit.>. 
For many plasma physics problems, it is important to understand how rapidly thermal (and super-thermal) electrons travel along the magnetic field lines. As an example, we may consider the July 11, 2017, reconnection event recorded at about 20R_E into the Earth's magnetotail with a typical electron temperature of 1 keV (See Fig. <ref>, left column, in Section <ref>). It follows that the electron thermal speed (v_te≃ 20·10^6 m/s) is about 400 times faster than the expected reconnection inflow speed v_in≃ v_A/10 ≃ 50·10^3 m/s. Thus, during the course of a fluid element (say, initially 1d_i≃ 1·10^6 m upstream of the reconnection site) traversing the reconnection region, a typical electron will travel a distance of about (v_te/v_in) d_i ≃ 80R_E (larger than the distance from Earth to the Moon). This means that electrons, once energized, would escape instantly from the energization site and would not exhibit a localized, enhanced flux at and around the energization site, if there were no confinement or trapping. In reality, however, energetic electrons are observed in the localized region of magnetic reconnection (See, for example, Fig. <ref> and other studies reviewed elsewhere in this paper). Therefore, we need a model for electron confinement or trapping to explain the observations. Due to the fast streaming of the electrons along the magnetic field lines, their parallel action, J=∮ v_∥ dl, is typically a well-conserved adiabatic invariant. As illustrated in Fig. <ref>, this J-invariance has been explored in a range of theoretical models for electron heating. The model of <cit.> considers a 2D periodic and incompressible system and in essence applies Jeans' theorem <cit.>, whereby the gross evolution of the electrons is governed by a double adiabatic assumption, f=f(J,μ) where μ is the magnetic moment, augmented with phenomenological pitch angle scattering. This Fermi heating model is only concerned with the large-scale energization of the electrons and ignores any variation in f along magnetic field lines. Meanwhile, <cit.> assumes that the reconnection region is embedded in a large open system, where the plasma in the ambient regions provides fixed sources of electrons, f_∞. Electrons may here be characterized as either passing or trapped. The passing electrons instantaneously travel along the field lines with their total energy conserved, U = E-eΦ_∥, where Φ_∥=∫_x^∞ E_∥ dl is the acceleration potential <cit.>. Meanwhile, the trapped electrons again follow Jeans' theorem, f_T=f_∞(J,μ). Furthermore, with the imposed boundary conditions it can be shown that f_T≃ f_∞(μ B_∞), and a relatively simple analytical form is obtained: f=f_∞(E - eΦ_∥) (passing), f= f_∞(μ B_∞) (trapped). These types of distributions are common in measurements within reconnection regions and have been observed by multiple spacecraft missions including Wind, Cluster, THEMIS, and MMS <cit.>. The global model by Drake et al. and the local model in Eq. (<ref>) can be obtained as two separate limits of the more general framework recently developed by <cit.>. The model by Drake et al. follows directly by imposing the conditions of n = constant and ∇_∥B = 0, while Eq. (<ref>) is recovered in the limit K≫ L, where, as illustrated in Fig. <ref>, K is the size of the periodic domain and L is the typical length scale for electron trapping in the reconnection region. Observations suggest that the bulk electron heating for a range of reconnection scenarios is largely governed by Eq. (<ref>).
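Before turning to the resulting equation of state, note that the free-streaming estimate given at the start of this subsection can be reproduced in a few lines. The sketch below is only the back-of-the-envelope arithmetic; the 1 keV temperature, the v_A/10 inflow speed, and the 1 d_i starting distance are the representative values quoted in the text, while the exact ion inertial length is an assumption, so the result lands in the ~60-80 R_E range rather than at a precise number.

import numpy as np

me, e, RE = 9.109e-31, 1.602e-19, 6.371e6        # electron mass [kg], charge [C], Earth radius [m]

Te_eV = 1.0e3                                     # electron temperature ~1 keV (from the event)
v_te = np.sqrt(2.0 * Te_eV * e / me)              # thermal speed ~2e7 m/s
v_in = 50.0e3                                     # inflow speed ~ v_A/10 (quoted value)
d_i  = 1.0e6                                      # ion inertial length ~1000 km (assumed)

transit_distance = (v_te / v_in) * d_i            # distance an electron streams while the
print(f"v_te/v_in ~ {v_te/v_in:.0f}")             # fluid element crosses ~1 d_i
print(f"free-streaming distance ~ {transit_distance/RE:.0f} R_E")
# ~60-80 R_E depending on the exact d_i, i.e., comparable to the Earth-Moon distance,
# which is why a trapping mechanism is needed to keep energized electrons near the X-line.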
The trapped electrons have negligible heat exchange with the ambient plasma, and when the majority of the thermal electrons are trapped the pressure components along and perpendicular to the magnetic field follow the CGL <cit.> scaling laws p_∥∝ n^3/B^2 and p_⊥∝ nB. This is also the asymptotic limit (at large n/B) of the equation of state derived directly from Eq. (<ref>) by <cit.> (hereafter referred to as the Lê2009 EoS). Consistent with E_∥≃ -∇_∥ p_∥/(en), <cit.> found that eΦ_∥/T_e∞∝ n^2/B^2. This dependency is much stronger than the typical Boltzmann scaling of eΦ_∥/T_e∞∝log(n/n_0), and within reconnection regions Φ_∥ typically becomes large and is responsible for trapping and heating the majority of thermal electrons. Fig. <ref> shows profiles of Φ_∥ recorded in a range of numerical simulations. A value of β_e∞ = nT_e∞/(B_∞^2/2μ_0)≃ 0.1 is often applicable to reconnection within Earth's magnetosphere, yielding the profiles of Φ_∥ displayed in Fig. <ref>(a, b). Meanwhile, on occasions in the Earth's magnetotail when lobe plasma reaches a reconnection region, the normalized pressure can drop dramatically with β_e∞≪ 0.1 (see Fig. <ref>(d)). From the principle of quasi-neutrality, it can be shown that the required parallel streaming of electrons then exceeds their thermal speed. The dynamics then enter a non-adiabatic regime with enhanced values of eΦ_∥/T_e∞≫ 10 over much more extended spatial regions (see Fig. <ref>(c) as well as <cit.>), likely relevant to recent MMS observations <cit.>. In Fig. <ref>(e), for asymmetric reconnection the largest values of Φ_∥ and p_∥/p_⊥ are observed in the low-β_e∞ inflow <cit.>. During island coalescence (in Fig. <ref>(f) with a guide magnetic field), the effect of Φ_∥ is also noticeable, and for this case the p_∥ values in Fig. <ref>(g) are enhanced by Fermi acceleration of the contracting island <cit.> above the levels predicted by the Lê2009 EoS outlined in Fig. <ref>(h). The red line in Fig. <ref>(g) represents the predictions of the Lê2009 EoS (see B/B_∞=1 in Fig. <ref>(h)) when including Fermi heating that enhances the parallel temperature of f_∞ by about a factor of 2. Again, these two effects are both captured by the formalism in <cit.>. The Lê2009 EoS has been verified directly by MMS during exhaust crossings of guide-field reconnection, both close to <cit.> and far <cit.> from the X-line. For example, the data in Fig. <ref>(a) is from the event far from the X-line first studied in <cit.>, where T_e∥ measured in the two inflows (green and blue) as well as in the reconnection exhaust (red) is observed to follow the aforementioned CGL limit of the Lê2009 EoS, where T_e∥∝ (n/B)^2. Slightly asymmetric inflow conditions set different values of proportionality, and the exhaust, comprised of a mixture of the two populations, falls in the middle. The black lines represent the Lê2009 EoS predictions (which also accurately account for the T_e⊥ observations, not shown here). For guide-field reconnection, the Lê2009 EoS has been implemented as a closure for the electrons in two-fluid simulations <cit.> and, as shown in Fig. <ref>(b), the predicted heating levels as a function of β_e∞ are consistent with THEMIS observations <cit.>. Likewise, for anti-parallel reconnection, the Lê2009 EoS has been applied <cit.> to derive theoretical scaling laws for the total electron energization as electrons approach and pass through the EDR. The theory is also consistent with kinetic simulation results as well as THEMIS observations in the reconnection exhausts (see Fig. <ref>(c, d)).
In Fig. <ref>(d), compared to the empirical scaling given by the red line, the black theoretical curve predicts reduced heating at large β_e∞. Both curves fall mostly within the error bars of the measurements. §.§ Waves and turbulence Magnetic reconnection in pre-existing turbulence is often referred to as `turbulent reconnection' <cit.>. Earth's magnetosheath is such an environment, where turbulence appears to drive (smaller-scale) magnetic reconnection <cit.>. On the other hand, magnetic reconnection can generate waves and turbulence in return <cit.>. In-situ observations in Earth's magnetotail indicate that strong waves and turbulence exist in the reconnecting plasma sheet even though the upstream, lobe region is quiet, indicating that magnetic reconnection itself excites waves and turbulence <cit.> <cit.>. In this section, we provide a brief review of particle energization associated with waves and turbulence observed near the X-line, including outflow jets. Similar waves and turbulence are also found at large scales near the flow-braking region, as reviewed in Section <ref>. §.§.§ Ion acceleration and turbulence during magnetic reconnection As suggested by MMS observations <cit.>, the physical processes of ion and electron acceleration can differ. Because ions have larger scale sizes (skin depth and gyroradii), they are the first in line to absorb the magnetic energy. In a region of turbulence, the E spectra (Fig. <ref>) have high enough energy density to explain the high ion energization rates through cyclotron resonance <cit.>. However, cyclotron resonance alone does not explain an accelerated tail or other details in the ion distributions (Fig. <ref>). Instead, a stochastic process needs to be considered, and it requires waves and turbulence that span a wide frequency range. In principle, ions can undergo Speiser-like orbits at the neutral sheet during magnetic reconnection. Nevertheless, when strong turbulence coexists within the neutral sheet, unmagnetized ions, which do not necessarily follow Speiser-like orbits, are more likely to gain energy from large impulses in the turbulent electric fields. It is worth noting that ions with initially high energies not only absorb more powerful, larger-scale electromagnetic energy, but also have a higher probability of being unmagnetized and passing through the neutral sheet. As a result, energization favors ions with initially higher energies, and an accelerated tail in the ion distributions can emerge. The kinetic process of ion energization in turbulence is an active, ongoing study in which MMS observations have given good insight <cit.>. §.§.§ Electron acceleration and turbulence during magnetic reconnection Recent observations <cit.> and simulations <cit.> have provided convincing evidence that turbulence plays a significant role in accelerating electrons to non-thermal energies in the magnetotail (See Section <ref> for further discussion). The observations are so detailed that the specific process of interaction between the turbulent electric field and electrons can be discussed, as summarized below <cit.>. Perpendicular electron energization requires circumvention of the first adiabatic invariant (μ=p_⊥^2/(2γ m_0 B)). Contrary to the case with ions, there is little power at or above the electron cyclotron frequency (Fig. <ref>) and E_∥ is small (as annotated in the figure), which suggests that electron energization should be negligible.
It is found, however, that energization can occur if the correlation length scale (d_corr) in the E turbulence is sufficiently small <cit.>. If an electron's parallel velocity is high enough that d_corr/v_∥ < 1/f_ce, it experiences changes in E in less than 1/f_ce in its frame and therefore can be energized perpendicular to 𝐁. Furthermore, if an electron's gyroradius is such that ρ_e ≥ d_corr, it can experience enhanced parallel energization, perpendicular energization, and pitch-angle scattering. Fig. <ref> illustrates the underlying process of electron acceleration by turbulent and electrostatic 𝐄. As it gyrates, a low-energy electron (2 keV in the figure) experiences a nearly constant E, whereas a higher-energy electron (20 keV in the figure) transits regions of changing E during its gyration. Even though 𝐄 is primarily electrostatic, the particle does not necessarily return to the same location in the perpendicular plane or to the same location along B, and therefore can experience a net energy change. A finite ∇×𝐄 can enhance acceleration. The velocity dependence is such that, once again, electrons with initially higher energies are favorably energized, which results in acceleration. Interestingly, the electron energization process can be greatly enhanced by trapping in magnetic depletions <cit.>. Electrons can transit a turbulent region in the magnetotail in a matter of seconds, which greatly limits their energization. If trapped, an electron experiences energization for a significantly longer time, leading to a much higher energy gain. This kinetic picture of ion and electron acceleration suggests that further study is needed. §.§.§ Electron energization associated with waves It is instructive to discuss more specifics of what constitutes turbulence. Previous observations have shown that waves are excited over a broad range of frequencies during magnetic reconnection and that they can be identified as lower hybrid waves, Langmuir waves, electrostatic solitary waves, and whistler waves <cit.>. Perpendicular anisotropies in the region behind a dipolarization front (Section <ref>) could act as a source of whistler waves <cit.> or electron-cyclotron waves <cit.>. Many studies have shown that specific types of waves can play an important role in particle heating. For example, Debye-scale electrostatic waves and structures have been detected and discussed in the context of electron heating (or energization below ∼ 1 keV) near the X-line both at the magnetopause <cit.> and the magnetotail <cit.>. Also, an association between whistler waves and intense bursts of energetic (tens to a few hundred keV) electrons near the reconnection separatrix has been reported in the context of magnetopause reconnection <cit.>, followed by a statistical study <cit.>. <cit.> analyzed the energy spectrum carefully and showed that such energetic electrons are not contaminated by the magnetospheric population and are indeed non-thermal. § PARTICLE ACCELERATION AT LARGE SCALES §.§ Overview While magnetic reconnection ultimately occurs at `microscopic', electron-kinetic scales within a plasma, reconnection results in macroscopic- to global-scale reconfiguration of the magnetic field topology and dynamics. In the inner magnetotail, this involves inductive electric fields that are responsible for particle acceleration far removed from the actual reconnection site itself. The earthward exhaust region is characterized by transient or more persistent increases in the northward magnetic field, called dipolarizations.
Transient events are typically associated with rapid flow bursts, which come to rest and/or get diverted azimuthally in a `flow-braking region' near or inside of about 10 R_E distance downtail. This is not a fixed distance, however. The fact that dispersionless energetic particle flux increases at tens to hundreds of keV (denoted `injections') are frequently observed at geosynchronous orbit <cit.>, or even inside, is an indication that impulsive electric fields can often penetrate more deeply than the fast flows. The transient dipolarization events and their associated flows are related to motional electric fields, which may exceed the electric field defining the rate of reconnection. These electric field enhancements are sometimes referred to as ‘rapid flux transport’ (RFT) events <cit.>. The dipolarization events typically include sharp increases of the northward magnetic field B_z, called `dipolarization fronts' <cit.>, followed by an interval of increased B_z, denoted `dipolarizing flux bundle' <cit.> or `Flux Pileup Region' <cit.>. Further details on the terminology and properties of particle acceleration are given in recent reviews <cit.>. Transient DFs typically separate a colder denser population in the pre-existing plasma sheet from the hotter, more tenuous population in the DFB, presumably ejected out from the reconnecting X-line <cit.>. Similar structures are detected for tailward flows as well (with B_z<0), and thus a more generalized term `reconnection front' is also used to combine both earthward and tailward cases <cit.>. Dipolarizations in the flow-braking region tend to show more persistent increases in B_z <cit.> as well as low or decreasing earthward flow speeds, which may include tailward bounces and oscillations <cit.>. They are commonly accompanied by strong electric fields, which may exceed the motional electric field of the transient events by one or more orders of magnitude up to about 100 mV/m <cit.>. In contrast to the RFT electric fields, which are typically duskward, the high-frequency fields also include significant field-aligned components. Numerous investigations have confirmed that the inductive electric fields associated with dipolarization events are the eminent cause of energetic particle flux increases, including injections observed at geosynchronous orbit. Their properties are briefly reviewed in Sections <ref> – <ref>. The effects of the fluctuating strong electric fields in the flow-braking region are not as well documented. They presumably arise from the turbulence associated with the flow-braking and diversion of the earthward flow and may provide a mechanism for particle energization, separate from, or in addition to, the effects of the transient fields, and contribute a source population for the outer radiation belt <cit.>, as well as a mechanism for energy dissipation <cit.>. §.§ Anisotropies in dipolarization events Many observations indicate that particles can be accelerated to non-thermal energies at and around the transient dipolarization events <cit.>, with anisotropies of the energetic particle distributions providing major clues of the underlying mechanism. Fig. <ref> shows two example observations by Cluster reported by <cit.>. One event was obtained when the bulk flow speed was decreasing and thus the main magnetic structure (denoted FPR, in this case) was considered decaying (left column). The other event was obtained when the bulk flow speed was increasing, and thus the FPR was considered growing (right column). 
The energetic (> 40 keV) electrons showed parallel anisotropy (indicating Fermi acceleration) and perpendicular anisotropy (indicating betatron acceleration) in the decaying and growing cases, respectively. Based on a statistical analysis of pitch-angle anisotropy, <cit.> consistently argued that, because outflow jets have higher speeds in the mid-tail region (X ≲ -15 R_E), there could be more efficient compression of the local magnetic field, leading to more frequent formation of the perpendicular anisotropy by betatron acceleration in the mid-tail region. It is important to distinguish full particle acceleration, which involves the history of a particle's motion, from the local acceleration rate. Estimating the latter, several investigations concluded that betatron acceleration was locally dominant at the dipolarization front (DF) proper <cit.> and that Fermi acceleration would be more effective at larger spatial scales. Using MMS data, <cit.> showed that the betatron acceleration rate dominates at many dipolarization fronts in the magnetotail in the X < -10 R_E range. Such a conclusion is consistent with the earlier, global-scale picture in which electrons are expected to experience predominantly Fermi acceleration in the stretched magnetic field in the magnetotail but undergo betatron acceleration as the magnetic field increases <cit.>. <cit.> also used observations from NASA's MMS mission to demonstrate how electron acceleration associated with dipolarization structures and BBFs in the magnetotail was energy-dependent but consistent with betatron acceleration (Fig. <ref>). <cit.> examined 13 dipolarization events using Cluster data and concluded that the electron acceleration up to 90 keV was consistent with betatron acceleration. <cit.> used Cluster data in the magnetic structures (flux rope and dipolarization) of an earthward reconnection jet, and found that in the dipolarization structure, electron acceleration was generally consistent with betatron acceleration, while within the flux rope, electron acceleration was more consistent with Fermi acceleration. While most conclusions are from single-point measurements, <cit.> used a constellation of MMS and Cluster satellites to infer consistency with adiabatic acceleration of electrons trapped within a dipolarization structure. Electron anisotropies may vary not only with distance from the Earth or from the reconnection site but also with respect to the distance from the neutral sheet (B_x ∼ 0). <cit.> reported pancake-type distributions (peaked at 90°) near the neutral sheet and mostly cigar-type distributions (peaked at 0° and 180°) away from the neutral sheet, consistent with a predominance of betatron acceleration of ∼90° particles close to the neutral sheet and Fermi acceleration of field-aligned electrons reaching higher latitudes. Ions can also be accelerated in association with BBFs and dipolarization events, as studied by recent MMS observations <cit.> and simulations <cit.>. Because their gyro-radii are relatively large, ions do not conserve the adiabatic moment, except in some average sense, and often behave non-adiabatically. Ion acceleration is further discussed in Section <ref>. §.§ Acceleration mechanisms §.§.§ Electrons The spatially and temporally localized cross-tail electric field associated with earthward propagating dipolarization fronts can result in trapping, earthward transport, and rapid acceleration of energetic particles, leading to the betatron effect from drift toward increasing B-fields.
Various models, which capture the essential localization of the E-field, have been based on the adiabatic drift approximation, concentrating on equatorial drift orbits. They clearly demonstrated how the motional, azimuthally oriented electric field associated with a magnetotail dipolarization and corresponding bursty bulk flow (BBF) of hundreds of km/s can accelerate energetic particles and transport them rapidly radially inward with the BBF itself <cit.>, and yield flux increases consistent with energetic particle observations. The localized electric field in RFT events can also cause parallel Fermi acceleration of ions and electrons bouncing through this region once or (for electrons) multiple times. Studies of this effect require orbit tracing in three-dimensional magnetic and electric fields, which are usually obtained from MHD simulations <cit.>. These studies confirmed the mechanism of temporary magnetic trapping within the magnetic field structures of DFBs, not only for electrons but also for ions <cit.>, and showed the rapid acceleration via betatron (perpendicular to the B-field) and/or Fermi (parallel to B) effects (Section <ref>). They demonstrated not only parallel and perpendicular anisotropies of energetic electron distributions, but also so-called `rolling pin' distributions <cit.> with peaks at 0°, 90°, and 180° pitch angles <cit.>, depending on energy, time, and location. Here, it is worth emphasizing again that even electron motion is not necessarily always adiabatic, especially at and around the X-line, in strongly curved low-B fields, or in regions of strong waves and turbulence, e.g., near the reconnection site. In such cases, the parallel electric field carried by whistler waves (Section <ref>) or kinetic Alfvén waves <cit.> might be important in addition to Fermi and betatron acceleration. The fate of DFBs and associated energetic particles has also been investigated within the Rice Convection Model <cit.>, covering the energy-dependent drift of depleted magnetic flux tubes (also denoted `bubbles') within a quasi-static inner magnetosphere model <cit.>. In this regard, accelerated electrons were demonstrated to be important as a likely seed population of the Earth's radiation belt. <cit.> conducted test-particle simulations of electrons in high-resolution, dynamic MHD fields to show how energetic electron injections from the magnetotail likely contribute a significant and possibly even dominant source of outer radiation belt electrons in the hundreds-of-keV range in the inner magnetosphere. <cit.> conducted a phase space density analysis using a combination of Van Allen Probes in the outer radiation belt and MMS in the magnetotail plasma sheet to demonstrate also that relativistic electron acceleration in the plasma sheet can result in sufficient intensities to serve as a direct source for outer radiation belt electrons. Here, it is worth emphasizing that the intensities in the magnetotail can reach radiation belt levels, yet the residence time of those electrons in the tail is only a few minutes, in contrast to residence times of several days in the outer radiation belt. §.§.§ Ions Acceleration of ions in dipolarization events can be similar to that of electrons. Details are summarized in recent reviews by <cit.> and <cit.> with references therein.
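The test-particle studies cited in this subsection, and the ion studies discussed next, typically rest on a Boris-type integrator stepping particles through prescribed (e.g., MHD-derived) electromagnetic fields. The sketch below shows a minimal non-relativistic Boris pusher for a single particle in uniform fields; the field values, time step, and initial conditions are placeholders chosen only for illustration, and production codes interpolate time-dependent simulation fields instead.

import numpy as np

def boris_push(x, v, q, m, E, B, dt, nsteps):
    """Advance one particle with the Boris scheme in prescribed E(x), B(x) fields."""
    traj = [x.copy()]
    for _ in range(nsteps):
        Ef, Bf = E(x), B(x)
        # half electric kick
        v_minus = v + (q * dt / (2 * m)) * Ef
        # magnetic rotation
        t = (q * dt / (2 * m)) * Bf
        s = 2 * t / (1 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)
        # second half electric kick and position update
        v = v_plus + (q * dt / (2 * m)) * Ef
        x = x + v * dt
        traj.append(x.copy())
    return np.array(traj), v

# Example: an electron E x B drifting in uniform fields (values are placeholders).
e, me = 1.602e-19, 9.109e-31
E_field = lambda x: np.array([0.0, 1e-3, 0.0])     # 1 mV/m duskward
B_field = lambda x: np.array([0.0, 0.0, 10e-9])    # 10 nT northward
x0, v0 = np.zeros(3), np.array([1e6, 0.0, 0.0])
traj, v_end = boris_push(x0, v0, -e, me, E_field, B_field, dt=1e-5, nsteps=2000)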
Simulations by <cit.> showed how the acceleration of protons in the central plasma sheet (CPS) is generally consistent with the betatron effect (with an average conservation of the first adiabatic invariant in the presence of an increase in magnetic field strength). This is consistent with conclusions of <cit.>, which were based on test particle tracing in high-resolution global MHD simulations. The simulations also demonstrated parallel acceleration (similar to Fermi acceleration of type B) by single (or, in rarer cases, multiple) encounters of a dipolarization front. In contrast to electrons, a single encounter of, or reflection at, a dipolarization front may result in observable, albeit moderate-energy, proton beams or precursor populations preceding a DF <cit.>. Presumably, a similar process can also happen at reconnection fronts on the tailward/anti-earthward side of magnetotail reconnection. Due to the mass dependence of the gyroradius that characterizes the encounter or reflection, this energization is even more effective for heavier ions, such as oxygen. As the energy gain essentially results from picking up the speed of the moving structure, it was also likened to a `pick-up' process <cit.>. The simulations have yielded characteristics of ion distributions, dominated by protons, that are consistent with observations right after the passage of a DF. At small distances from the neutral sheet, in the central plasma sheet, distributions show perpendicular anisotropy <cit.>, consistent with the betatron effect, which may be accompanied by lower-intensity, lower-energy field-aligned counter-streaming beams. At larger distances from the neutral sheet, close to the plasma sheet boundary, the distributions consist of crescent-shaped earthward field-aligned beams <cit.>. At distances slightly away from the plasma sheet boundary layer (PSBL) and closer to the neutral sheet, such crescent-shaped earthward beams can be accompanied by tailward beams, which apparently result from mirroring closer to Earth. In such a region, sometimes multiple earthward and tailward beams are observed, which may be considered the counterparts of the field-aligned electron populations, however, involving only a few bounces <cit.>. It is noteworthy that crescent-shaped earthward ion beams (including their tailward-streaming counterparts) can also result from reconnection deeper in the tail <cit.>. At higher energies, or for heavier ions, the gyroradius becomes comparable to, or larger than, the size of the dipolarizing acceleration region, and the ions may encounter this region exhibiting Speiser-type orbits or even traverse the acceleration region of the enhanced electric field in a demagnetized fashion <cit.>. The energy gain is essentially given by Δ W = q ∫ E_y dy, where q and E_y are the particle charge and the enhanced electric field, respectively <cit.>. This provides an upper limit to the possible acceleration of a given species, which is higher for multiply charged ions. In agreement with that conclusion, the E/q dependence of particle fluxes in MMS observations of energetic particle events associated with fast flows <cit.> indicated that He^++ and O^6+ of solar wind origin dominated the particle fluxes at the highest energies (> 400 keV). The non-adiabatic acceleration effects also lead to non-gyrotropic, phase-bunched velocity distributions of heavy ions <cit.>. §.§ Sources and seeding Observations do not give direct information about the sources of the accelerated particles.
The fact that transient DFs typically separate a hotter, more tenuous population inside a DFB from the colder, denser plasma ahead of it indicates that the pre-DF population is not the source of the energized population inside the DFB. More definite conclusions about the sources come from modeling, particularly from particle tracing in fields modeling the inward propagation of DFBs. Through backward tracing in dynamic MHD fields, <cit.> demonstrated how particles are seeded onto the reconnected field lines inside a DFB, thus gaining access to the acceleration processes, via two mechanisms: i) local cross-tail particle drifts in the plasma sheet configuration, and ii) direct entry enabled by remote reconnection of field lines (Fig. <ref>). The entry mechanisms are energy-dependent: at low energies, charged particles are closely tied to the field lines that undergo reconnection before participating in the inner tail collapse, whereas at higher energies, cross-tail drifts or even non-adiabatic cross-tail motions become more important and particles can enter the acceleration region from the flanks earthward of the reconnection site. <cit.> examined MMS observations of a series of dipolarizations associated with magnetotail reconnection and found that for electrons with energy >10 keV, extending into the relativistic range, the observed acceleration was largely consistent with betatron acceleration, and one important consequence of those observational results was that the source of electrons in the ambient, background plasma sheet must have been relatively uniform over a large portion of the magnetotail surrounding the MMS spacecraft. The upper energy limitation of electron flux increases found by <cit.> and earlier by <cit.> is consistent with the change of particle motion and source regions at high energies mentioned above. Using again MHD/test particle simulations, <cit.> further demonstrated energy and space dependence of source regions of accelerated electrons. Consistent with earlier conclusions, they explained the drop in fluxes observed at energies < 10 keV (consistent also with the results of <cit.>) as being related to the drop in density from the seed populations in the plasma sheet boundary layer (PSBL) and lobes, despite the fact that these particles were also adiabatically accelerated. Figure <ref>, modified after Fig. 4 of <cit.>, illustrates some important conclusions from modeling electron pitch angle distributions (PADs) right after the passage of a DF. The MHD configuration is indicated in the top panel (a). * Panels b and e demonstrate characteristic anisotropies of cigar-type (field-aligned) and pancake-type (perpendicular) away from, and close to, the neutral sheet, respectively; the two locations are indicated by the crosses in panel a. This result agrees with observations by Runov et al. (2013). * Panels c and f show the origins of the particles contributing to these PADs, demonstrating that they are composed of different sources: At the highest energies particles originate from the inner CPS, as illustrated in Fig. 13a, whereas at lower energies the outer CPS, the PSBL, and the lobes contribute, as shown in Fig. <ref>b. * Panels d and g show the relative energy gain along the phase space trajectory, represented by the ratio between the final energy and the energy at the source location. 
These panels illustrate the effects of `heating', increasing particle energies by a similar factor over a wide energy range, versus the acceleration of particles in a limited energy (and pitch angle) range. The distribution away from the neutral sheet in panel d shows the Fermi `heating' at pitch angles around 0° and 180°, whereas the distribution near the neutral sheet in panel g shows the betatron heating near 90° pitch angles. Both panels show an accelerated field-aligned population at v∼4-5 (corresponding to ∼80-130 keV for the chosen units), although in panels b, d this is distinct from the `heated' population mainly by the source locations in the inner CPS, where densities are higher. * The three peaks near 0°, 90°, and 180° in panels e and g also illustrate the formation of the `rolling-pin' distribution, documented observationally <cit.>. It is a combination of dominantly parallel (`cigar'-shaped) and perpendicular (`pancake'-shaped) distributions. The model particle tracing provides information on the immediate source regions, such as plasma sheet vs. lobes, which cannot easily be inferred from observations. Ultimately, particles originate from two sources: the solar wind and the ionosphere. The distinction between the two source regions was made traditionally on the basis of ion composition experiments, with H^+ indicating solar wind origin, while the presence of O^+ indicated an ionospheric source <cit.>. This view has been extended and modified significantly. On one hand, detailed test-particle tracing studies in global MHD models of storm-time magnetosphere evolution have demonstrated that ionospheric H^+ can also populate the plasma sheet and provide a seed population <cit.>. On the other hand, detailed studies of the energy/charge dependence of enhanced energetic particle fluxes showed that the contribution of heavy energetic ions to enhanced fluxes is not a fixed percentage but rather depends on energy and charge state, with O^+ (of ionospheric origin) dominating at lower energies of tens of keV, while multiply charged oxygen, particularly O^6+ of solar wind origin, was found to dominate at energies of hundreds of keV <cit.>. Reconnection on the dayside might also contribute to seeding of energetic particles in the near-Earth space environment. <cit.> reported on `microinjections' of relativistic electrons observed by MMS; microinjections are frequent and rapid, energy-dispersed to dispersionless enhancements of electron intensities observed around ∼ 10 R_E geocentric distance along the tailward flanks of the magnetosphere. By tracing dispersed particle signatures back to their dispersionless origins, <cit.> demonstrated that dispersed microinjection observations along the dusk-side of the magnetosphere map back to near the subsolar and early-afternoon magnetopause. <cit.> showed that the observed periodicity of microinjection electrons is consistent with a combination of Kelvin-Helmholtz (KH) waves and flux transfer events (FTEs) along the dayside magnetopause. Those results indicate that microinjected electrons might result from bursts of reconnection associated with the KH instability and FTEs along the dayside magnetopause. Conversely, the drops in fluxes around microinjection electrons might also be the signature of losses of energetic electrons through the magnetopause and to the magnetosheath; however, that electron loss process, too, is only enabled via reconnection resulting in magnetic connectivity across the magnetopause <cit.>.
Tracing test particles in a dynamically evolving MHD model has reproduced even the salient features of losses (including detailed variations both in space and time and the depth of penetration and persistence of particles in the magnetosheath) for different species in agreement with MMS observations <cit.>. §.§ Diamagnetic cavities An interesting topic that drew some attention in the recent decade is the diamagnetic cavities that form at high magnetic latitudes in the cusp region as a consequence of large-scale, dayside magnetopause reconnection <cit.>. This region has a substantially reduced magnetic field magnitude and is filled with dense, sheath-like plasma with high-energy (> 30keV) electrons and ions (including heavy ions). The high-energy particles exhibit perpendicular anisotropy <cit.>, and test-particle simulations suggest that those high-energy particles are produced locally via betatron and/or Fermi mechanisms while being trapped in the magnetic bottle like configuration associated with the cavity <cit.>. The relatively large size of the diamagnetic cavities, i.e., 3-5 R_E in width <cit.> indicates that they can be a major source of plasma (electrons, protons and oxygen ions) into Earth’s magnetosphere as well as providing a high-energy particle source <cit.>. § OUTSTANDING PROBLEMS There remain unsolved problems in the topic of particle acceleration by magnetic reconnection in geospace. Here, we describe two topics, energy partition and the precise role of turbulence. These problems are very relevant to particle acceleration in solar flares. §.§ Energy partition For solar flares, it has been reported that non-thermal electrons alone carry up to 50% of the released magnetic energy <cit.>. In fact, more detailed studies argue that thermal electrons can indeed carry much less energy than non-thermal electrons, even in coronal sources <cit.>. This is in stark contrast to the case of Earth's magnetotail (in particular the reconnection region) where non-thermal electrons appear to carry only a minuscule fraction of released energy <cit.>. While the plasma parameters in the magnetotail differ greatly from those in the solar atmosphere, it is still instructive to know how energy is partitioned between thermal and non-thermal components in the magnetotail. A caveat is that the typical particle energy spectrum in the magnetotail does not exhibit a clear spectral break, and it is difficult to separate those components at a certain energy E_c <cit.>. Fortunately, the energy spectrum is often well approximated by the kappa distribution and the non-thermal fraction of particle energy (and also density) can be calculated analytically without introducing a sharp boundary at E_c <cit.>. Based on the kappa distribution model, it was shown that, for the above-the-looptop (ALT) hard X-ray coronal sources in solar flares, the fraction of non-thermal electron energies was at most ∼50%, indicating equipartition between thermal and non-thermal components <cit.>. Similar values of non-thermal fraction were obtained by self-consistent particle simulations of magnetic reconnection <cit.>, as well as in situ observations of electron energy spectra during magnetotail reconnection <cit.>. A puzzle is that, even when electrons are significantly heated (for example, the event of 2017 July 26, Fig. <ref> right), the non-thermal tail does not necessarily become harder <cit.>. This is counter-intuitive because the non-thermal tail is often expected to be enhanced as the temperature increases. 
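To illustrate why the choice of a boundary energy E_c matters, the short sketch below integrates an isotropic kappa distribution and shows how strongly the inferred `non-thermal' energy fraction depends on where E_c is placed. This is only an illustration of that sensitivity; it is not the E_c-free analytical decomposition used in the studies cited above, and the 1 keV temperature and the E_c values are arbitrary choices made here.

import numpy as np
from scipy.integrate import quad

def kappa_energy_fraction_above(E_c_eV, kappa, T_eV=1000.0):
    """Fraction of the kinetic energy density above E_c for an isotropic,
    non-relativistic kappa distribution with temperature T (the normalization
    cancels in the ratio, so it is omitted)."""
    m, e = 9.109e-31, 1.602e-19
    w2 = (2.0 * kappa - 3.0) / kappa * T_eV * e / m        # kappa thermal speed squared
    f = lambda v: (1.0 + v * v / (kappa * w2)) ** (-(kappa + 1.0))
    dE = lambda v: 0.5 * m * v ** 2 * f(v) * 4.0 * np.pi * v ** 2
    v_c = np.sqrt(2.0 * E_c_eV * e / m)
    return quad(dE, v_c, np.inf)[0] / quad(dE, 0.0, np.inf)[0]

for kappa in (8.0, 5.0, 3.0):
    fracs = [kappa_energy_fraction_above(E_c, kappa) for E_c in (3e3, 5e3, 10e3)]
    print(kappa, [f"{x:.2f}" for x in fracs])
# For a ~1 keV plasma the inferred 'non-thermal' energy fraction varies substantially
# with the choice of E_c (and with kappa), which is why an E_c-free, kappa-based
# definition of the non-thermal component is preferable.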
When electrons are not significantly heated (for example, the event of 2017 July 11, Fig. <ref> left), the non-thermal tail becomes harder (softer) as the spacecraft approaches toward (moves away from) the X-line. The two distinct types of reconnection events, i.e., less heated and much heated events, can be interpreted by the concept of `plasma sheet reconnection' and `lobe plasma reconnection', respectively <cit.>. However, it remains unclear, at least from the observational point of view, what controls the energy partition between thermal and non-thermal components of electrons. One caveat that has to be considered in the magnetotail events is that the particle distribution observed prior to an event is generally not (or not identical to) the source of the population observed afterward, as discussed in Section <ref>. For ions, the energy partition between thermal and non-thermal components is much less studied in the magnetotail, although ions do form a clear power-law tail in the magnetotail <cit.>. Recent particle simulations of magnetic reconnection have shown that ions and electrons form a very similar power-law tail, but non-thermal protons gain ∼ 2 × more energy than non-thermal electrons <cit.>. It was argued that the primary mechanism of acceleration is Fermi acceleration and that the strong field-line chaos associated with the flux-rope kink instability allows particles to be transported out of flux ropes for further acceleration. It is to be noted that energetic ions in the magnetotail especially in the dipolarization region can have multiple sources and thus the process of energy partition might be a little more complex. <cit.> argued that an enhanced flux of energetic ions can result from not only acceleration of thermal ions in the reconnection region but also a drift entry of pre-energized ions from the magnetotail flanks. Observational validation of these scenarios of ion acceleration and associated partition of energy is left for future work. §.§ The precise role of turbulence Many theoretical and simulation studies have shown that the guiding-center approximation is effective in explaining particle acceleration during magnetic reconnection and that particle acceleration is achieved by a Fermi-type mechanism involving curvature drift (Section <ref>). However, turbulence may also play a significant role (Section <ref>), although its importance and specific role in particle acceleration are not fully understood, at least from an observational standpoint. For example, <cit.> argued theoretically that the turbulence with high-frequency electric fields in a magnetic depletion region can energize electrons up to non-thermal energy. <cit.> have demonstrated that the flux-rope kink instability leads to strong field-line chaos, allowing particles to be transported out of flux ropes for further acceleration by other flux ropes. Also, <cit.> have shown that the turbulence-induced electric field at the core of flux ropes can scatter electrons, resulting in heating rather than acceleration to non-thermal energy. The turbulence in these theoretical models has different roles, and such roles have not been fully explored in observational studies. It is to be noted again that an enhanced turbulence may not necessarily lead to an enhanced non-thermal tail in the reconnection region (Sections <ref> and <ref>), although turbulence appears correlated with enhancements of non-thermal tail in the flow braking region <cit.>. 
Also, hard electron spectra have been found even in a quiet-time plasma sheet (Section <ref>), raising a question whether turbulence can confine electrons. After all, what makes reconnection more turbulent and how important turbulence is for the process of particle acceleration remain unsolved. § SUMMARY AND CONCLUSION In the past decade, a key theme of particle acceleration studies was whether the guiding-center approximation can describe particle acceleration and which of the key mechanisms, i.e., Fermi acceleration, betatron acceleration, and the direct acceleration by parallel electric field, is more dominant. The MMS mission has enabled the evaluation of each term and supported the earlier idea that both Fermi and betatron acceleration are important in many cases of electron acceleration during reconnection. In the collapsing region where the intrinsic dipole field becomes more important, the betatron acceleration dominates in the central plasma sheet. While some populations originate from the flank of the magnetotail without much increase in energy, other populations experience energization at localized dipolarization while being transported earthward from the reconnection region. In addition to the Fermi and betatron acceleration, a parallel potential develops near the reconnection X-line and traps incoming electrons, resulting in a significant energization. The electric field associated with turbulence can also accelerate electrons but such process might invalidate the assumption of adiabatic particle motion. Ions are more likely to behave non-adiabatically even near Earth, away from the reconnection region. Outstanding problems remain regarding, for example, energy partition between thermal and non-thermal components and the precise role of turbulence in the particle acceleration process. Solving these problems might be helpful for understanding the particle acceleration mechanism in other plasma environments, such as the solar corona. Acknowledgments We thank Seiji Zenitani for proofreading the manuscript prior to submission. This work was initiated and partly carried out with support from the International Space Science Institute (ISSI) in the framework of a workshop entitled `Magnetic Reconnection: Explosive Energy Conversion in Space Plasmas', led by Rumi Nakamura and James L. Burch. § DECLARATIONS Funding MO was supported by NASA grants 80NSSC18K1002, 80NSSC18K1373, and 80NSSC22K0520 at UC Berkeley. JB acknowledges support from NASA grants 80NSSC18K1452 and 80NSSK0834, and NSF Grant 1602655. FG acknowledges supports in part from NASA grant 80HQTR20T0073, 80HQTR21T0087 and 80HQTR21T0104. YK acknowledges support from the Swedish National Space Agency. Conflict of interest The authors have no conflict of interest to declare that is relevant to the content of this article.
http://arxiv.org/abs/2307.03274v1
20230706202317
It is not Sexually Suggestive, It is Educative. Separating Sex Education from Suggestive Content on TikTok Videos
[ "Enfa George", "Mihai Surdeanu" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.CL", "I.2.10; I.4.9; I.2.7; I.5.4" ]
We introduce SexTok, a multi-modal dataset composed of TikTok videos labeled as sexually suggestive (from the annotator's point of view), sex-educational content, or neither. Such a dataset is necessary to address the challenge of distinguishing between sexually suggestive content and virtual sex education videos on TikTok. Children's exposure to sexually suggestive videos has been shown to have adverse effects on their development <cit.>. Meanwhile, virtual sex education, especially on subjects that are more relevant to the LGBTQIA+ community, is very valuable <cit.>. The platform's current system removes/punishes some of both types of videos, even though they serve different purposes. Our dataset contains video URLs, and the videos are also audio-transcribed. To validate its importance, we explore two transformer-based models for classifying the videos. Our preliminary results suggest that the task of distinguishing between these types of videos is learnable but challenging. These experiments suggest that this dataset is meaningful and invites further study on the subject. § INTRODUCTION In short-form video platforms such as TikTok, accurately identifying sexually suggestive and sex education content amidst a sea of diverse video types poses a significant challenge. In this paper, we delve into this problem, focusing specifically on TikTok, the most downloaded app in 2022, which has a substantial user base of early adolescents and young individuals (10-19: 32.5%, 20-29: 29.5%).[<https://wallaroomedia.com/blog/social-media/tiktok-statistics/>] The distinction between suggestive videos and virtual sex education holds crucial significance on multiple fronts. Adolescent sex education in the United States is delivered in a fragmented and often inadequate system, which has long been the subject of intense criticism and is vulnerable to political influence <cit.>. In this context, TikTok presents a novel and promising avenue for conveying comprehensive and accessible sexual health information to adolescents, offering a convenient, private, and inclusive space for learning and discussion <cit.>. At the same time, children's exposure to sexual media content has been found to influence attitudes and contribute to the formation of adversarial sexual beliefs <cit.>. Unfortunately, efforts to moderate explicit content have had unintended consequences, as studies have demonstrated the misidentification of non-explicit content due to flawed algorithms and filtering techniques. In addition to the above issue, videos and video creators (referred to as creators from now on) may also be susceptible to mass reporting. Creators from marginalized communities, particularly those within the LGBTQIA+ community, face heightened risks of having their educational content wrongfully flagged or removed [https://mashable.com/article/tiktok-sex-education-content-removal]. The classification of sexually suggestive and sex education videos presents a complex task, as demonstrated by the examples shown in Table <ref>. In example 1, we see that a p*n*s illustration is not suggestive, while the video with a man holding a pumpkin in example 2 is suggestive.
When we look at the transcripts, we see that in example 3, the creator is talking about myths around p*n*s sizes for pleasurable sex, and in example 4, the audio is suggestive. Considering these complexities, accurately categorizing sexually suggestive and sex education videos necessitates a nuanced understanding of contextual cues, subjectivity, and evolving language, as well as robust algorithmic solutions. The contributions of the paper are as follows: * Introduction of SexTok: A collection of 1000 TikTok videos labeled as Sexually Suggestive, Sex Education, or Others, along with perceived gender expression and transcription. * Baseline Evaluation: We evaluate two transformer-based classifiers as baselines for the task of classifying these videos. Our results indicate that accurately distinguishing between these video types is a learnable yet challenging task. §.§ Trigger Warning: Sexual Content and Explicit Language Please be advised that this research paper and its associated content discuss and analyze sexually suggestive and sex education videos. The examples and discussions within this paper may contain explicit or implicit references to sexual acts, body parts, and related topics. The language used may sometimes be explicit. This material is intended for academic and research purposes and is presented to address challenges in content identification and classification. § RELATED WORK Automatic detection of sexually explicit videos is an area of active study. A recent survey classified existing methods into four broad strategies: nudity detection, analysis of image descriptors (such as Bag of Visual Words), motion analysis, and other deep learning techniques. Most works around nudity detection focus on skin-colored region segmentation to identify nudity. This methodology has been extensively explored in the image domain <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. <cit.>'s work, apart from focusing on the percentage of skin exposure, also gave attention to the body posture of the human in the image and the person's gestures and facial expressions. An alternative strategy is the Bag of Visual Words model, in which the idea is to minimize the existing semantic gap between the low-level visual features and the high-level concepts about pornography <cit.>, <cit.>, <cit.>, <cit.>. Approaches based on motion analysis also exploit temporal features, such as the periodicity of motion, as in <cit.>. <cit.> uses a Gaussian mixture model (GMM) to recognize pornographic sounds and a contour-based image recognition algorithm to detect pornographic imagery, which are combined for the final decision. Still, sexual activity in which the person is mostly clothed or moves minimally remains challenging to detect. Prior work studied issues surrounding publicly deployed moderation techniques and called for reconsidering how platforms approach this area, especially due to their high false-positive rates and/or low precision for certain types of actions. § SEXTOK DATASET This section presents the SexTok dataset [Data and the experiment codebase will be shared at github.com/enfageorge/SexTok. Videos are shared as links to avoid any potential licensing issues.], a collection of 1000 TikTok video links accompanied by three key features: Class Label, Gender Expression, and Audio Transcriptions.
§.§ Terminology and Definitions §.§.§ Class Label The first feature, Class Label, is a categorical variable with three possible values: Sexually Suggestive, Sex Education, and Others. Sexually Suggestive: This category encompasses videos that purposefully intend to elicit a sexual response from viewers. Determining the presence of sexually suggestive content is subjective. Sex Education: This category encompasses videos aimed at enhancing viewers' knowledge, skills, and attitudes concerning sexual and reproductive health. It covers various topics, including but not limited to sexual orientation, gender, and gender-affirming care. Others: This category encompasses videos that do not fall within the aforementioned sexually suggestive or sex education categories. §.§.§ Gender Expression Gender expression is a form of self-expression that refers to how people may express their gender identity <cit.>. In this paper, we focus solely on the physical visual cues associated with gender expression. We provide five gender expression labels in the dataset: Feminine, Masculine, Non-conforming, Diverse, and None. Feminine and Masculine represent predominantly feminine or masculine expressions, while Non-conforming refers to expressions that deviate from traditional norms. Diverse applies to videos with varying gender expressions among multiple individuals. The None label is for videos without people or with only limited visual cues, such as hands. For the vast majority of videos, this information is not self-reported. When it is available through the video itself, profile descriptions, or hashtags, we incorporate that information. Otherwise, the annotation is based on the perception of the annotator. This feature is provided only to serve the purpose of evaluating bias in models built on the dataset. §.§ Dataset Construction §.§.§ Data Collection The data collection process involved the primary annotator creating a new TikTok account and interacting with the platform in various ways to collect the video links. They carefully watched and hand-selected videos. Two important considerations were taken into account during the dataset construction process: (a) A maximum of five videos per creator was allowed in the dataset. (b) Creators appearing in one split of the dataset (train, validation, or test) were excluded from all other splits to ensure independence and prevent data leakage. Detailed information regarding the specific methods used, as well as limitations and ethical considerations, can be found in Appendix <ref>. §.§.§ Annotator Agreement A 10% sample of the dataset was independently annotated by a second author to ensure reliability. Cohen's Kappa scores <cit.> were used to assess annotator agreement. For Gender Expression, the Kappa score was 0.89, indicating substantial agreement. For Class Label, the Kappa score was 0.93, indicating high agreement. These scores validate the consistency and quality of the dataset's annotations. §.§.§ Data Processing: Video Download and Audio Transcription The videos were downloaded without the TikTok watermark using a TikTok downloader[<https://github.com/anga83/tiktok-downloader>]. The watermark was removed to reduce unnecessary noise in the data. A smaller sample of videos was first transcribed using OpenAI's Whisper (medium) <cit.> and manually checked for accuracy. The transcriptions were mostly accurate, with a word error rate of 1.79%. After this, all the videos were automatically transcribed using OpenAI's Whisper (medium).
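For reference, the two processing steps just described (automatic transcription and annotator-agreement scoring) can be reproduced with standard tooling, as in the sketch below; the file path and label lists are hypothetical placeholders, not part of the released dataset.

import whisper
from sklearn.metrics import cohen_kappa_score

# Transcribe a downloaded clip with the medium Whisper model (as in the paper).
model = whisper.load_model("medium")
result = model.transcribe("videos/example_clip.mp4")   # hypothetical path
print(result["text"])

# Annotator agreement on the doubly-annotated 10% sample (hypothetical labels).
annotator_1 = ["Suggestive", "Educative", "Others", "Educative"]
annotator_2 = ["Suggestive", "Educative", "Others", "Suggestive"]
print(cohen_kappa_score(annotator_1, annotator_2))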
§.§ Dataset Properties In this section, we provide some general statistics about the SexTok dataset. The dataset comprises 1000 TikTok video links with three features: Class Label, Gender Expression, and Audio Transcriptions. A breakdown by label and dataset split is given in Table 1, and a separate breakdown by Gender Expression and dataset split is given in Table 2. When the audio was transcribed, a fraction of the videos were found to have no text in the audio transcription: 15.85% of Suggestive, 3.97% of Educative, and 8.4% of Others videos. We also observe that suggestive videos tend to be shorter (median duration: 7.86 secs) and have shorter audio transcriptions (median: 14 words), compared to educative videos, which are longer (median duration: 50.80 secs) and have longer audio transcriptions (median: 171.5 words). Detailed statistics on video length and transcription length are given in Appendix <ref>. § EXPERIMENTAL SETUPS In this section, we evaluate the performance of pre-trained transformer-based models on the SexTok dataset to assess its significance. The experiments are divided into two subsections: text classification using video transcripts and video classification. For both transformer-based setups, we utilized models downloaded from Hugging Face Transformers <cit.>, initializing them with three random seeds. Details on hyperparameters are in Appendix <ref>. The reported results are the average of three runs. To assess the performance, we employed four sets of metrics: (1) accuracy, (2) micro precision, recall, and F1 (treating Others as the negative class and excluding it from the scores), (3) macro precision, recall, and F1, and (4) overall F1 for each class. §.§ Text Classification using Video Transcript We fine-tuned bert-base-multilingual-cased <cit.> to perform text classification on the video transcripts. Since we observed that a small percentage of videos do not yield any text in their transcription, we experimented with two setups: one with all video transcriptions and the other with only non-empty transcriptions (a minimal sketch of this fine-tuning setup is given after the results below). §.§ Video Classification We fine-tuned MCG-NJU/videomae-base, a VideoMAE base model <cit.>, for video classification. The video clips were randomly sampled and preprocessed to align with the default configurations of the model. § RESULTS AND ERROR ANALYSIS The average performance and standard deviation of the models are presented in Tables <ref> and <ref>. Based on these results, we draw the following observations: * The most accurate model is the text classifier that evaluated videos with a transcription (75%). It demonstrates relatively better performance in identifying educative content but often struggles to differentiate between suggestive content and others, and vice versa. However, it should be noted that this implementation is not realistic in a real-world scenario, as TikTok videos can vary in terms of sound presence and spoken language. * Both text-based classifiers exhibit higher F1 scores than the video classifier for the Educative and Others classes, but their performance in detecting suggestive content is comparatively lower than that of the video classifier. * Notably, neither of the text-based classifiers misclassifies suggestive content as educative, or vice versa, as evident from the confusion matrices in Appendix <ref>. * The video classifier achieves the highest F1 score for the Suggestive class. However, it frequently confuses Educative and Others videos with each other.
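As referenced above, the text-classification setup can be sketched with the Hugging Face Trainer API roughly as follows. The CSV file names, column names, and hyperparameters are illustrative assumptions and do not reproduce the exact configuration reported in the appendix.

from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)
import datasets

# Hypothetical CSV files with columns "transcript" and "label" (0/1/2); the real
# dataset splits and hyperparameters differ from this illustration.
data = datasets.load_dataset("csv", data_files={"train": "train.csv",
                                                "validation": "val.csv"})
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=3)

def tokenize(batch):
    return tokenizer(batch["transcript"], truncation=True, padding="max_length")

data = data.map(tokenize, batched=True)
args = TrainingArguments(output_dir="sextok-text", num_train_epochs=3,
                         per_device_train_batch_size=16, seed=42)
Trainer(model=model, args=args, train_dataset=data["train"],
        eval_dataset=data["validation"]).train()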
To further understand the hard examples for the model, we manually categorized the errors in both text and video classification experiment setups. We analysed 54 errors in text classification model. If more than one option was applicable, the video was counted in both: (a) Audio unrelated to class label (50.00%): The audio in these videos consisted of popular songs or speeches that did not contain any words typically associated with the class label. (b) Context clues and Euphemism (25.07%) : These videos relied on context clues or employed euphemistic language (9.26%) or required audio analysis considering the tone and intonation to predict the class label (14.81%). (c) No or partial transcription (14.81%): Approximately 9.26% of the videos had no audio that could be transcribed, while 5.56% had only partial transcriptions available. We analyzed 52 errors in video classification. All educative videos that were classified as others, and vice versa, had the same format that both classes do, i.e., a person looking at the camera speaking. Of the 11 suggestive videos that were not classified correctly, in 63% of videos, some or all of the video frames had fully or mostly clothed people featured in the video. A detailed analysis using Transformers-interpret <ref> <cit.> also shows that the text classification shows some signs of overfitting to text. § DISCUSSION The results highlight the complexity of accurately identifying sexually suggestive and educative videos on platforms like TikTok. While the results indicate that text analysis can contribute to detecting educative videos, music clips unrelated to the video topic are commonly used, making reliance on transcription alone insufficient. While existing work in pornographic content detection primarily focuses on visual analysis, our results indicate the need for a multi-modal approach since detecting sexual content requires a more comprehensive understanding encompassing multiple senses, including audio, speech, and text. Addressing these challenges is crucial for developing effective content moderation systems, ensuring appropriate access to sex education, and creating a safer and more inclusive online environment. It is also crucial to be mindful of potential gender expression bias commonly found in visual datasets <cit.>. Moreover, for tasks like this, developing scalable solutions suitable for large-scale systems with millions of users is crucial for effective implementation. Further exploration and investigation of these aspects are left for future research and development. § CONCLUSION This paper introduces a novel task of identifying sexually suggestive and sex-educative videos and presents SexTok, a multi-modal dataset for this purpose. The dataset includes video links labeled for sexual suggestiveness, sex-educational content, and an other category, along with gender expression and audio transcription. The results highlight the challenging and multi-modal nature of the task and suggest that while the dataset is meaningful and the task is learnable, it remains a challenging problem that deserves future research. This work contributes to promoting online safety and a balanced digital environment. § ACKNOWLEDGEMENT This work was partially funded by the LGBTQ+ Grad Student Research Funds by The Institute for LGBTQ Studies at the University Of Arizona. We deeply appreciate the invaluable contributions of Shreya Nupur Shakya throughout this work. 
§ LIMITATIONS We address the limitations of the SexTok dataset and the accompanying experiments here. §.§ SexTok Dataset * The TikTok account was created and used from a specific geographic location (which will be disclosed in the final version if accepted). This is important to note since the content recommendation of TikTok is influenced by geographic location,[<https://support.tiktok.com/en/account-and-privacy/account-privacy-settings/location-services-on-tiktok>] among other things; hence a geographic bias may be expected, i.e., certain demographics may be more represented than others, especially in terms of languages used, race, ethnicity, etc. * The data gathered only represents a small sample of the content available on TikTok and may not represent the entire population of TikTok users or videos. * Sexual suggestiveness is treated as a discrete class label in the project, whereas in the real world, it has two important properties. 1) The perception of what is sexually suggestive may vary depending on the individual's sexual orientation, worldview, culture, location, and experiences and is highly subjective. 2) Some are more suggestive than others, and we do not account for the variation in the strength of suggestiveness here. * The dataset is a small snapshot of the TikTok videos from October 2022 to January 2023. Patterns, slang, and other cues may change over time. * Gender expression has many variations but is referred to as discrete labels here, but in real life, it is not. Additionally, this is as perceived by one annotator and, for the majority, not self-reported by the person in the video. Additional expert annotators may be needed to strengthen the confidence in the label. * Despite best efforts, it may be possible that the same creator appears more than five times. This is because creators often create multiple accounts to serve as a backup in case TikTok takes down the original account. This is observed to be increasingly common in the sexually-suggestive and sex-ed domains. We show an example in Figure <ref> Other details : The audio content of the TikTok videos comprises various elements, including background music, spoken dialogue (not necessarily from the video creator), or a combination of both. Notably, TikTok provides voice effects that enable users to modify their voices using predefined options. §.§ Experiments * The audio transcription of the videos was created automatically using Open AI's Whisper-medium <cit.>. Hence this is subject to errors, which may impact the performance of the models. * For training the models, GPU computing power was used. § ETHICS STATEMENT We address the ethical considerations and consequences of the SexTok dataset and the accompanying experiments here. * The study's focus is on the technical aspects of the problem. It does not address the broader societal and ethical implications of censorship and of regulating sexually suggestive content on social media platforms. The work only aims to detect sexually suggestive content and sex education content against other video topics but makes no stand on censorship or content regulation of sexually suggestive videos. * Sexual suggestiveness, as well as perceived gender expression, is a subjective matter and is hence susceptible to annotators' bias. * Gender expression, specifically visual cues only, was annotated and offered only to evaluate bias based on visual cues since such biases are known to exist within large-scale visual datasets <cit.>. 
The authors do not condone the practice of assigning gender identity based on a person's external appearance since gender is an internal sense of identity <cit.>. This dataset is not intended to be used for any such practices. * Due to the nature of the problem, and potential licensing issues, the publicly-collected data is not anonymized. acl_natbib § DETAILS OF METHODS USED TO COLLECT VIDEOS For sexually suggestive and sex education videos, the annotator interacted with the platform to collect the data in many ways, including search (hashtags, names of people), people reusing the same audio, stitches, duets, the public liked videos of certain profiles pages and the “For you” page. Any video that did not appear to belong to either sexually suggestive or sex education was collected and labeled as Others. §.§ Sexually Suggestive and Sex ed Videos Videos * Search : Hashtags ( including slang usages like #spicyaccountant), Phrases, and Names of popular creators in a domain (discovered through blogs that talk on the subject). * Audio Sharing: TikTok offers multiple people to share and reuse the same audio. So, when a video is found to be, say, sexually suggestive, new creators were discovered by looking into who else used this audio for their video. * Stitches and Duets: A Duet allows one creator to post their video side-by-side with a video from another creator on TikTok. A duet contains two videos on a split screen that play at the same time.A Stitch is a creation tool on Tiktok that allows a creator to combine another video on TikTok with the one they are creating. Certain videos added in the dataset were discovered as stitches or duets with another creator. * Public liked videos: It is possible to see all videos a certain profile likes by visiting that tab on their profile. By default, this is private but can be set to public. Some profiles share videos of a topic by redirecting visitors to their liked videos. Many videos were found and added to the dataset through this method. * "For you" Page: It's a recommended feed of videos from creators the user might not follow. The annotator liked and saved videos of sexually suggestive nature, so some similar videos were recommended on the For you Page. §.§ Other Videos There are three main strategies for collecting these videos. * Videos that appeared on the TikTok home page when no user was logged in * Videos shared with #learnontiktok hashtag * Videos that reused audio that was also used in a sexually suggestive video. Each makes up one-third of the total videos collected. § DETAILED STATS FOR TRANSCRIPT LENGTH AND VIDEO LENGTH § HYPERPARAMETERS Hyperparameters not mentioned below, are default values from Huggingface. § TRANSFORMER INTERPRET Refer to Figure 3 on the next page.
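The attribution analysis shown in Figure 3 can be reproduced with the transformers-interpret package along the following lines; this is a minimal sketch, and the model path and example text are placeholders.

```python
# Minimal sketch of the attribution analysis behind Figure 3; the model path
# and example transcript below are placeholders, not artifacts of this work.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import SequenceClassificationExplainer

model = AutoModelForSequenceClassification.from_pretrained("path/to/finetuned-bert")
tokenizer = AutoTokenizer.from_pretrained("path/to/finetuned-bert")

explainer = SequenceClassificationExplainer(model, tokenizer)
word_attributions = explainer("example transcript text to inspect")

print(explainer.predicted_class_name)
print(word_attributions)                  # list of (token, attribution) pairs
explainer.visualize("attributions.html")  # saves an HTML visualization
```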
http://arxiv.org/abs/2307.00949v1
20230703114529
Greedy Minimum-Energy Scheduling
[ "Gunther Bidlingmaier" ]
cs.DS
[ "cs.DS", "68", "F.2.2" ]
We consider the problem of energy-efficient scheduling across multiple processors with a power-down mechanism. In this setting a set of n jobs with individual release times, deadlines, and processing volumes must be scheduled across m parallel processors while minimizing the consumed energy. Idle processors can be turned off to save energy, while turning them on requires a fixed amount of energy. For the special case of a single processor, the greedy Left-to-Right algorithm <cit.> guarantees an approximation factor of 2. We generalize this simple greedy policy to the case of m ≥ 1 processors running in parallel and show that the energy costs are still bounded by 2 OPT + P, where OPT is the energy consumed by an optimal solution and P < OPT is the total processing volume. Our algorithm has a running time of 𝒪(n f log d), where d is the difference between the latest deadline and the earliest release time, and f is the running time of a maximum flow calculation in a network of 𝒪(n) nodes.
§ INTRODUCTION
Energy efficiency has become a major concern in most areas of computing, for reasons that go beyond the apparent ecological ones. At the hardware level, excessive heat generation from power consumption has become one of the bottlenecks. For the billions of mobile battery-powered devices, power consumption determines the length of operation and hence their usefulness. On the level of data centers, electricity is often the largest cost factor and cooling one of the major design constraints. Algorithmic techniques for saving power in computing environments employ two fundamental mechanisms: first, the option to power down idle devices, and second, the option to trade performance for energy efficiency by speed-scaling processors. In this paper we study the former, namely classical deadline-based scheduling of jobs on parallel machines which can be powered down, with the goal of minimizing the consumed energy. In our setting, a computing device or processor has two possible states: it can be either on or off. If a processor is on, it can perform computations while consuming energy at a fixed rate. If a processor is off, the energy consumed is negligible, but it cannot perform computation. Turning on a processor, i.e. transitioning it from the off-state to the on-state, consumes additional energy. The problem we have to solve is to schedule a number of jobs or tasks, each with its own processing volume and interval during which it has to be executed. The goal is to complete every job within its execution interval using a limited number of processors while carefully planning idle times for powering off processors such that the consumed energy is minimized. Intuitively, one aims for long but few idle intervals, so that the energy required for transitioning between the states is low, while avoiding that turned-on processors remain idle for too long. Previous work This fundamental problem in power management was first considered by <cit.> for a single processor. In their paper, they devise arguably the simplest algorithm one can think of which goes beyond mere feasibility. Their greedy algorithm Left-to-Right (LTR) is a 2-approximation and proceeds as follows. If the processor is currently busy, i.e.
working on a job, then greedily keeps the processor busy for as long as possible, always working on the released job with the earliest deadline. Once there are no more released jobs to be worked on, the processor becomes idle and greedily keeps the processor idle for as long as possible such that all remaining jobs can still be feasibly completed. At this point, the processor becomes busy again and proceeds recursively until all jobs are completed. The first optimal result for the case of a single processor and jobs with unit processing volume was developed by <cit.>. He devised a dynamic program that runs in time 𝒪(n^7), where n denotes the number of jobs to be scheduled. Building on this result, <cit.> solved the case of general processing volumes on a single processor in time 𝒪(n^5). Their sophisticated algorithm involves the computation of multiple dynamic programming tables, the introduction of a special method for speeding up the computation of these tables, and a final post-processing phase. The first result for an arbitrary number of processors m was given by <cit.> for the special case of unit processing volumes. They solved this special case in time 𝒪(n^7m^5) by building on the original dynamic programming approach of <cit.> while non-trivially obtaining additional structure. Obtaining good solutions for general job weights is difficult because of the additional constraint that every job can be worked on by at most a single processor at the same time. Note that this is not an additional restriction for the former special case of unit processing volumes since time is discrete in our problem setting. It is a major open problem whether the general multi-processor setting is NP-hard. It took further thirteen years for the first non-trivial result on the general setting to be be developed, i.e. an algorithm for the case of multiple processors and general processing volumes of jobs. In their breakthrough paper, <cit.> develop the first constant-factor approximation for the problem. Their algorithm guarantees an approximation factor of 3 + ϵ and builds on the Linear Programming relaxation of a corresponding Integer Program. Their algorithm obtains a possibly infeasible integer solution by building the convex hull of the corresponding fractional solution. Since this integer solution might not schedule all jobs, they develop an additional extension algorithm EXT-ALG, which iteratively extends the intervals returned by the rounding procedure by a time slot at which an additional turned on processor allows for an additional unit of processing volume to be scheduled. <cit.> improve this approximation factor to 2 + ϵ by incorporating into the Linear Program additional constraints for the number of processors required during every possible time interval. They also modify the rounding procedure based on their concept of a multi-processor skeleton. Very roughly, a skeleton is a stripped-down schedule which still guarantees a number of processors in the on-state during specific intervals and which provides a lower bound for the costs of an optimal feasible schedule. Building on this concept of skeletons, they also develop a combinatorial 6-approximation for the problem. This algorithm first computes the lower bounds for the number of processors required in every possible time interval starting and ending at a release time or deadline using flow calculations. Based on these bounds, they define for every processor a single-processor scheduling problem with 𝒪(n^2) artificial jobs. 
For each of these single-processor problems they construct a single-processor skeleton using dynamic programming. These in turn are then combined into a multi-processor skeleton, which is extended into a feasible schedule by first executing EXT-ALG, and then carefully powering on additional processors since EXT-ALG is not sufficient for ensuring feasibility here. As presented in the papers, both Linear Programs of <cit.> and <cit.>, respectively, run in pseudo-polynomial time. By using techniques presented in <cit.>, the number of time slots which have to be considered can be reduced from d to 𝒪(n log d), allowing the algorithms to run in polynomial time. More specifically, the number of constraints and variables of the Linear Programs reduces to 𝒪(n^2 log^2 d). However, this improved running time comes at the price of the additive ϵ in the approximation factors of the two LP-based algorithms. The running time of the EXT-ALG used by all three approximation algorithms is reduced to 𝒪(F m n^3 log^3 d), where F refers to a maximum flow calculation in a network with 𝒪(n^2 log d) edges and 𝒪(n log d) nodes. Contribution In this paper we develop a greedy algorithm which is simpler and faster than the previous algorithms. The initially described greedy algorithm Left-to-Right of <cit.> is arguably the simplest algorithm one can think of for a single processor. We naturally extend to multiple processors and show that this generalization still guarantees a solution of costs at most 2 + P, where P < is the total processing volume. Our simple greedy algorithm Parallel Left-to-Right () is the combinatorial algorithm with the best approximation guarantee and does not rely on Linear Programming and the necessary rounding procedures of <cit.> and <cit.>. It also does not require the EXT-ALG, which all previous algorithms rely on to make their infeasible solutions feasible in an additional phase. Indeed, only relies on the original greedy policy of Left-to-Right: just keep processors in their current state (busy or idle) for as long as feasibly possible. For a single processor, ensures feasibility by scheduling jobs according to the policy Earliest-Deadline-First (EDF). For checking feasibility if multiple processors are available, a maximum flow calculation is required since EDF is not sufficient anymore. Correspondingly, our generalization uses such a flow calculation for checking feasibility. While the algorithm we describe in Section <ref> is very simple, the structure exhibited by the resulting schedules is surprisingly rich. This structure consists of critical sets of time slots during which only schedules the minimum amount of volume which is feasibly possible. In Section <ref> we show that whenever requires an additional processor to become busy at some time slot t, there must exist a critical set of time slots containing t. This in turn gives a lower bound for the number of busy processors required by any solution. Devising an approximation guarantee from this structure is however highly non-trivial and much more involved than the approximation proof of the single-processor algorithm, because one has to deal with sets of time slots and not just intervals. Our main contribution in terms of techniques is a complex procedure which (for the sake of the analysis only) carefully realigns the jobs scheduled in between critical sets of time slots such that it is sufficient to consider intervals as in the single processor case, see Section <ref> for details. 
Finally, we show in Section <ref> that the simplicity of the greedy policy also leads to a much faster algorithm than the previous ones, namely to a running time of 𝒪(n f log d), where d is the maximal deadline and f is the running time for checking feasibility by finding a maximum flow in a network with 𝒪(n) nodes. Formal Problem Statement Formally, a problem instance consists of a set J of jobs with an integer release time r_j, deadline d_j, and processing volume p_j for every job j ∈ J. Each job j ∈ J has to be scheduled across m ≥ 1 processors for p_j units of time in the execution interval E_j ≔ [r_j, d_j] between its release time and its deadline. Preemption of jobs and migration between processors are allowed at discrete times and occur without delay, but no more than one processor may process any given job at the same time. Without loss of generality, we assume the earliest release time to be 0 and denote the last deadline by d. The set of discrete time slots is denoted by T ≔ {0, …, d}. The total amount of processing volume is P ≔ ∑_{j ∈ J} p_j. Every processor is either completely off or completely on in every discrete time slot t ∈ T. A processor can only work on some job in time slot t if it is in the on-state. A processor can be turned on and off at discrete times without delay. All processors start in the off-state. The objective now is to find a feasible schedule which minimizes the expended energy E, which is defined as follows. Each processor consumes 1 unit of energy for every time slot it is in the on-state and 0 units of energy if it is in the off-state. Turning a processor on consumes a constant amount of energy q ≥ 0, which is fixed by the problem instance. In Graham's notation <cit.>, this setting can be denoted by m | r_j; d_j; pmtn | E. Costs of busy and idle intervals We say a processor is busy at time t ∈ T if some job is scheduled for this processor at time t. Otherwise, the processor is idle. Clearly a processor cannot be busy and off at the same time. An interval I ⊆ T is a (full) busy interval for processor k ∈ [m] if I is inclusion maximal on condition that processor k is busy in every t ∈ I. Correspondingly, an interval I ⊆ T is a partial busy interval for processor k if I is not inclusion maximal on condition that processor k is busy in every t ∈ I. We define (partial and full) idle intervals, on intervals, and off intervals of a processor analogously via inclusion maximality. Observe that if a processor is idle for more than q units of time, it is worth turning the processor off during the corresponding idle interval. Our algorithm will specify for each processor when it is busy and when it is idle. Each processor is then defined to be in the off-state during idle intervals of length greater than q and otherwise in the on-state. Accordingly, we can express the costs of a schedule S in terms of busy and idle intervals. For a multi-processor schedule S, let S^k denote the schedule of processor k. Furthermore, for fixed k, let 𝒩, ℱ, ℬ, ℐ be the sets of on, off, busy, and idle intervals of S^k. We partition the costs of processor k into the costs on(S^k) for residing in the on-state and the costs wake(S^k) for transitioning between the off-state and the on-state, hence cost(S^k) = on(S^k) + wake(S^k) = ∑_{N ∈ 𝒩} (|N| + q). Equivalently, we partition the costs of processor k into the costs idle(S^k) ≔ ∑_{I ∈ ℐ} min{ |I|, q } for being idle and the costs busy(S^k) ≔ ∑_{B ∈ ℬ} |B| for being busy. The total costs of a schedule S are the total costs across all processors, i.e. cost(S) = ∑_{k=1}^{m} cost(S^k).
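As a small illustration of this idle/busy decomposition, the per-processor cost can be evaluated as follows; this is an illustrative helper only (the initial power-up and any leading or trailing off period are glossed over here).

```python
from typing import Set

def processor_cost(busy_slots: Set[int], q: int) -> int:
    """Illustrative helper (not from the paper): evaluates the idle/busy cost
    decomposition above for one processor, given the set of time slots in
    which it is busy.  Busy slots contribute 1 each; idle intervals strictly
    between busy slots contribute min(|I|, q).  The initial power-up and any
    leading/trailing off period are ignored for simplicity."""
    if not busy_slots:
        return 0
    cost_busy = len(busy_slots)
    cost_idle = 0
    slots = sorted(busy_slots)
    for prev, nxt in zip(slots, slots[1:]):
        gap = nxt - prev - 1              # length of the idle interval between
        if gap > 0:                       # two consecutive busy slots
            cost_idle += min(gap, q)      # stay on if the gap is short (< q),
    return cost_busy + cost_idle          # otherwise power off and pay q later
```

For instance, with q = 3 and busy slots {0, 1, 5, 6}, the helper returns 4 + min(3, 3) = 7.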
Clearly we have ∑_k = 1^m (k) = P, this means for an approximation guarantee the critical part is bounding the idle costs. Lower and upper bounds for the number of busy processors We specify a generalization of our problem which we call deadline-scheduling-with-processor-bounds. Where in the original problem, for each time slot t, between 0 and m processors were allowed to be working on jobs, i.e. being busy, we now specify a lower bound l_t ≥ 0 and an upper bound m_t ≤ m. For a feasible solution to deadline-scheduling-with-processor-bounds, we require that in every time slot t, the number of busy processors, which we denote with (t), lies within the lower and upper bounds, i.e. l_t ≤ v(t) ≤ m_t. This will allow us to express the greedy policy of keeping processors idle or busy, respectively. Note that this generalizes the problem deadline-scheduling-on-intervals introduced by <cit.> by additionally introducing lower bounds. Properties of an optimal schedule Given some arbitrary but fixed order on the number of processors, a schedule S fulfills the stair-property if it uses the lower numbered processors first, i.e. for every t ∈ T, if processor k ∈ [m] is busy at t, then every processor k' ≤ k is busy at t. This symmetrically implies that if processor k ∈ [m] is idle at t, then every processor k' ≥ k is idle at t. lemmalemmastairproperty For every problem instance we can assume the existence of an optimal schedule S_ which fulfills the stair-property. § ALGORITHM The Parallel Left-to-Right () algorithm shown in Algorithm <ref> iterates through the processors in some arbitrary but fixed order and keeps the current processor idle for as long as possible such that the scheduling instance remains feasible. Once the current processor cannot be kept idle for any longer, it becomes busy and keeps it and all lower-numbered processors busy for as long as possible while again maintaining feasibility. The algorithm enforces these restrictions on the busy processors by iteratively making the lower and upper bounds l_t, m_t of the corresponding instance of deadline-scheduling-with-processor-bounds more restrictive. Visually, when considering the time slots on an axis from left to right and when stacking the schedules of the individual processors on top of each other, this generalization of the single processor Left-to-Right algorithm hence proceeds Top-Left-to-Bottom-Right. Once returns with the corresponding tight upper and lower bounds m_t, l_t, an actual schedule S_ can easily be constructed by running the flow-calculation used for the feasibility check depicted in Figure <ref> or just taking the result of the last flow-calculation performed during . The mapping from this flow to an actual assignment of jobs to processors and time slots can then be defined as described in Lemma <ref>, which also ensures that the resulting schedule fulfills the stair-property from Definition <ref>, i.e. that it always uses the lower-numbered processors first. As stated in Lemma <ref>, the check for feasibility in subroutines and can be performed by calculating a maximum α-ω flow in the flow network given in Figure <ref> with a node u_j for every job j ∈ J and a node v_t for every time slot t ∈ T including the corresponding incoming and outgoing edges. lemmalemmaflowfeasibility There exists a feasible solution to an instance of deadline-scheduling-with-processor-bounds l_t, m_t if and only if the maximum α-ω flow in the corresponding flow network depicted in Figure <ref> has value P. 
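To make the algorithm and its feasibility test concrete, the following sketch implements the flow-based check of the lemma above together with the greedy extension of the bounds l_t, m_t. It is an illustration under stated assumptions, not a reproduction of Algorithm <ref>: the network follows the construction of Figure <ref>, networkx is used for the maximum-flow computation, and the processor order (highest index first) and the binary-search extension reflect our reading of the Top-Left-to-Bottom-Right description.

```python
import networkx as nx

def is_feasible(jobs, lower, upper):
    """Feasibility of deadline-scheduling-with-processor-bounds via max flow
    (cf. the flow network described above): alpha -> u_j with capacity p_j,
    u_j -> v_t with capacity 1 for t in E_j, v_t -> omega with capacity l_t,
    v_t -> gamma with capacity m_t - l_t, gamma -> omega with capacity
    P - sum_t l_t.  Feasible iff the maximum alpha-omega flow has value P.
    `jobs` is a list of (r_j, d_j, p_j) triples; E_j = {r_j, ..., d_j}."""
    P = sum(p for _, _, p in jobs)
    L = sum(lower)
    if L > P:                      # lower bounds exceed the total volume
        return False
    G = nx.DiGraph()
    G.add_edge("gamma", "omega", capacity=P - L)
    for t, (l, u) in enumerate(zip(lower, upper)):
        G.add_edge(("v", t), "omega", capacity=l)
        G.add_edge(("v", t), "gamma", capacity=u - l)
    for j, (r, dl, p) in enumerate(jobs):
        G.add_edge("alpha", ("u", j), capacity=p)
        for t in range(r, dl + 1):
            G.add_edge(("u", j), ("v", t), capacity=1)
    value, _ = nx.maximum_flow(G, "alpha", "omega")
    return value == P

def pltr_bounds(jobs, m):
    """Sketch of the greedy PLTR policy on the bound arrays l_t, m_t.
    Processors are treated from the highest index down (our reading of the
    Top-Left-to-Bottom-Right description); each is kept idle, then busy
    together with all lower-numbered processors, for as long as the instance
    stays feasible.  Assumes the input instance itself is feasible."""
    d = max(dl for _, dl, _ in jobs)
    lower = [0] * (d + 1)
    upper = [m] * (d + 1)

    def extend(k, start, idle):
        # Largest e in [start-1, d] such that tightening the bounds on
        # [start, e] keeps the instance feasible (monotone, so binary search).
        def tightened(e):
            l, u = lower[:], upper[:]
            for t in range(start, e + 1):
                if idle:
                    u[t] = min(u[t], k - 1)   # processor k stays idle at t
                else:
                    l[t] = max(l[t], k)       # processors 1..k are busy at t
            return l, u
        lo, hi = start - 1, d
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if is_feasible(jobs, *tightened(mid)):
                lo = mid
            else:
                hi = mid - 1
        lower[:], upper[:] = tightened(lo)    # commit the extension
        return lo

    for k in range(m, 0, -1):                 # top processor first
        t = 0
        while t <= d:
            t = extend(k, t, idle=True) + 1   # keep processor k idle ...
            if t > d:
                break
            t = extend(k, t, idle=False) + 1  # ... then keep 1..k busy
    return lower, upper
```

Binary search is valid here because tightening the bounds over a longer prefix is only more restrictive, so feasibility is monotone in the extension length.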
Given a feasible problem instance, algorithm constructs a feasible schedule. By definition of subroutines and , only modifies the upper and lower bounds m_t, l_t for the number of busy processors such that the resulting instance of deadline-scheduling-with-processor-bounds remains feasible. The correctness of the algorithm then follows from the correctness of the flow-calculation for checking feasibility, which is implied by Lemma <ref>. § STRUCTURE OF THE PLTR-SCHEDULE §.§ Types of Volume For a schedule S, a job j ∈ J, and a set Q ⊆ T of time slots, we define * the volume _S(j, Q) as the number of time slots of Q for which j is scheduled by S, * the forced volume (j, Q) as the minimum number of time slots of Q for which j has to be scheduled in every feasible schedule, i.e. (j, Q) max{0; p_j - |E_j ∖ Q|}, * the unnecessary volume _S(j, Q) as the amount of volume which does not have to scheduled during Q, i.e. _S(j, Q) _S(j, Q) - (j,Q), * the possible volume (j, Q) as the maximum amount of volume which j can be feasibly scheduled in Q, i.e. (j, Q) min{ p_j, | E_j ∩ Q | }. Since the corresponding schedule S will always be clear from context, we omit the subscript for and . We extend our volume definitions to sets J' ⊆ J of jobs by summing over all j ∈ J', i.e. (J', Q) ∑_j ∈ J'(j, Q). If the first parameter is omitted, we refer to the whole set J, i.e. (Q) (J, Q). For single time slots, we omit set notation, i.e. (t) (J, {t}). Clearly we have for every feasible schedule, every Q ⊆ T, j ∈ J that (j, Q) ≤(j, Q) ≤(j, Q). The following definitions are closely related to these types of volume. Let Q ⊆ T be a set of time slots. We define * the density ϕ(Q) (J, Q) / |Q| as the average amount of processing volume which has to be completed in every slot of Q, * the peak density ϕ̂(Q) max_Q' ⊆ Qϕ(Q'), * the deficiency (Q) (Q) - ∑_t ∈ Q m_t as the difference between the amount of volume which has to be completed in Q and the processing capacity available in Q, * the excess (Q) ∑_t ∈ Q l_t - (Q) as the difference between the processor utilization required in Q and the amount of work available in Q. If ϕ̂(Q) > k - 1, then clearly at least k processors are required in some time slot t ∈ Q for every feasible schedule. If (Q) > 0 or (Q) > 0 for some Q ⊆ T, then the problem instance is clearly infeasible. §.§ Critical Sets of Time Slots The following Lemma <ref> provides the crucial structure required for the proof of the approximation guarantee. Intuitively, it states that whenever requires processor k to become busy at some time slot t, there must be some critical set Q ⊆ T of time slots during which the volume scheduled by is minimal. This in turn implies that processor k needs to be busy at some point during Q in every feasible schedule. The auxiliary Lemmas <ref> and <ref> provide a necessary and more importantly also sufficient condition for the feasibility of an instance of deadline-scheduling-with-processor-bounds based on the excess (Q) and the deficiency (Q) of sets Q ⊆ T. Lemmas <ref> and <ref> are again a generalization of the corresponding feasibility characterization in <cit.> for their problem deadline-scheduling-on-intervals, which only defines upper bounds. lemmalemmacut For every α-ω cut (S, S̅) in the network given in Figure <ref> we have at least one of the following two lower bounds for the capacity c(S) of the cut: c(S) ≥ P - (Q(S)) or c(S) ≥ P - (Q(S̅)), where Q(S) { t | v_t ∈ S }. 
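Before turning to the feasibility characterization, note that the volume quantities entering these bounds are straightforward to evaluate; a small illustrative helper (the function names are ours) is given below.

```python
def req_and_pos(r_j, d_j, p_j, Q):
    """Forced and possible volume of a job within a set Q of time slots,
    following the definitions above (illustrative helper, names ours):
    req(j, Q) = max(0, p_j - |E_j ∖ Q|), pos(j, Q) = min(p_j, |E_j ∩ Q|)."""
    E_j = set(range(r_j, d_j + 1))
    req = max(0, p_j - len(E_j - Q))
    pos = min(p_j, len(E_j & Q))
    return req, pos

def density(jobs, Q):
    """phi(Q): forced volume of all jobs in Q per slot of Q (Q non-empty)."""
    total_req = sum(req_and_pos(r, dl, p, Q)[0] for r, dl, p in jobs)
    return total_req / len(Q)
```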
lemmalemmafeasibility An instance of deadline-scheduling-with-processor-bounds is feasible if and only if (Q) ≤ 0 and (Q) ≤ 0 for every Q ⊆ T. A time slot t ∈ T is called engagement of processor k if t = min B for some busy interval B on processor k. A time slot t ∈ T is just called engagement if it is an engagement of processor k for some k ∈ [m]. Let Q ⊆ T be a set of time slots and t ∈ T an engagement of processor k ∈ [m]. We call Q a tight set for engagement t of processor k if t ∈ Q and (Q) = (Q) , (t') ≥ k-1  for all t' ∈ Q , and (t') ≥ k  for all t' ∈ Q with t' ≥ t . For every engagement t of some processor k ∈ [m] in the schedule S_ constructed by , there exists a tight set Q_t ⊆ T for engagement t of processor k. Suppose for contradiction that there is some engagement t ∈ T of processor k ∈ [m] and no such Q exists for t. We show that would have extended the idle interval on processor k which ends at t. Consider the step in when t was the result of on processor k. Let l_t', m_t' be the lower and upper bounds for t' ∈ T right after the calculation of t and the corresponding update of the bounds by . We modify the bounds by decreasing m_t by 1. Note that at this point m_t'≥ k for every t' > t and m_t'≥ k - 1 for every t'. Consider Q ⊆ T such that t ∈ Q and (Q) < (Q). Before our decrement of m_t we had m_Q ∑_t' ∈ Q m_t'≥(Q) > (Q). The inequality m_Q ≥(Q) here follows since the upper bounds m_t' are monotonically decreasing during . Since our modification decreases m_Q by at most 1, we hence still have m_Q ≥(Q) after the decrement of m_t. Consider Q ⊆ T such that t ∈ Q and (t') < k - 1 for some t'. At the step in considered by us, we hence have m_t'≥ k - 1 > (t'). Before our decrement of m_t we therefore have m_Q > (Q) ≥(Q), which implies m_Q ≥(Q) after the decrement. Finally, consider Q ⊆ T such that t ∈ Q and (t') < k for some t' > t. At the step in considered by us, we again have m_t'≥ k > (t'), which implies m_Q ≥(Q) after our decrement of m_t. In summary, if for t no Q exists as characterized in the proposition, the engagement of processor k at t could not have been the result of on processor k. We call a set C_k ⊆ T critical set for processor k if C_k fulfills that * C_k ⊇ C_k' for every critical set for processor k' > k, * t ∈ C_k for every engagement t of processor k, * (C_k) = (C_k), * (t) ≥ k - 1 for every t ∈ C_k, and * ϕ(C_k) is maximal. For every processor k ∈ [m] of S_ which is not completely idle, there exists a critical set C_k for processor k. We show the existence by induction over the processors m, …, 1. For processor m, consider the union of all tight sets over engagements of processor m. This set fulfills all conditions necessary except for the maximality in regard to ϕ. Suppose that the critical sets C_m, …, C_k+1 exist. Take Q_k ⊆ T as the union of C_k+1 and all tight sets over engagements of processor k. By definition of C_k+1, we have Q_k ⊇ C_k' for all k' > k. By construction of Q_k, every engagement t of processor k is contained in Q_k. Finally, we have (Q_k) = (Q_k) and (t) ≥ k-1 for every t ∈ Q_k since all sets in the union fulfill these properties. §.§ Definitions Based on Critical Sets For the critical set C_k of some processor k ∈ [m], we define (C_k) k. Let ≽ be the total order on the set of critical sets C across all processors which corresponds to , i.e. C ≽ C' if and only if (C) ≥(C'). Equality in regard to ≽ is denoted with ∼. 
We extend the definition of to general time slots t ∈ T with (t) max{(C) | C is critical set, t ∈ C } if t ∈ C for some critical set C and otherwise (t) 0. We further extend to intervals D ⊆ T with (D) max{(t) | t ∈ D } A nonempty interval V ⊆ T is a valley if V is inclusion maximal on condition that C ∼ V for some fixed critical set C. Let D_1, …, D_l be the maximal intervals contained in a critical set C. A nonempty interval V is a valley of C if V is exactly the valley between D_a and D_a+1 for some a < l, i.e. V = [max D_a + 1, min D_a+1 - 1]. By choice of C as critical set (property 1), a valley of C is indeed a valley. We define the jobs J(V) ⊆ J for a valley V as all jobs which are scheduled by S_ in every t ∈ V. For a critical set C, an interval D ⊆ T is a section of C if D ∩ C contains only full subintervals of C and at least one subinterval of C. For a critical set C and a section D of C, the left valley V_l is the valley of C ending at min (C ∩ D) - 1, if such a valley of C exists. Symmetrically, the right valley V_r is the valley of C starting at max (C ∩ D) + 1, if such a valley of C exists. For every critical set C, every section D ⊆ T of C, we have: if ϕ(C ∩ D) ≤(C) - δ for some δ∈ℕ, then the left valley V_l or the right valley V_r of C and D is defined and |J(V_l)| + |J(V_r)| ≥δ. We take |J(V)| 0 if V is not defined. Refer to Figure <ref> for a visual sketch of the proposition. By choice of C as critical set with c (C), we have (C ∩ D) ≥ (c-1) · |C ∩ D|. If this inequality is fulfilled strictly, then with the premise (C ∩ D) / |C ∩ D| ≤ c - δ we directly get (C ∩ D) / |C ∩ D| > δ - 1. This implies that there are at least δ jobs j scheduled in C ∩ D with (j, C ∩ D) > 0. Such jobs can be scheduled in the part of C not contained in D, i.e. we must have E_j ∩ (C ∖ D) ≠∅ and hence the left valley V_l or the right valley V_r of C and D must be defined. Since these jobs j are scheduled in C only for the minimum amount possible, i.e. (j, C) = (j, C) > 0, they must be scheduled in every t ∈ E_j ∖ C and are therefore contained in J(V_l) or J(V_r). If on the other hand we have equality, i.e. (C ∩ D) = (c-1) · |C ∩ D|, then let t be an engagement of processor c. Since (t) > c-1, we must have t ∉ C ∩ D. By the same argument as before, we have that if (C ∩ D) / |C ∩ D| ≤ c - δ, then (C ∩ D) / |C ∩ D| ≥δ - 1. Let J' { j ∈ J |(j, C ∩ D) > 0 }. Since (j, C ∩ D) ≤ | C ∩ D | for every j ∈ J, we have |J'| ≥δ - 1. If this lower bound is fulfilled with equality, then every j ∈ J' must be scheduled in every time slot of C ∩ D and hence (J', C ∖ D) = (J', C ∖ D). Now suppose for contradiction that all jobs j scheduled during C ∖ D which are not contained in J' have E_j ∩ C ∩ D = ∅. Then (C ∖ D) = (C ∖ D) and we get ϕ(C ∖ D) > ϕ(C) since by case assumption (C ∩ D) / |C ∩ D| = (c-1) < ϕ(C). With (t) ≤ c - 1 for every t ∈ C ∩ D, we know that (C ∩ D) ≤ k and therefore C ∖ D is still a critical set for processor c but has higher density than C, contradicting the choice of C. Therefore, there must exist a job j ∉ J' scheduled in C ∖ D with an execution interval intersecting C ∩ D. In any case, we have at least δ jobs scheduled in C with an execution interval intersecting both C ∖ D and C ∩ D. This implies that the left valley V_l or the right valley V_r of C and D exists and that at least δ jobs are contained in J(V_l) or J(V_r). § MODIFICATION OF THE PLTR-SCHEDULE FOR ANALYSIS In this section we modify the schedule S_ returned by in two steps. 
We stress that this is for the analysis only and not part of . The first step augments specific processors with auxiliary busy slots such that in every critical set C at least the first (C) processors are busy all the time. For the single processor algorithm, the crucial property for the approximation guarantee is that every idle interval of S_ can intersect at most 2 distinct idle intervals of the schedule returned by . The second modification step of S_ is more involved and establishes this crucial property on every processor k ∈ [m] by making use of Lemma <ref>. More specifically, it will establish the stronger property that ϕ̂(B) > k - 1 for every interval B on processor k with (B) ≥ 2, i.e. that every feasible schedule requires k busy processors at some point during B. Idle intervals surrounded by only busy intervals B with (B) ≤ 1 are then handled in Lemma <ref> with essentially the same argument as for the single processor algorithm. By making sure that the modifications cannot decrease the costs of our schedule, we obtain an upper bound for the costs of S_. §.§ Augmentation and Realignment We transform S_ into the augmented schedule S_ by adding for every t with k (t) ≥ 2 and (t) = k-1 an auxiliary busy slot on processor k. No job is scheduled in this auxiliary busy slot on processor k and it does also not count towards the volume of this slot. It merely forces processor k to be in the on-state at time k while allowing us to keep thinking in terms of and intervals in our analysis of the costs. In S_ processors 1, …, (t) are busy in every slot t ∈ T with (t) ≥ 2. The property directly follows from our choice of the critical sets, the definition of (t), and the construction of S_. As a next step, we transform S_ into the realigned schedule S_ using Algorithm <ref>. We briefly sketch the ideas behind this realignment. Lemma <ref> guarantees us that every busy interval B on processor k is a section of the critical set C with C ∼ B. It also guarantees that the left and right valley V_l, V_r of C and B do not end within an idle interval on processor k. Lemma <ref> in turn implies that if the density of B is too small to guarantee that S_ has to use processor k during B, i.e. if ϕ̂(B) ≤ k - 1, then V_l or V_r is defined and there is some j scheduled in every slot of V_l or V_r. Let V be the corresponding left or right valley of C and D for which such a job j exists. Instead of scheduling j on the processors below k, we can schedule j on processor k in idle time slots during V. This merges the busy interval B with at least one neighbouring busy interval on processor k. In the definition of the realignment, we will call this process of filling the idle slots during V on processor k closing of valley V on processor k. The corresponding subroutine is called (k, V). The crucial part is ensuring that this merging of busy intervals by closing a valley continues to be possible throughout the realignment whenever we encounter a busy interval with a density too small. For this purpose, we go through the busy intervals on each processor in decreasing order of their criticality, i.e. in the order of ≽. We also allow every busy slot to be used twice for the realignment by introducing further auxiliary busy slots, since for a section D of the critical set C, both the right and the left valley might be closed on processor k in the worst case. 
This allows us to maintain the invariants stated in Lemma <ref> during the realignment process, which correspond to the initial properties of Lemma <ref> and <ref> for S_. §.§ Invariants for Realignment lemmalemmainvariant For an arbitrary step during the realignment of S_ and a valley V ⊆ T, let the critical processor k_V for V be the highest processor such that * processor k_V is not fully filled yet, i.e. (k_V, T) has not yet returned, * no V' ⊇ V has been closed on k_V so far, and * there is a (full) busy interval B ⊆ V on processor k_V. We take k_V 0 if no such processor exists. At every step in the realignment of S_ the following invariants hold for every valley V, where C denotes the critical set with C ∼ V. * If ϕ(C ∩ D) ≤ k_V - δ for some δ∈ℕ, some section D ⊆ V of C, then the left valley V_l or the right valley V_r of C, D exists and (V_l) + (V_r) ≥ 2 δ. * For every t ∈ C ∩ V, processors 1, …, k_V are busy at t. * Every busy interval B ⊆ V on processor k_V with B ∼ V is a section of C. lemmalemmasreal The resulting schedule S_ of the realignment of S_ is defined. For every processor k ∈ [m] and every busy interval B on processor k in S_ with (B) ≥ 2, we have ϕ̂(B) > k - 1. We show that (k, T) establishes the property on processor k. The claim then follows since (k, T) does not change the schedules of processors above k. We know that on processor k busy intervals are only extended, since in (k, T) we only close valleys for busy intervals B on k which are a section of the corresponding critical set C. Let B ⊆ V be a busy interval on processor k in S_ with B ∼ V and (B) ≥ 2. No valley W ⊇ V can have been closed on k since otherwise there would be no B ⊆ V in S_. Therefore, at some point (k, V) must be called. Consider the point in (k, V) when the while-loop terminates. Clearly at this point all busy intervals B' ⊆ V with B' ∼ V on processor k have ϕ̂(B') > k - 1. At this point there must also be at least one such B' for B to be a busy interval on k in S_ with B ∼ V and B ⊆ V. In particular, one such B' must have B' ⊆ B, which directly implies ϕ̂(B) ≥ϕ̂(B') > k - 1. While with Lemma <ref> we have our desired property for busy intervals B of (B) ≥ 2, we still have to handle busy intervals of (B) ≤ 1. To be precise, we have to handle idle intervals which are surrounded only by busy intervals B of (B) ≤ 1. We will show that this constellation can only occur in S_ on processor 1 and that the realignment has not done any modifications in these intervals, i.e. S_ and S_ do not differ for these intervals. With the same argument as for the original single-processor Left-to-Right algorithm, we then get that at least one processor has to be busy in any schedule during these intervals. The realignment of S_ does not create new engagement times but may only change the corresponding processor being engaged, i.e. if t ∈ T is an engagement of some processor k in S_, then t is also an engagement of some processor k' in S_. Consider the first step in the realignment of S_ in which some t ∈ T becomes an engagement of some processor k' where t was no engagement of any processor before this step. This step must be the closing of some valley V on some processor k > k': On processor k, we have seen that closing some valley can only merge busy intervals. On processors above k, the schedule does not change. Busy slots on processors k” < k are only removed (by definition of ), therefore t-1 must have been busy on processor k' and idle on k' + 1, …, k before the close. 
If t ∈ V, then processor k' + 1 (or k) must have been busy before at t. Hence t was already an engagement before the close, contradicting our initial choice of t. If t ∉ V, then t ≻ V. Let W be the valley such that V is closed during (k, W), hence W ⊃ V. If t ∈ W, then t ∼ C_W and t ∈ C_W. By Invariant 2, processors 1, …, k_W = k are busy at t before the close. Again, this implies that t was an engagement before the close already, contradicting our choice of t. If t ∉ W, then let W' be the valley with t ∼ W and t ∈ W'. We have W ≺ t ∼ W' and W' ⊃ W and t ∈ C_W'. Therefore k_W'≥ k_W = k and Invariant 2 implies that processors 1, …, k are busy at t before the close. Hence, t was an engagement before the close already, again contradicting our initial choice of t. Let I be an idle interval in S_ on some processor k and let B_l, B_r be the busy intervals on k directly to the left and right of I with (B_l) ≤ 1 and (B_r) ≤ 1. Allow B_l to be empty, i.e. we might have min I = 0, but B_r must be nonempty, i.e. max I < d. Then we must have k = 1 and ϕ̂(B_l ∪ I ∪ B_r) > 0. By Lemma <ref> and (B_r) ≤ 1, we know that min B_r is an engagement of processor 1 in S_. Hence max I is idle in S_ on processor 1 and hence idle on all processors (by stair-property in S_). Since no jobs are scheduled at max I, we know that (max I) ≤ 1 and J(V) = ∅ for all valleys V containing the slot max I, and hence also (V) = 0 at all times during the realignment. Therefore, no V intersecting [max I, max B_r] was closed during the realignment on any processor since this V would contain max I. Since B_r is a busy interval with (B_r) ≤ 1 (i.e. not containing engagements of processors above 1 in S_), we must then have k = 1. For I to be idle on processor k = 1 in S_ and (I) ≥ 2, some V ≽ I with V ∩ I ≠∅ and hence V ⊇max I would have to been closed, which contradicts what we have just shown. Therefore (I) ≤ 1 and no valley V with V ∩ [min B_l - 1, max B_r + 1] ≠∅ can have been closed during the realignment. Therefore, the constellation occurs exactly in the same way in S_ and S_ on processor 1. In other words, for processor 1 in both S_ and S_, B_l and B_r are busy intervals and I is an idle interval. Let j be the single job scheduled at time slot min B_r. We conclude by showing that E_j ⊆ I ∪ B_r and therefore ϕ̂(I ∪ B_r) > 0. Otherwise, j could be scheduled at min I or max B_r + 1. In the first case, would have extended B_l by scheduling j at time min I instead of at min B_r. In the second case, would have extended the idle interval I by scheduling j at max B_r + 1 instead of at min B_r. For every processor k, every idle interval on processor k in S_ intersects at most two distinct idle intervals of processor k in S_. Let I_ be an idle interval in S_ on processor k intersecting three distinct idle intervals of processor k in S_. Let I be the middle one of these three idle intervals. Lemma <ref> and Lemma <ref> imply that k busy processors are required during I and its neighboring busy intervals. This makes it impossible for S_ to be idle on processor k during the whole interval I_. §.§ Approximation Guarantee Lemma <ref> finally allows us to bound the costs of the schedule S_ with the same arguments as in the proof for the single-processor LTR algorithm of <cit.>. We complement this with an argument that the augmentation and realignment could have only increased the costs of S_ and that we have hence also bounded the costs of the schedule returned by our algorithm . Algorithm constructs a schedule of costs at most 2 + P. 
We begin by bounding (S_) as in the proposition. First, we show that (S^k_) ≤ 2 (S^k_) + (S^k_) for every processor k ∈ [m]. Let ℐ_1 be the set of idle intervals on S^k_ which intersect some interval of S^k_. Lemma <ref> implies that ℐ_1 contains as most twice as many intervals as there are intervals in S^k_. Since the costs of each idle interval are at most q, and the costs of each off interval are exactly q, the costs of all idle intervals in ℐ_1 is bounded by 2 (S^k_). Let ℐ_2 be the set of idle intervals on S^k_ which do not intersect any interval in S^k_. The total length of these intervals is naturally bounded by (S^k_). We continue by showing that (S_) ≤ 2 P. By construction of S_ and the definition of and , we introduce at most as many auxiliary busy slots at every slot t ∈ T as there are jobs scheduled at t in S_. For S_, an auxiliary busy slot is only added for t with (t) ≥ 2 and hence (t) ≥ 1. Furthermore, initially (V) = 2 |J(V)| for every valley V and (V) is decremented if some V' intersecting V is closed during (k, T). During (k, T) at most a single V' containing t is closed for every t ∈ T. Finally, auxiliary busy slots introduced by S_ are used in the subroutine . This establishes the lower bound (S_) = (S_) + (S_) ≤ 2 (S_) + (S_) + 2 P ≤ 2 + P for our realigned schedule. We complete the proof by arguing that (S_) ≤(S_) since transforming S_ back into S_ does not increase the costs of the schedule. Removing the auxiliary busy slots clearly cannot increase the costs. Since the realignment of S_ only moves busy slots between processors, but not between different time slots, we can easily restore S_ (up to permutations of the jobs scheduled on the busy processors at the same time slot) by moving all busy slots back down to the lower numbered processors. By the same argument as in Lemma <ref>, this does not increase the total costs of the schedule. § RUNNING TIME Algorithm has a running time of 𝒪(n f log d) where f denotes the time needed for finding a maximum flow in a network with 𝒪(n) nodes. First observe that every busy interval is created by a pair of calls to and , respectively. We begin by bounding the number of busy intervals across all processors in S_ by n. Note that if returns d, then we do not have to calculate from d on. Therefore, the total number of calls to and is then bounded by n + m. If m > n we can restrict our algorithm to use the first n processors only, as there cannot be more than n processors scheduling jobs at the same time. We derive the upper bound of n for the number of busy intervals across all processors by constructing an injective mapping g from the set of busy intervals to the jobs J. For this construction of g we consider the busy intervals in the same order as the algorithm, i.e. from Top-Left to Bottom-Right. We construct g such that g(B) = j only if d_j ∈ B. Suppose we have constructed such a mapping for busy intervals on processors m, …, k up to some busy interval B on k. We call a busy interval B' in S_ on processor l ∈ [m] a plateau on processor l, if all slots of B' are idle for all processors above l. Observe that plateaus (even across different processors) cannot intersect, which implies an ordering of the plateaus from left to right. Let B' be the last plateau with B' ⊆ B and let l ≥ k be the processor for which this busy interval B' is a plateau. By construction of g and the choice of B', there are at most l - k distinct jobs j with d_j ∈ [min B', max B] already mapped to by g. 
This is since at most l - k busy intervals on processors k+1, …, m intersect the interval [min B', max B]. Let Q_t ⊆ T be a tight set over engagement t min B of processor l. Let J' { j_1, …, j_l} be the l distinct jobs scheduled at t. We know that max B + 1 ∉ C_t since (max B + 1) < k ≤ l and max B + 1 > t. With (j, Q_t) = (j, Q_t) > 0 for every j ∈ J', we know that every job j ∈ J' with d_j > max B is scheduled at slot max B + 1. Hence there are at least l - (k-1) distinct jobs j ∈ J' with d_j ∈ [min B', max B] and there must be at least one such job j^* which is not mapped to by g so far and which we therefore can assign to B. Having bounded the number of calls to and by 𝒪(n), the final step is to bound the running time of these two subroutines by 𝒪(f log d). A slight modification to the flow-network of Figure <ref> suffices to have only 𝒪(n) nodes. The idea here is to partition the time horizon T into 𝒪(n) time intervals instead of 𝒪(d) individual time slots. Since this is a standard problem as laid out in e.g. Chapter 5 of <cit.>, we only sketch the main points relevant to our setting in the following. The partition of the time horizon into time intervals is done by using the release times and deadlines as splitting points of the time horizon and scaling the capacities of the incoming and outgoing edges by the length of the time interval. For our generalization we additionally have to split whenever an upper or lower bound l_t, m_t changes. Since we have already bounded the number of such times by 2n in the first part of this proof, there are only 𝒪(n) time intervals and hence also 𝒪(n) nodes in the flow network. Also note that constructing the sub-schedules within the time intervals is a much simpler scheduling problem, since by construction, for every time interval and every job j, the execution interval E_j either completely contains the time interval or does not intersect it. Such a sub-schedule can be computed in O(n^2), as laid out in Chapter 5 of <cit.>. With the feasibility check running in time 𝒪(f), each call to and can be completed in 𝒪(f log d) using binary search on the remaining time horizon. § ACKNOWLEDGEMENT Thanks to Prof. Dr. Susanne Albers for her supervision during my studies. The idea of generalizing the Left-to-Right algorithm emerged in discussions during this supervision. This work was supported by the Research Training Network of the Deutsche Forschungsgemeinschaft (DFG) (378803395: ConVeY). abbrvnat § APPENDIX * Let S be an optimal schedule. We transform S such that it fulfills the stair-property without increasing its costs and while maintaining feasibility. Let k, k' ∈ [m] be two processors with k' < k and job j ∈ J scheduled on processor k in time slot t ∈ T while k' is idle in t. Let I be the idle interval on processor k' containing t. We now move all jobs scheduled on processor k during I to be scheduled on processor k' instead. Since I is a maximal interval for which processor k' is idle, this modification does not increase the combined costs of processors k' and k. The modification also moves at least job j from processor k down to k' while not moving any job from processor k' to k. Jobs are only moved between processors at the same time slot and only to slots of processor k' which are idle, hence the resulting schedule is still feasible. This modification can be repeated until the schedule has the desired property. * Let f be an α-ω flow of value |f| = P. 
We construct a feasible schedule from f respecting the lower and upper bounds given by l_t and m_t. For every j ∈ J and t ∈ T, if f(u_j, v_t) = 1, then schedule j at slot t on the lowest-numbered processor not scheduling some other job. Since |f| = P and the capacity of the cut c({α}, V ∖{α}) = P, we have f_in(u_j) = p_j for every j ∈ J. Hence f_out(u_j) = ∑_t ∈ E_j f_in(v_t) = p_j. Hence every job j is scheduled in p_j distinct time slots within its execution interval. The schedule respects the upper bounds m_t, since c(v_t, γ) + c(v_t, ω) ≤ m_t - l_t + l_t and hence for every t at most m_t jobs are scheduled at t. The schedule respects the lower bounds l_t, since c(V ∖{ω}, {ω}) = P and hence f(v_t, ω) = l_t for every slot t ∈ T. By flow conservation we then have f_in(v_t) ≥ l_t, which implies that at least l_t jobs are scheduled at every slot t. For the other direction consider a feasible schedule respecting the lower and upper bounds l_t, m_t. We construct a flow f of value P and show that it is maximal. If j is scheduled at slot t and hence t ∈ E_j, define f(u_j, v_t) = 1, otherwise f(u_j, v_t) = 0. Define f(α, u_j) = p_j for every j ∈ J. Hence we have f_in(u_j) = p_j and f_out(u_j) must be p_j since this corresponds to the number of distinct time slots in which j is scheduled. Define f(v_t, ω) = l_t for every slot t ∈ T. Define f(v_t, γ) = f_in(v_t) - l_t. We have f(v_t, γ) ≤ m_t - l_t since f_in(v_t) corresponds to the number (t) of jobs scheduled at t, which is at most m_t. We also have f_out(v_t) = f_in(v_t) - l_t + l_t = f_in(v_t). Define f(γ, ω) = P - ∑_t ∈ T l_t. Then f_in(γ) = ∑_t ∈ T f_in(v_t) - l_t = ∑_t ∈ T(t) - ∑_t ∈ T l_t. Since the schedule is feasible, we have ∑_t ∈ T(t) = P and finally the flow conservation f_in(γ) = P - ∑_t ∈ T l_t = f_out(γ). * Let (S, S̅) be an α-ω cut and let J(S) {j | u_j ∈ S}. We consider the contribution of every node of S to the capacity c(S) of the cut. First consider the case that γ∉ S. * Node α: ∑_j ∈ J(S̅) p_j * Node u_j: |{v_t ∈S̅| t ∈ E_j}| = | E_j ∖ Q(S) | ≥ p_j - (j, Q(S)) * Node v_t: l_t + m_t - l_t = m_t The inequality for node u_j follows since (j, Q(S)) = max{0, p_j - |E_j ∖ Q(S)| }. In total, we can bound the capacity from below with c(S) ≥∑_j ∈ J(S̅) p_j + ∑_j ∈ J(S) p_j - (j, Q(S)) + ∑_t ∈ Q(S) m_t = P - (J(S), Q(S)) + ∑_t ∈ Q(S) m_t ≥ P - (Q(S)). If γ∈ S, we have the following contributions of nodes in S to the capacity of the cut: * Node α: ∑_j ∈ J(S̅) p_j ≥(J(S̅), Q(S̅)) * Node u_j: | E_j ∖ Q(S) | = | E_j ∩ Q(S̅)| ≥(j, Q(S̅)) * Node v_t: l_t * Node γ: P - ∑_t ∈ T l_t In total, we obtain the alternative lower bound c(S) ≥ P + (Q(S̅)) - ∑_t ∈ Q(S̅) l_t = P - (Q(S̅)) . * If (Q) > 0 for some Q, then some upper bound m_t cannot be met. If (Q) > 0 for some Q, then some lower bound l_t cannot be met. For the direction from right to left, consider an infeasible scheduling instance with lower and upper bounds. By Lemma <ref> we have that the maximum flow f for this instance has value |f| < P. Hence, there must be an α-ω cut (S, S̅) of capacity c(S) < P. Lemma <ref> now implies that (Q(S)) > 0 or (Q(S̅)) > 0. * We show Invariants 1 and 2 via structural induction on the realigned schedule S_. Then we show that Invariant 2 implies Invariant 3. For the induction base, consider S_, let V be an arbitrary valley in S_ with c (V) ≥ 2, and let C be the critical set with C ∼ V. 
We must have k_V ≤ c, otherwise V would contain a full busy interval on processor k_V > c and hence also an engagement t ∈ V of processor k_V, which by construction of S_ would have (t) = k_V > c. This is a direct contradiction to (V) = max_t ∈ V(t) = c. Invariant 2 now follows since by construction of S_ and our choice of C we have for every t ∈ C that processors 1, …, k_V, …, c are busy at t. For Invariant 1, let D be a section of C with ϕ(C ∩ D) ≤ k_V - δ for some δ∈ℕ. With k_V ≤ c we get ϕ(C ∩ D) ≤ c - δ and hence by Lemma <ref>, we have that the left valley V_l or the right valley V_r of C and D exists and |J(V_l)| + |J(V_r)| ≥δ. With the initial definition of the supply (V) of a valley, we get the desired lower bound of (V_l) + (V_r) ≥ 2 δ. Now suppose that Invariants 1 and 2 hold at all steps of the realignment up to a specific next step. Let V again be an arbitrary valley of (V) ≥ 2 and let k be the processor currently being filled. Let furthermore k_V, k'_V be the critical processor for V before and after, respectively, the next step in the realignment. There are four cases to consider for this next step. Case 1: Some V' ⊇ V is closed on processor k. Then no valley W intersecting V has been closed so far on k. Also, since (k, ·) only moves the busy slot of the highest busy processor below k, we know that the stair property holds within V when only considering processors 1, …, k. We show that the closing of V' on k reduces the critical processor of V by at least 1, i.e. k'_V ≤ k_V - 1. If k_V = k, then V' ⊇ V is closed on processor k_V and hence by definition we have k'_V ≤ k_V - 1. If k_V < k, suppose for contradiction that k_V ≤ k'_V ≤ k, where k'_V ≤ k again holds by definition of k' since V' ⊆ V is closed on processor k. Let B ⊆ V be a full busy interval on k_V before the close of V'. We show that B ⊂ V, i.e. that there must be some t ∈ V idle on k_V before the close. The stair-property then implies that processors k_V, …, k are idle at t before the close. Since some V' ⊇ V is closed, clearly V ⊂ T by the choice of V' as valley of some critical set in the realignment definition. Therefore we have min V - 1 ∈ T or max V + 1 ∈ T, without loss of generality we assume the former. We show that t min V - 1 must be busy on processor k_V before the close. Let W be the valley with W ∼ t and t ∈ W. We know that W ⊇ V since V is a valley and hence V ≺ t ∼ W. By our case assumption and the definition of the realignment, no W' ⊇ W can have been closed on processor k so far. With W ⊇ V and the definition of k_W we get k_W ≥ k_V, where k_W is the critical processor of W before the close. Our induction hypothesis now implies that processors 1, …, k_V, …, k_W are busy at t before the close. For B ⊆ V to be a (full) busy interval on k_V before the close, we hence must have min V ∉ B. We know by definition of the realignment and the subroutine that for every k' with k_V ≤ k' < k and every t ∈ V: * If t was idle on k' before the close, then t is still idle on k' after the close (definition of , k' < k). * If t was idle on k_V before the close, then t was idle on k' before (stair-property with k_V ≤ k') and hence t is still idle on k' after the close. * If t was part of a full busy interval B ⊆ V on k_V before the close, then t was idle on k_V + 1 before the close. Otherwise, by the stair property there would have been a full busy interval B' ⊆ B ⊆ V on processor k_V + 1 ≤ k before the close, contradicting the definition of k_V. 
Hence t was idle on k before by the stair-property and therefore t is idle on k_V after the close (by the definition of close). Taken together, for t ∈ V to be busy on k' after the close, t must have been busy on k' before the close (definition , k' < k) and t cannot have been part of a full busy interval B ⊆ V. Hence t ∈ B for some partial busy interval B ⊆ V on k' before the close. For B' ⊆ V to be a full busy interval on k'_V after the close (with k_V ≤ k'_V < k), we must have B' ⊆ B, as shown in the following sketch. Hence there must have been a busy interval B”⊆ [min B', max B] on processor k'_V + 1 > k_V before the close, which contradicts the choice of k_V < k. In conclusion, we have k'_V ≤ k_V - 1, which allows us to prove Invariants 1 and 2. If ϕ(C ∩ D) ≤ k'_V - δ for some δ∈ℕ and some section D of C, then ϕ(C ∩ D) ≤ k_V - (δ + 1) and hence by induction hypothesis the left valley V_l or the right valley V_r for C, D exists and (V_l) + (V_r) ≤ 2 (δ + 1) both before and after the close. Our induction hypothesis also implies that for every t ∈ C ∩ V, processors 1, …, k_V are busy before the close. Since at most the uppermost busy slot is moved by , after the close of V' we still have that processors 1, …, k_V - 1 ≥ k'_V are busy. Case 2: Some V' ⊂ V is closed on processor k. Again, no V”⊇ V can have been closed on processor k so far. We show that k_V = k ≥ k'_V, i.e. that the critical processor of V before the close of V' is the processor currently being filled. Let W be the valley for which V' is closed, i.e. V' is closed during (k, W). We must have W ⊃ V' and therefore no W' ⊇ W has been closed on k so far. Also, for V' to be closed in (k, W), there must be some busy interval B ⊆ W on k before the close, hence k_W = k. Since V' ⊂ V and V' ⊂ W, V and W intersect (V' ≠∅ by definition of V' as valley). Let C_W be the critical set with C_W ∼ W. If V ≺ W, then by choice of V' as valley of C_W we must have V ⊆ V', which contradicts our case assumption. Therefore V ≽ W and V ⊇ W, which in turn implies k_V ≤ k_W = k. Since processor k+1 is already completely filled before the close, we have k_V = k ≥ k'_V. For Invariant 1, again let ϕ(C ∩ D) ≤ k'_V - δ and hence ϕ(C ∩ D) ≤ k_V - δ for some δ∈ℕ and some section D of C. Our induction hypothesis implies that the left valley V_l or the right valley V_r of C, D exists and that both before and after the close we have (V_l) + (V_r) ≥ 2 δ. For Invariant 2, observe that V' ∩ C = ∅ since our case assumption V' ⊂ V implies V' ≺ C. Therefore, no slots of C are modified when V' is closed. Invariant 2 now directly follows from the induction hypothesis and k'_V ≤ k_V. Case 3: Some V' with V' ∩ V = ∅ is closed on processor k. We first show that min V - 1 ∉ V' and symmetrically max V + 1 ∉ V'. Consider t min V - 1 and assume t ∈ T. By choice of V and t we must have t ≻ V. If t ∈ V', we would have V' ≻ V and hence V' ⊇ V, which contradicts our case assumption. Symmetrically, we know that max V + 1 ∉ V'. Therefore the close of V' does not modify the schedule within [min V - 1, max V + 1], implying that no partial busy interval in V before the close can become a full busy interval. Hence we have k_V = k'_V and Invariants 1 and 2 follow as in Case 2. Case 4: The call to (k, T) returns and (V') is decreased by 1 for every valley V' such that some valley intersecting V' has been closed during (k, T). First observe that the schedule itself does not change by this step but processor k is now fully filled, which implies k'_V ≤ k_V. 
Invariant 2 then follows directly from the induction hypothesis. We consider two subcases. If during (k, T), no valley V' intersecting V was closed on k, then (V) does not change and Invariant 1 follows from the induction hypothesis and k'_V ≤ k_V. If on the other hand some valley V' intersecting V was closed on k during (k, T), then (V) is decreased by 1 to '(V) (V) - 1. As argued in Cases 1 to 3, the critical processor of V decreases monotonically during (k, T). Consider the schedule right before the first valley V' intersecting V is closed on k. Let k^0_V be the critical processor for V at this point of the realignment and k^1_V the critical processor right after V' is closed. We have k'_V ≤ k^0_V - 1: If V' ⊇ V, then as argued in Case 1, we have k^1_V ≤ k^0_V - 1 and hence k'_V ≤ k_V ≤ k^1_V ≤ k^0_V -1. If V' ⊂ V, then as argued in Case 2 we have k^0_V = k. Since by our case assumption (k, T) returns in the next step, we have k'_V ≤ k - 1 and hence k'_V ≤ k^0_V -1. Invariant 2 now follows by our induction hypothesis. If ϕ(C ∩ D) ≤ k'_V - δ then ϕ(C ∩ D) ≤ k^0_V - (δ + 1) and hence by our induction hypothesis the left valley V_l or the right valley V_r of C, D exists and before the close we have (V_l) + (V_r) ≥ 2 (δ + 1). Since is decreased for every valley by at most 1, we have after the close that '(V_l) + '(V_r) ≥ 2 δ. We conclude by showing that Invariant 2 implies Invariant 3. Let V be an arbitrary valley during the realignment of S_ and B ⊆ V a busy interval on processor k_V with B ∼ V. Let C be the critical set with C ∼ V. Note that B ∼ V implies that B intersects C. Assume for contradiction that B is not a section of C. Then min B lies strictly within a subinterval of C or symmetrically max B lies strictly within a subinterval of C. We assume the first case, i.e. tmin B - 1 ∈ C and min B ∈ C. The second case follows by symmetry. If t ∈ V, then time slot t is busy on processor k_V by Invariant 2. Therefore, B cannot be a (full) busy interval on processor k_V, contradicting the choice of B. If t = min V-1, then consider the valley W with t ∈ W and t ∼ W and let C_W be the critical set with C_W ∼ W. We must have W ⊃ V, W ≻ V and t ∈ C_W. Therefore k_W ≥ k_V and Invariant 2 implies that t = min B - 1 is busy on processor k_V, again contradicting the choice of B as full busy interval on processor k_V. * Since in the while-loop of (k, V), the busy interval B ⊆ V on k_V always is a section of C if V ∼ C (Invariant 3), the left valley V_l and the right valley V_r of the critical set C and interval B are properly defined. Also since ϕ̂(B) ≤ k - 1, Invariant 1 implies that V_l or V_r exists and that there is sufficient such that one of the two valleys of C is closed in this iteration. This reduces the number of idle intervals on processor k by at least 1, since Invariant 2 implies that V_l or V_r cannot end strictly within an idle interval on k. Hence all terms in the realignment are well defined and the realignment terminates.
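As a concrete companion to the flow-based feasibility argument in this appendix, the following short Python sketch illustrates the schedule-extraction rule used in the proof above: whenever the flow saturates a job-slot edge, i.e. f(u_j, v_t) = 1, job j is placed at slot t on the lowest-numbered processor that is still idle there. The function and variable names are ours, not the paper's; the per-slot bounds l_t and m_t are assumed to be enforced by the flow itself, so an idle processor always exists in the loop.

from collections import defaultdict

def schedule_from_flow(unit_flow_edges, m):
    """Turn a 0/1 flow on job-slot edges into an explicit schedule.

    unit_flow_edges: iterable of (job, slot) pairs with f(u_j, v_t) = 1.
    m: number of processors, numbered 1..m.
    Returns a dict mapping (processor, slot) -> job.
    """
    busy = defaultdict(set)   # slot -> processors already scheduling a job there
    schedule = {}
    for job, slot in unit_flow_edges:
        # lowest-numbered processor not scheduling some other job at this slot;
        # the capacities of the flow network guarantee at most m jobs per slot
        proc = next(p for p in range(1, m + 1) if p not in busy[slot])
        busy[slot].add(proc)
        schedule[(proc, slot)] = job
    return schedule

# toy example: three unit-flow edges, two processors
print(schedule_from_flow([("j1", 0), ("j2", 0), ("j1", 1)], m=2))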
http://arxiv.org/abs/2307.03221v1
20230706180001
Astrometry with Extended-Path Intensity Correlation
[ "Ken Van Tilburg", "Masha Baryakhtar", "Marios Galanis", "Neal Weiner" ]
astro-ph.IM
[ "astro-ph.IM", "astro-ph.EP", "astro-ph.GA", "astro-ph.SR", "physics.optics" ]
Astrometry with Extended-Path Intensity Correlation]Astrometry with Extended-Path Intensity Correlation kenvt@nyu.edu | kvantilburg@flatironinstitute.org Center for Cosmology and Particle Physics, Department of Physics, New York University, New York, NY 10003, USA Center for Computational Astrophysics, Flatiron Institute, New York, NY 10010, USA mbaryakh@uw.edu Department of Physics, University of Washington, Seattle WA 98195, USA mgalanis@perimeterinstitute.ca Perimeter Institute for Theoretical Physics, Waterloo, Ontario N2L 2Y5, Canada neal.weiner@nyu.edu Center for Cosmology and Particle Physics, Department of Physics, New York University, New York, NY 10003, USA Intensity interferometry—the correlation of spatially separated light intensities—has historically been an important tool for precision optical astronomical observations. However, due to the extremely narrow field of view, its scope has been limited to studies of the morphology of very bright emission regions, primarily determinations of angular diameters of nearby hot stars. We propose adding an adjustable path extension into the detector optics which creates a primary interference fringe for widely separated sources, allowing maximum source separations parametrically larger than the angular resolution. This extended-path intensity correlator (EPIC), augmented with advances in single-photon detectors and spectroscopic gratings, would enable ground-based astrometry at microarcsecond-level precision in a field of view as large as several arcseconds. EPIC has the potential to revolutionize astrophysical and cosmological observations requiring high-precision differential astrometry on sources of high surface brightness. We outline how EPIC can be employed to detect the astrometric wobble of Earth-like planets around Sun-like stars at tens to hundreds of parsecs, and expect that EPIC's larger field of view will expand the power of intensity interferometry to a broad range of astronomical applications. [ Neal Weiner August 1, 2023 ================== §.§ Introduction Interferometry—the precision measurement of phase differences between paths—has a long history of revolutionary advances in physics and astronomy <cit.>. In the last decade alone, for instance, amplitude interferometry has led to ground-breaking observations of gravitational waves <cit.> and images of light rings <cit.> and orbits <cit.> near black hole horizons. Intensity interferometry, pioneered by Hanbury Brown and Twiss <cit.>, utilizes second-order coherence of light, by correlating intensities instead of amplitudes at two separated telescopes. The technique enables exceptional angular resolution scaling as the inverse of the telescope separation, which can be made arbitrarily large since the (optical) light need not be physically recombined as in an amplitude interferometer. The method primarily requires fast photon counting to precisely measure intensity as a function of time and large light collection areas to tease out the small statistical correlations in photon arrival times, and is robust under poor atmospheric conditions <cit.>. One of the fundamental limitations of intensity interferometry is that correlations diminish dramatically on angular scales large compared to the resolution, restricting this technique to measurements of stellar angular diameters <cit.> and close binary orbits <cit.>. A novel approach is needed to broaden the scope of intensity interferometry, literally and figuratively. We propose such an idea here. 
§.§ Astrometry with Intensity Interferometry In two-source intensity interferometry, the primary observable is the correlation between light intensities from two sources a and b separated by angle θ_ba≡θ̂_b - θ̂_a at two detectors 1 and 2 with baseline d (fig:basicsa). The intensity fluctuations are positively correlated when θ_ba·d≲λ/2, where λ is the wavelength of the recorded light. For larger baselines, the intensity correlation exhibits fringes for d at integer multiples of θ_ba·d̂/λ. For two equally bright, nearly monochromatic point sources with mean intensity I_0, the intensities I_1,2 at the two telescopes are: I_1(t_1)/I_0 = 1 + cos[ k (r_a1 - r_b1) + ϕ_a (t^ret_a1 ) - ϕ_b (t^ret_b1) ], I_2(t_2)/I_0 = 1 + cos[ k (r_a2 - r_b2) + ϕ_a (t^ret_a2 ) - ϕ_b (t^ret_b2) ]; where k = 2π / λ is the wavenumber of the light. In this idealized classical model, the phases ϕ_s fluctuate randomly as a function of the retarded time t_sp^ret = t_p - r_sp/c from the telescope p=1,2 to the source s = a,b at a distance r_sp. For a relative time delay τ≡ t_2-t_1 equal to (r_a2 - r_a1)/c, the phase ϕ_a and the intensity fluctuations from source a will be (positively) correlated at both telescopes. If source b is sufficiently close in angle to a, then the same choice of τ will simultaneously lead to nearly equal ϕ_b, generating positive correlations for both phases and thus excess fractional intensity correlation: C(d,τ) ≡⟨I_1(t) I_2(t+τ) ⟩/⟨I_1 ⟩⟨I_2 ⟩ - 1 = 1 + cos( k d ·θ_ba )/2. Brackets ⟨·⟩ signify phase averaging over ϕ_a,b, and we use the small-angle approximation: (r_b1 + r_a2) - (r_a1 + r_b2) = θ_ba ·d. That is, the “crossed” paths in fig:basicsa are longer than the “uncrossed” paths by the relative angular source separation times the baseline distance. Classically, this information is carried in the relative phases in the four-point correlation of the electromagnetic field (eq:C_1). Quantum mechanically, one can view this as the two-photon amplitudes ⟨ 1,2 | b,a ⟩ and ⟨ 1,2 | a,b ⟩ interfering with a relative propagation phase k d·θ_ba <cit.>. Measurement of the intensity correlations of eq:C_1 yields the relative source separation θ_ba with a fiducial angular resolution: σ_θ_res = 1/k d = λ/2 πd ≈1.64 μas (λ/500 nm ) ( 10 km/d) along the direction d̂. In practice, by recording the arrival times of photons, one can construct an estimator for the instantaneous intensities I_1,2 and their excess fractional correlation C(d,τ) <cit.>. A positive value of the latter is a direct measure of “photon bunching”, the intuitively surprising result that near-simultaneous photon arrival times (after applying an appropriate time delay τ) are more likely to occur than from random chance <cit.>. An inversion of the function in eq:C_1 yields a multivalued map C(d,τ) ↦ k θ_ba·d, from which the relative separation θ_ba between the light centroids of a and b can be measured with a precision of σ_δθ∼σ_θ_res / SNR (eq:sigma_theta), where SNR is the total signal-to-noise ratio on the C(d,τ) observation (Methods <ref>,<ref>). The degeneracy of the multivalued map from correlator to separation can be broken—θ_ba can be assigned to a unique fringe—by observing the intensity correlations in many spectral channels (each with different k) and as a function of time, since the projected baseline θ̂_ba·d changes (primarily) due to Earth's rotation. 
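As a quick numerical sanity check of the fiducial resolution quoted above (our own unit bookkeeping in Python, not code from the paper), σ_θ_res = λ/(2πd) evaluates to about 1.6 μas for the reference values λ = 500 nm and d = 10 km, and the excess correlation of eq:C_1 oscillates on exactly that angular scale:

import numpy as np

RAD_TO_MUAS = 180 / np.pi * 3600 * 1e6        # radians to microarcseconds

lam, d = 500e-9, 10e3                          # wavelength [m], baseline [m]
k = 2 * np.pi / lam
sigma_theta_res = 1 / (k * d)                  # fiducial angular resolution [rad]
print(f"sigma_theta_res ~ {sigma_theta_res * RAD_TO_MUAS:.2f} muas")   # ~1.64 muas

# excess fractional correlation for two equal point sources vs. projected separation
theta = np.array([0.0, 0.5, 1.0, np.pi]) * sigma_theta_res
print((1 + np.cos(k * d * theta)) / 2)         # 1.0, 0.94, 0.77, 0.0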
Ground-based differential astrometry with a fiducial resolution of eq:theta_res and even more astonishing light-centroiding precision opens up a myriad of scientific applications, but traditional intensity interferometry is severely hamstrung by two problems: an extremely limited field of view (FOV) and low SNR. Our proposal of Extended-Path Intensity Correlation (EPIC) solves the former, while multichannel observations and recent technological improvements in ultrafast single photon detection can ameliorate the latter <cit.>. The small-FOV limitation arises from the finite bandwidth of the detected light. In each spectral channel of spectral resolution ℛ≡ k / σ_k with Gaussian spread σ_k around wavenumber k, bandwidth smearing leads to a loss of fringe contrast for | σ_k θ_ba·d | ≳ 1 or an angular dynamic range σ_Δθ = √(2)/σ_k d ≈ 12 mas(ℛ/5,000) (λ/500 nm) (10 km/d) , analogous to the “coherent FOV” of amplitude interferometers. In other words, the source separation for which an intensity interferometer produces sharp fringes, as in eq:C_1, has to be less than ℛ times the resolution, θ_ba≲ℛσ_θ_res. This is a serious impediment if one desires microarcsecond-level angular resolution for sources separated by arcseconds. Such a high spectral resolution with dense coverage over a wide spectral range is unachievable by ground-based telescopes, and operation at a very high-order fringe would impose prohibitive requirements on fringe stability and possibly lead to fringe confusion. Furthermore, at separations for which θ_ba·d / c is larger than the relative timing resolution σ_t (typically longer than the coherence time 1/cσ_k of the light in each spectral channel), a total loss of mutual second-order coherence occurs, since the wavefronts from sources a and b arrive at the telescopes at different relative times τ. The timing precision defines the angular scale σ_θ̂ = 2 σ_t/d ≈124 mas (σ_t/10 ps) ( 10 km/d) , at which global astrometry is possible with intensity interferometry, assuming σ_t > 1/cσ_k. §.§ Extended-Path Intensity Correlation In this work, we propose a variant of intensity interferometry that parametrically decouples the maximum source separation from the angular resolution, effectively increasing the field of view by orders of magnitude while retaining its light-centroiding precision. To “point” an interferometer at a target of interest, a relative time delay τ can be applied offline, but any detector-dependent phase shift cannot point at two targets at once, since it will contribute to both terms in brackets on the LHS of eq:path_diff, thus leaving the RHS unchanged. A source-dependent phase shift has to be added in real time, in the telescope optics, to lengthen e.g. r_a1 and/or r_b2 without affecting r_b1 nor r_a2. We refer to intensity interferometry with this additional shift as “Extended-Path Intensity Correlation” (EPIC). Outside the context of optical astronomy, similar approaches have been proposed for gravitational-wave detection <cit.> and tests of quantum mechanics <cit.>. In EPIC, the light from both sources enters the same telescope aperture and is equally split into two paths of different lengths (with difference ℓ_p) before it is recombined into one beam in a Mach-Zehnder geometry (fig:optics). The probability for the light from source s to be detected by each telescope's photodetector p is the superposition of two possible amplitudes with respective path lengths: r_sp and r_sp + ℓ_p (s = a,b;   p = 1,2). 
There are 2^4 = 16 propagation path combinations contributing to the intensity correlator ⟨ I_1 I_2 ⟩, corresponding to the 4 independent possibilities in eq:path_extension. One possible fringe choice is the one where only light from a → 1 is extended by ℓ_1, and that of b → 2 by ℓ_2 (fig:basicsb), leading to a modification of eq:path_diff of the doubly-differential propagation path: [r_b1 + r_a2] - [(r_a1 + ℓ_1) + (r_b2 + ℓ_2) ] = θ_ba ·d - (ℓ_1 + ℓ_2) ≡δθ_ba. The path difference and δθ_ba≡ (θ_ba - θ_ba^ref) ·d̂ can thus be made arbitrarily small by adjusting the reference angle θ_ba^ref≡d̂ (ℓ_1 + ℓ_2) / d close to the true separation θ_ba. The fringe of eq:path_epic can be selected (i.e. the other fringes ignored) by picking the time delay τ equal to the optimal value τ^opt = - (θ̂_a + θ̂_b) ·d/2 + ℓ_2 - ℓ_1. The excess fractional intensity correlation in EPIC is: C(d,τ^opt) ≃1/4√(2) cσ_k σ_t {⟨I_a ⟩^2 + ⟨I_b ⟩^2/(⟨I_a ⟩+ ⟨I_b ⟩)^2 exp[-(δθ_ba)^2/2 σ_θ̂^2] + 2 ⟨I_a ⟩⟨I_b ⟩/(⟨I_a ⟩+ ⟨I_b ⟩)^2 cos[δθ_ba/σ_θ_res ] exp[-(δθ_ba)^2/2 σ_Δθ^2 ] }, with σ_θ_res, σ_Δθ, and σ_θ̂ from eq:theta_res, <ref>, and <ref>. We include effects from unequal source fluxes ⟨ I_a ⟩≠⟨ I_b ⟩, the overall fringe contrast suppression due to a timing resolution σ_t, and smearing over the bandwidth σ_k <cit.>. Equation <ref> shows that the angular dynamic range σ_Δθ is not enhanced, but that the path extensions create “ghost images” of the sources, as if they are only displaced by a small angle θ_ba - θ_ba^ref (fig:basicsc) near the main fringe. Since θ_ba^ref is known, the source separation can be measured via inversion of the map in eq:C_epic. §.§ EPIC Sensitivity and Maximum Separation We anticipate an EPIC program to develop in three Phases that would rapidly reach unprecedented light-centroiding precision on bright stars; the benchmark parameters and expected performance are listed in tab:phases. Fractional intensity correlations manifest as coincident photon detections. An optimal estimator for C (eq:C_epic) has variance σ_C^2 = t_obs / (√(4π)σ_t N_1 N_2) where N_p = t_obs⟨ I_p ⟩η_p A_p / (ħ c k) is the expected number of photons at telescope p per spectral channel centered on wavenumber k after an observation time t_obs, with efficiency η_p and aperture area A_p (which can be enhanced by n_arr telescopes per array site) <cit.>. The total SNR is the quadrature sum of C/σ_C over all channels, which can be logarithmically spaced by factors of e^2/ℛ (Methods <ref>). The SNR is halved for unpolarized light. Spectral resolutions of ℛ≥ 5,000 are standard with commercially available diffraction gratings <cit.>, while timing resolutions approaching σ_t ≲ 3 ps (30 ps) have been achieved with superconducting nanowire single photon detectors <cit.> (single photon avalanche diodes <cit.>). EPIC can perform high-precision measurements at source separations orders of magnitude larger than traditional intensity interferometry. Instead of being limited by spectral or timing resolution, the maximum source separation is now set by refractive phase errors from the turbulent atmosphere. These phase fluctuations become important at opening angles greater than the isoplanatic angle θ_0, of the order of a few arcseconds <cit.>, yielding a suppression in the correlation by a factor exp{-(θ_ba/θ_0)^5/3} in the second line of eq:C_epic (Methods <ref>) <cit.>. For separations of order the isoplanatic angle, the main EPIC fringe (δθ_ba≃ 0) is obtained with path extensions of ℓ_1 + ℓ_2 ≈ 4.8 cm (θ_ba·d) / (1 arcsec· 10 km). 
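The different angular scales introduced above can likewise be verified with a few lines of arithmetic (again our own sketch; the speed of light is written explicitly, whereas the text quotes σ_θ̂ in natural units). For λ = 500 nm, d = 10 km, ℛ = 5,000 and σ_t = 10 ps, this reproduces the quoted σ_Δθ ≈ 12 mas, σ_θ̂ ≈ 124 mas, and the few-centimeter path extension needed to sit on the main EPIC fringe at a 1 arcsec separation:

import numpy as np

c = 2.998e8                                   # speed of light [m/s]
RAD_TO_MAS = 180 / np.pi * 3600 * 1e3         # radians to milliarcseconds
ARCSEC = np.pi / (180 * 3600)                 # one arcsecond in radians

lam, d, R, sigma_t = 500e-9, 10e3, 5000, 10e-12
sigma_k = (2 * np.pi / lam) / R               # channel width in wavenumber [1/m]

sigma_Dtheta = np.sqrt(2) / (sigma_k * d)     # angular dynamic range [rad]
sigma_thetahat = 2 * c * sigma_t / d          # timing-limited scale [rad]
print(f"sigma_Dtheta   ~ {sigma_Dtheta * RAD_TO_MAS:.0f} mas")    # ~12 mas
print(f"sigma_thetahat ~ {sigma_thetahat * RAD_TO_MAS:.0f} mas")  # ~124 mas

# path extension placing a 1-arcsecond pair on the main fringe at d = 10 km
theta_ba = 1.0 * ARCSEC
print(f"l1 + l2 ~ {theta_ba * d * 1e2:.1f} cm")                   # ~4.8 cm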
§.§ Applications: Exoplanet Detection High-precision differential astrometry benefits many scientific applications <cit.>, including binary-orbit characterization <cit.>, gravitational microlensing of stars <cit.> and quasars <cit.>, galactic dynamics <cit.>, and orbits around Sagittarius A* <cit.>. Here, we focus on exoplanet detection to illustrate EPIC's capabilities. The gravitational pull of an orbiting exoplanet causes a small periodic wobble in its host star's position. Dozens of exoplanets have been discovered astrometrically with amplitude interferometers <cit.> and Gaia <cit.>; thousands more are expected soon <cit.>. The challenge is the small amplitude of the astrometric wobble: Δθ_h = (M_p/M_h)(a_p/D_h) ≈ 0.15 μ as for exoplanet mass M_p = M_⊕, host mass M_h = M_⊙, circular orbit's semimajor axis a_p = AU, and line-of-sight distance D_h = 20 pc. Gaia's final-mission wobble light-centroiding precision will be σ_Δθ_h≈ 7 μ as for typical nearby stars, limiting its sensitivity to massive exoplanets. EPIC can greatly increase the discovery potential for Earth-mass exoplanets around host stars with a nearby reference source—either in multiple-star systems or accidental doubles. The per-epoch light-centroiding precision of EPIC Phases {I, II, III} is σ_δθ≈{ 4.5, 0.29, 0.011} μ as for a pair of Sun-like stars at D_h = 20 pc (scaling as σ_δθ∝ D_h). After N_obs = 300 observations over 30 yr, wobble precisions of σ_Δθ_h = σ_δθ / √(N_obs) enable detection of Earth-Sun-like systems with a nearby reference star at distances up to 20 pc (400 pc) at 3σ with EPIC-II(III). The exoplanet parameter space accessible to EPIC astrometry (blue regions in fig:exoplanets) is complementary to that of other techniques. Transits (purple) and radial-velocity signatures (orange) are most sensitive to exoplanets at small semimajor axes, while direct imaging (red) favors large planets far away from their host star. Microlensing (green) due to chance alignments of exoplanetary systems with background stars can lead to detection of very low-mass systems but rapidly loses sensitivity for small orbits. In regions where EPIC shares sensitivity with other techniques, the respective observational biases would be different, aiding population synthesis analyses over a wider range of systems <cit.>. §.§ Conclusion Intensity interferometry holds the promise of exceptional angular resolution on bright sources, but has been hampered by its narrow FOV in its uses for differential astrometry. By introducing variable, source-dependent path extensions, EPIC enlarges the observable source separation to the maximum allowed by atmospheric disturbances. Combined with advances in spectroscopy and fast single-photon detection, EPIC's differential light-centroiding performance will facilitate new exoplanet discoveries and unlock many other scientific applications benefiting from narrow-angle astrometry. §.§.§ Acknowledgments We thank Gordon Baym, Megan Bedell, Karl Berggren, Michael Blanton, Matteo Cantiello, Calvin Chen, Cyril Creque-Sarbinowski, Liang Dai, Neal Dalal, Julianne Dalcanton, David Dunsky, Peter Graham, David Hogg, Marius Kongsore, Miguel Morales, Oren Slone, and David Spergel for valuable conversations and input. KVT thanks Jason Aufdenberg, Matthew Brown, James Buckley, Dainis Dravins, David Kieda, Michael Lisa, Nolan Matthews, Andrei Nomerotski, Ue-Li Pen, Naomi Vogel, Shiang-Yu Wang, and Luca Zampieri for fruitful conversations, and Sebastian Karl for pointing out Ref. 
<cit.>, during the 2023 Workshop on Stellar Intensity Interferometry at The Ohio State University. KVT is supported by the National Science Foundation under Grant PHY-2210551. MB is supported by the DOE Office of Science under Award Number DE- SC0022348, and the Royal Research Fund, the Department of Physics, and the College of Arts and Science at the University of Washington. NW is supported by NSF under award PHY-1915409, by the BSF under grant 2018140, and by the Simons Foundation. The Center for Computational Astrophysics at the Flatiron Institute is supported by the Simons Foundation. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Colleges and Universities. MB, MG, and KVT thank the Institute for Nuclear Theory at the University of Washington for its kind hospitality and stimulating research environment. The INT is supported in part by the U.S. Department of Energy grant No. DE-FG02-00ER41132. This work was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611 and PHY-2210452. The participation of MB at the Aspen Center for Physics was supported by the Simons Foundation. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. Methods § OBSERVATIONAL PROCEDURE EPIC is ideally suited for differential astrometry on relatively bright sources with apparent magnitude m ≲ 15, especially with Phases II and III. The reference sources in tab:phases, a pair of Sun-like stars at 100 pc, have m ∼ 10. They are roughly at the limiting magnitude of Phase I, which just about reaches a single-epoch SNR of order unity, as σ_δθ≈σ_θ_res after t_obs = 10^4 s. Statistically, most candidate source pairs on which EPIC can be applied will be separated by an angle of order the maximal one, the isoplanatic angle. This is significantly larger than the seeing angle of ground-based observatories and the diffraction limit of space-based telescopes such as HST, JWST, and Gaia, thus allowing identification and characterization prior to EPIC observations. For the bright source pairs under consideration, Gaia will be able to provide astrometry with 𝒪(20 μ as) accuracy across the full sky. For the first EPIC observations, one would choose path extensions ℓ_1(t)+ℓ_2(t)=θ_ba^ref(t)·d(t) for a reference angle close to the Gaia value and commensurate with the time dependence of Earth's rotation and the relative proper motion and parallax of the sources. For Phase I, Gaia's accuracy should be sufficient to place the source on the primary EPIC fringe, as the optimal baseline corresponds to an angular resolution worse than 20 μ as (cfr. tab:phases and Methods <ref>). In Phases II & III, there may initially be fringe confusion, but with sufficient SNR across different spectral channels and at different times, this degeneracy can be broken. Subsequent EPIC observations can then use the updated light centroid in the adjustment of θ_ba^ref <cit.>. For practical simplicity and computational efficiency, we envision observations broken into a series of short intervals of t_obs∼ 10^2 s, discretely varying ℓ_1(t)+ℓ_2(t) and computing the intensity correlation for each spectral channel and time interval individually. 
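Before turning to how the binned correlations are fit, it is useful to put a number on the exoplanet signal that motivates these requirements. The wobble amplitude quoted in the main text, Δθ_h ≈ 0.15 μas for an Earth-Sun analogue at 20 pc, follows from a one-line calculation (a back-of-the-envelope check, not the authors' code; the Earth-to-Sun mass ratio is inserted by hand):

mass_ratio = 3.0e-6        # M_p / M_h for an Earth-mass planet around a Sun-like star
a_p_au, D_h_pc = 1.0, 20.0 # semimajor axis [AU], host distance [pc]

# a[AU] / D[pc] is the subtended angle in arcseconds, by definition of the parsec
wobble_muas = mass_ratio * (a_p_au / D_h_pc) * 1e6
print(f"host-star wobble ~ {wobble_muas:.2f} muas")   # ~0.15 muas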
For each generalized bin of wavenumber k and time interval t, the correlation C may be below statistical noise levels. However, a global fit to these binned data can extract δθ_ba = (θ_ba-θ_ba^ref) ·d̂ through the wavenumber and time dependence as the argument of the correlator in eq:C_epic gradually transits across the interference pattern, with a precision given by eq:sigma_channel and <ref> (Methods <ref>). To achieve the target light-centroiding precision, the path extensions ℓ_1,2 need to be measured and controlled at the sub-wavelength level; we perform a detailed study of tolerances and aberrations in follow-up work <cit.>. § LIGHT-CENTROIDING PRECISION The differential light-centroiding precision σ_δθ depends on the spectrum, surface brightness, and angular size of the sources a and b. The fiducial angular resolution of eq:theta_res can be made arbitrarily small by taking d →∞, but light-centroiding precision suffers in this limit because fringe contrast is lost due to form factor suppression of the sources' finite angular sizes. Here, we outline the calculation of the optimal baseline and resolution, and of the resulting light-centroiding precision used in tab:phases and fig:exoplanets. We model stars, the source targets of primary interest, as circular disks of uniform temperature T_s and angular radius θ_s = R_s / D_s where R_s is the physical radius and D_s the line-of-sight distance of the star. The mean light intensity in a spectral channel centered at k with Gaussian standard deviation σ_k = k / ℛ is then: ⟨I_s ⟩= ħc^2/(2π)^3/2 σ_k k^3 θ_s^2/(e^(ħc k / k_B T_s) - 1), with k_B the Boltzmann constant. The finite angular size of the source is taken into account by accompanying every factor of ⟨ I_s ⟩ in the numerator of eq:C_epic with a form factor ℱ_s(y) ≡ 2 J_1(y)/y with y ≡ (θ_s/σ_θ_res), i.e. the 2D Fourier transform of a uniform disk at angular wavenumber kd. The SNR on the intensity correlation in a single spectral channel is C/σ_C for polarized light, and C/2σ_C for unpolarized light. If one can disambiguate the fringe number of θ_ba (Methods <ref>), the per-channel light-centroiding precision becomes σ^(1)_δθ = (1/(2σ_C) ∂C/∂(δθ_ba))^-1 = [σ_θ_res σ_C/|sin(δθ_ba/σ_θ_res)|] · [4√(2) c σ_k σ_t/(ℱ_aℱ_b)] · [(⟨I_a ⟩+ ⟨I_b ⟩)^2/(⟨I_a ⟩⟨I_b ⟩)] by standard error propagation. The light-centroiding precision from the combination over all spectral channels labeled by k, with n_arr detectors per array site, is the inverse quadrature sum of eq:sigma_channel: σ_δθ = 1/n_arr [ ∑_k (σ_δθ^(1))^-2 ]^-1/2 ≃2^13/2 π^5/4 ħ^3 c^3/k_B^3 1/(n_arr A d) √(σ_t/(η^2 t_obs) · 1/ℛ) 1/(T_s^3 θ_s^2) ℐ^-1/2, where the sum runs over all spectral channels centered on wavenumbers k = (2 π / λ_max) e^2m/ℛ with m = 0, 1, …, ⌊ (ℛ/2) ln(λ_max/λ_min) ⌋ with minimum and maximum wavelengths, assumed to be λ_min = 300 nm and λ_max = 1,000 nm in tab:phases and fig:exoplanets. In the second line, we have evaluated and approximated this sum with an integral to give the parametric dependence on telescope properties (second fraction), detection specifications (square root), and source parameters (fourth fraction). The telescopes and detectors are assumed to be the same at both sites (η = η_1 = η_2, A = π D^2 / 4 = A_1 = A_2, etc.); likewise for the sources s = a,b, with identical T_s and θ_s. The final factor is the (inverse square root of the) dimensionless integral: ℐ ≡∫_x_min^x_max dx x^5/(e^x-1)^2 [ℱ_s(x k_B T_s θ_s d/(ħ c) )]^4, with x_min = 2πħ c / k_B T_s λ_min and similar for x_max.
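To give a feel for the two ingredients entering this channel sum, the sketch below (illustrative only; it uses scipy's Bessel function and round astronomical constants, none of which come from the paper) evaluates the uniform-disk form factor ℱ_s for a Sun-like star at 100 pc at a few baselines, and counts the logarithmically spaced channels for ℛ = 5,000 between 300 and 1,000 nm. The rapid fall-off of ℱ_s with baseline is what drives the optimal d^opt discussed next.

import numpy as np
from scipy.special import j1

R_sun, pc = 6.957e8, 3.086e16                 # [m]
theta_s = R_sun / (100 * pc)                  # angular radius of a Sun-like star at 100 pc [rad]
k = 2 * np.pi / 500e-9                        # wavenumber at 500 nm [1/m]

for d in (0.1e3, 0.7e3, 3.0e3):               # baselines [m]
    y = k * d * theta_s                       # theta_s / sigma_theta_res
    print(f"d = {d/1e3:.1f} km  ->  F_s = {2 * j1(y) / y:.2f}")   # ~0.99, 0.58, 0.06

# number of log-spaced spectral channels k = (2 pi / lam_max) exp(2m / R)
R, lam_min, lam_max = 5000, 300e-9, 1000e-9
print(int(np.floor(R / 2 * np.log(lam_max / lam_min))) + 1, "channels")  # ~3010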
The suppression of ℱ_s and thus ℐ at large d is why there is an optimal baseline d^opt for differential light-centroiding. This optimal value depends on λ_min, λ_max, T_s, and θ_s, but is roughly that for which σ_θ_res∼θ_s in the most sensitive spectral channel. For a pair of Sun-like stars (T_s = 6,000 K, R_s = R_⊙), this optimal baseline is d^opt ≈0.71 km (D_s/100 pc) for the assumed spectral range. For hotter stars even more suitable to EPIC, d^opt would be larger and σ_θ_res better (at fixed intensity). Equation <ref> sets the fiducial resolution and the other resulting angular scales in tab:phases and fig:exoplanets. § ATMOSPHERIC NOISE One of the main advantages of traditional intensity interferometry, preserved by EPIC, is that the differential light-centroiding precision σ_δθ is impervious to atmospheric aberrations for small source separations θ_ba. Any fluctuation in the index of refraction n[x] will be common to a → p and b → p for any p separately, and will not contribute to the doubly-differential phase in the second line of eq:C_epic for the same reason that a common extension/delay will not alter eq:path_epic. We write the atmospheric phase fluctuation as ϕ_sp = k ∫_0^r_sp l n[x_sp(l)] with the mean refraction subtracted out: ⟨ n[x] ⟩ = 0  ∀ x and thus ⟨ϕ_sp⟩ = 0. Equation <ref> is a line-of-sight integral over the p → s path, namely x_sp(l) ≡r_p + l θ̂_s. At optical wavenumbers k, the atmospheric phase variance is enormous: ⟨ϕ_sp^2 ⟩≫ 10^4. Because of the small spatial coherence of the fluctuating index of refraction in the turbulent atmosphere, any intensity interferometric scheme where the light from a → p and b → p reach the same photodetector through separate apertures will not enjoy phase cancellation, thus erasing all fringe contrast. Between the inner scale l_0 ∼ 1 mm and the outer scale L_0 ∼ 10 m, spatial fluctuations in the refractive index are statistically quantified by the structure function: ⟨(n[x + r] - n[x] )^2 ⟩= C_n^2[x] r^2/3, valid for l_0 ≪ r ≪ L_0 and an overestimate elsewhere. The “constant” C_n is only weakly dependent on position, and is mostly a (decreasing) function of altitude, with C_n ∼𝒪(10^-8 m^-1/3) at 1 km and 𝒪(10^-9 m^-1/3) at 10 km. The differential atmospheric phase variance σ_ϕ,p^2 ≡⟨(ϕ_bp - ϕ_ap )^2 ⟩= (θ_ba/θ_0,p)^5/3 between the wavefronts of a and b from a single vantage point p can be small, as long as the source separation θ_ba is much smaller than the isoplanatic angle: θ_0,p ≡[2.9 k^2 ∫_0^r_sp l C_n^2[x_sp(l)] l^5/3 ]^-3/5, itself a smoothly varying function of the position of p and the sources' angle from zenith. A further calculation <cit.> reveals that the fringe in the second line of eq:C_epic is suppressed by the factor exp{ - (σ_ϕ,1^2 + σ_ϕ,2^2)/2 }. Intensity correlation fringe contrast remains essentially unaltered for sources within the same isoplanatic patch. This analysis shows that while a source-dependent path extension could be introduced with a double aperture and a beam recombiner <cit.>, such a setup would be susceptible to severe refractive phases from the turbulent atmosphere, negating one of the core advantages of intensity interferometry. To avoid atmospheric phase noise, it is imperative that the beams from both sources traverse the same air column down to millimeter accuracy. 
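For orientation, the severity of this isoplanatic suppression can be tabulated directly from the quoted factor exp[-(θ_ba/θ_0)^(5/3)] (taking θ_0,1 = θ_0,2 = θ_0 for simplicity; a two-line evaluation of the formula above, not the authors' code), which shows how quickly fringe contrast is lost once the separation exceeds the isoplanatic angle:

import numpy as np

for x in (0.25, 0.5, 1.0, 2.0):               # theta_ba / theta_0
    print(f"theta_ba = {x:.2f} theta_0  ->  contrast factor {np.exp(-x**(5/3)):.2f}")
# -> 0.91, 0.73, 0.37, 0.04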
The need for a “nearly common beam” necessitates the beam splitting of fig:basics(b) for wide-angle astrometry with EPIC, but makes possible exceptional differential astrometric measurements from ground-based observatories despite potentially poor atmospheric conditions. § AUTHOR CONTRIBUTIONS KVT conceived the experimental setup and produced the figures. KVT, MB, MG, and NW worked out all theoretical and practical aspects of the technique and its scientific applications. All authors contributed to the manuscript. § DATA AVAILABILITY All data and code are available from the authors upon reasonable request. § COMPETING INTERESTS The authors declare no competing interests.
http://arxiv.org/abs/2307.01984v1
20230705020014
The KiTS21 Challenge: Automatic segmentation of kidneys, renal tumors, and renal cysts in corticomedullary-phase CT
[ "Nicholas Heller", "Fabian Isensee", "Dasha Trofimova", "Resha Tejpaul", "Zhongchen Zhao", "Huai Chen", "Lisheng Wang", "Alex Golts", "Daniel Khapun", "Daniel Shats", "Yoel Shoshan", "Flora Gilboa-Solomon", "Yasmeen George", "Xi Yang", "Jianpeng Zhang", "Jing Zhang", "Yong Xia", "Mengran Wu", "Zhiyang Liu", "Ed Walczak", "Sean McSweeney", "Ranveer Vasdev", "Chris Hornung", "Rafat Solaiman", "Jamee Schoephoerster", "Bailey Abernathy", "David Wu", "Safa Abdulkadir", "Ben Byun", "Justice Spriggs", "Griffin Struyk", "Alexandra Austin", "Ben Simpson", "Michael Hagstrom", "Sierra Virnig", "John French", "Nitin Venkatesh", "Sarah Chan", "Keenan Moore", "Anna Jacobsen", "Susan Austin", "Mark Austin", "Subodh Regmi", "Nikolaos Papanikolopoulos", "Christopher Weight" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
This paper presents the challenge report for the 2021 Kidney and Kidney Tumor Segmentation Challenge (KiTS21) held in conjunction with the 2021 international conference on Medical Image Computing and Computer Assisted Interventions (MICCAI). KiTS21 is a sequel to its first edition in 2019, and it features a variety of innovations in how the challenge was designed, in addition to a larger dataset. A novel annotation method was used to collect three separate annotations for each region of interest, and these annotations were performed in a fully transparent setting using a web-based annotation tool. Further, the KiTS21 test set was collected from an outside institution, challenging participants to develop methods that generalize well to new populations. Nonetheless, the top-performing teams achieved a significant improvement over the state of the art set in 2019, and this performance is shown to inch ever closer to human-level performance. An in-depth meta-analysis is presented describing which methods were used and how they fared on the leaderboard, as well as the characteristics of which cases generally saw good performance, and which did not. Overall, KiTS21 facilitated a significant advancement in the state of the art in kidney tumor segmentation, and provides useful insights that are applicable to the field of semantic segmentation as a whole. § INTRODUCTION §.§ Kidney Tumor Background With the utilization of cross-sectional imaging now as high as it's ever been, kidney tumors are now most often discovered incidentally, rather than on the basis of symptoms <cit.>. There is growing evidence that large numbers of renal tumors are either benign or indolent, especially when they are small and incidentally discovered, and they might therefore be best managed with surveillance rather than intervention <cit.>. However, metastatic renal cancer remains highly lethal, so the rare instances in which a small and/or incidentally-discovered renal tumor progresses to metastatic disease are unacceptable, and their risk must be weighed against overtreatment and its associated cost and morbidity <cit.>. Some argue that renal mass biopsy has the potential to resolve this treatment decision dilemma, but others argue that its relative lack of sensitivity hinders its ability to convince physicians and patients that their disease won't progress <cit.>, and ultimately, its utilization remains relatively low <cit.>. There remains a significant unmet need for tools to reliably differentiate between benign/indolent renal tumors and those with metastatic potential. §.§ Kidney Tumor Radiomics Increasingly, the so-called “radiome” is revealing itself as a powerful quantitative predictor of clinically-meaningful outcomes in cancer <cit.>. In renal tumors, radiomic features have shown exciting potential for predicting histologic subtype <cit.>, nuclear grade <cit.>, somatic tumor mutations <cit.>, and even cancer-specific and overall survival <cit.>. In surgical oncology, a number of “nephrometry” scores such as R.E.N.A.L. <cit.> and PADUA <cit.> have been developed which synthesize various manually-extracted radiomic features to produce scores which have been shown to robustly correlate with perioperative and oncologic outcomes. Of these, the R.E.N.A.L.
score has recently been approximated in terms of segmentation-based radiomic features, and was shown to be noninferior to human-derived scores at predicting clinical outcomes <cit.>, and the others are sure to follow in short order. At the heart of most radiomics approaches is the need for the spatial delineation of which structures occupy what space in a given image. As one might imagine, manually delineating each region of interest is a very time consuming activity, and is subject to significant interobserver variability <cit.> to which, radiomics algorithms are sometimes quite sensitive <cit.>. There is thus significant interest in developing highly-accurate automatic methods for semantic segmentation. Modern deep learning approaches have achieved impressive performance on a wide variety of semantic segmentation tasks <cit.>, but their need for large and high-quality training datasets has hindered their development in problems such as kidney cancer for which little annotated data is publicly available. Further, the development of deep learning algorithms requires a huge number of design decisions, not only about their structure but also about procedures during training and validation. There remains little consensus in the computer vision community about which algorithms are truly optimal for a given semantic segmentation task. §.§ The KiTS21 Challenge Machine learning competitions, or “challenges” as they are often called, have become a mainstay in the medical image analysis research community <cit.>. In a challenge, a central organizing team takes responsibility for defining a clinically important problem and collecting a large labeled dataset. They then split this dataset into a public “training set” which is disseminated to the larger research community, and a “test set” which is kept secret. Teams are invited to train their favorite machine learning models on the training set, and the organizing team is responsible for measuring and ranking how well these models perform on the secret test set. Challenges serve as an excellent model for interdisciplinary collaboration: The organizing team, which is ostensibly most interested in the domain-specific nuances of the clinical problem, benefits from top research groups from around the world turning their attention to their problem and proposing solutions. And the participants, who are ostensibly most interested in machine learning methodology, benefit from a new and high quality dataset carefully tailored to a clinically meaningful problem by domain experts. In a way, these challenges play a role analogous to that played by “model organisms" and “cell lines" in the biological sciences – that is, they allow researchers to make the sort of head to head comparisons that would otherwise be impossible if everyone was working only on their own private datasets – or their own private breed of organism, or their own private line of immortalized cells. One of the first machine learning competitions to use this format was the Critical Assessment of protein Structure Prediction or “CASP” competition which has been followed by a sequel CASP event every even-numbered year since 1994 <cit.>. CASP is in its 15th iteration at the time of writing, and it has served as an invaluable resource to the protein structure prediction community over the last 30 years as it has progressed through several generations of computational biochemistry <cit.>, with the latest being dominated by deep learning methods such as DeepMind's "AlphaFold" <cit.>. 
One might wonder whether DeepMind would have shown such strong interest in this important problem if it had not been so carefully and painstakingly curated into a challenge format as it was by the organizers of CASP. It is the authors' strong belief that the same applies to most challenges: they attract interest and attention to their chosen problem by individuals and research groups that otherwise might never have spent time on them. Many excellent challenges have been organized in the medical image analysis community over the last two decades, and the semantic segmentation of cross-sectional images is one of the most popular subjects <cit.>. These segmentation challenges have asked participants to segment things such as specific anatomical structures like bones and organs <cit.>, organs at risk in radiation therapy planning <cit.>, and, like KiTS, lesions and the organs they affect <cit.>. Of particular interest is the recent QUBIQ challenge [<https://qubiq21.grand-challenge.org/>] which provided participants with multiple independent annotations per region of interest. Here, participants were asked to train a model not only to segment the region accurately, but also to estimate the pixelwise uncertainty in their segmentations in the hope that model uncertainty would be highest in the areas where different annotators disagreed. The challenge described in this report, “The 2021 Kidney Tumor Segmentation Challenge” or “KiTS21” is the second challenge in the “KiTS” series after its 2019 iteration, “KiTS19” <cit.>. KiTS19 represented the first large-scale publicly available dataset of kidney tumor Computed Tomography (CT) images with associated semantic segmentations. It attracted submissions from more than 100 teams from around the world, and saw the winning team surpass the previous state of the art in kidney tumor segmentation, while nearly matching human-level performance in segmenting the affected kidneys. KiTS21 builds upon KiTS19, but differs from it in several important ways, enumerated below: * Data annotation was performed in public view * Like QUBIQ, multiple annotations were released per ROI * Renal cysts were segmented as an independent class * The test set images came from a separate institution in a different geographical area * Teams were required to submit a paper summarizing their method for review and approval before participating The following section will explain each of these new design features in detail, while also providing an in-depth description of the cohort. The remainder of this report proceeds as follows: Sec <ref> describes the KiTS21 dataset and annotation process. In section <ref>, the results of the KiTS21 challenge are discussed, including a statistical analysis of the leaderboard and the methods used by the 3 highest-performing teams. Section <ref> concludes with the a discussion of the lessons learned from organizing this challenge and possible future directions for KiTS. § MATERIALS AND METHODS §.§ The KiTS21 Dataset The dataset used for KiTS21 consisted of two distinct cohorts for training and test sets collected at separate time points for different purposes. Ultimately, they were both annotated with segmentation labels in a single unified effort, but the processes to identify the patients and collect their images are described in separate sections below. 
§.§.§ Training Set Collection After receiving approval from the University of Minnesota institutional review board (study 1611M00821), a query was designed to identify patients from the institution's electronic medical record system who met the following criteria: * Underwent partial or radical nephrectomy between January 1, 2011 and June 15, 2019 * Were diagnosed with a renal mass prior to the nephrectomy * Underwent a CT scan within the 80 days prior to their nephrectomy This returned a collection of 962 patients. Manual chart review of each of these 962 patients was used to identify those who specifically underwent nephrectomy for fear of renal malignancy. The resulting 799 patients were reviewed in random order to identify those who had a CT scan available which showed the entirety of all kidneys and kidney tumors, and were in the corticomedullary contrast phase. After reviewing 544 cases in this way, 300 were identified for use as the training set for this study. It is important to note that these 300 cases were the same 300 that were split between the training (210) and test (90) sets for the KiTS19 challenge, <cit.>. §.§.§ Test Set Collection The test set used for the KiTS21 challenge consisted of 100 cases of patients who had been treated with partial or radical nephrectomy for fear of renal malignancy at the Cleveland Clinic. Preoperative CT images were obtained from all patients for whom they were available, and these patients were reviewed in random order until 100 patients with a scan in the corticomedullary phase were identified for use as the test set of the KiTS21 challenge. §.§.§ Overall Dataset Characteristics The characteristics of the patients comprising the training and test sets can be found in table <ref>. Of note is a stark gender imbalance in which men outnumber women by roughly a 2:1 ratio. This imbalance is consistent between the training and test sets, and is, in fact, a well established phenomenon in the epidemiology of renal cell carcinoma <cit.>. The median age of patients in the training set was 60 years, and the median age of patients in the test set was 63 years. The median BMI of patients in the training set was 29.82 kg/m^2, and the median BMI of patients in the test set was 29.7 kg/m^2. The median tumor diameter in the training set was 4.2 cm, and the median tumor diameter in the test set was 3.8 cm. While the patients used for the training set were all treated at a single academic health center, the preoperative scans used in this dataset were often captured at a variety of community hospitals and clinics prior to referral. This endows the dataset with significant heterogeneity in terms of which scanners were used and with what protocol. A map depicting the geographic locations of all of the scanning institutions represented in this dataset is shown in figure <ref>. §.§ Data Annotation Process Based on the prior experience and feedback collected during and after the KiTS19 challenge, KiTS21 features a unique purpose-built annotation process. §.§.§ Public Annotation Platform There has recently been some discussion about the need for greater clarity about how annotations are produced for medical image analysis research <cit.>. All too often, papers simply report which structures were segmented along with the credentials of the researcher or group of researchers who supervised and or carried out the annotations. 
This approach fails to capture important nuances about the annotation process such as how the regions of interest were specifically defined, what tool was used to produce the annotations themselves, and specific instructions that were given to the annotators, if any, regarding uncertainty and quality control. This information is crucial for making informed and fair comparisons regarding the performance of models on a given task. In an attempt to provide as much clarity as possible regarding the annotation process, the training set was annotated in such a way that any member of the public could view the annotation process as it took place. A website was developed[<https://kits21.kits-challenge.org>] which offered a dashboard showing every training case and its status in the annotation process. For an example of this, see Fig. <ref>. Each region of interest is denoted with a set of clickable icons that the user can use to view the annotations it represents in the same tool that the annotators used to produce them[<https://github.com/SenteraLLC/ulabel>]. Further still, the exact set of instructions provided to the group of annotators is documented in a webpage that is available for anyone to view[<https://kits21.kits-challenge.org/instructions>]. §.§.§ Multiple Annotations per Region of Interest No semantic segmentation dataset is exactly correct, nor will one ever be. For much the same reason that surgeons will excise some `margin' of healthy tissue with a tumor, a radiologist cannot always identify the exact extent of a tumor's border with 100% certainty. This issue is further complicated by artifacts such as partial volume averaging in cross-sectional imaging. On top of genuine uncertainty, there is a second factor which contributes to error in semantic segmentation datasets: mistakes. This is akin to `coloring outside the lines' in a coloring book. When one is asked to precisely delineate hundreds of structures with dozens of axial frames to delineate per structure, they can quickly grow tired and the extent to which their delineations match their intentions will degrade. The KiTS21 challenge aimed to explicitly address this latter issue of delineation mistakes. To do this, the annotation process consisted of three distinct phases: * Localization: A medical trainee places a 3d bounding box around the ROI * Guidance: A medical trainee places a small number of t-shaped pins along the intended delineation path surrounding the ROI in some sample of axial slices (see Fig. <ref>) * Delineation: A layperson (e.g., crowd worker) uses the localization and guidance to produce a delineation that matches the trainee's intentions In the above paradigm, localization and guidance were performed once for each region of interest, reviewed and refined by an expert if needed, and then three separate delineations were collected from laypeople. This allowed for quantifying and controlling for mistakes by collecting three independent delineations which were all guided by the same annotation intent. §.§.§ Changes To Intended Segmentation Classes and Dataset Size One might notice the presence of three additional ROI types referenced in Fig. <ref>: `Ureter', `Artery', and `Vein'. The original intention when planning this challenge was to segment these structures in the same way as the `Kidney', `Tumor', and `Cyst' structures that were ultimately included in the dataset. 
Regrettably, during the annotation process it was discovered that it would not be feasible to ensure that the segmentation labels for all six region types would be of sufficiently quality before the training set release deadline, especially with the inordinate amount of time that these complex structures took to correct and refine. The annotation team thus decided to prioritize the quality of the kidney, tumor, and cyst regions, while leaving the ureter, artery, and vein regions for future work. §.§ Challenge Design Decisions §.§.§ Use of a Separate Institution for Test Set One commen criticism of single-institutional cohorts in medical image analysis is the possibility that any model trained only on that cohort might show inflated performance when validated on that cohort, as compared to its true performance on some random collection of images sampled from the true distribution of images that it is meant to be applied to. It is therefore generally recommended that data from a separate institution should be used for external validation <cit.>. As described in section <ref>, the KiTS21 training set is unique in that while all nephrectomy procedures took place at a single institution, patients most commonly underwent imaging at a different institution prior to being referred to the Fairview University of Minnesota Medical Center for treatment. One might therefore argue that the KiTS19 dataset already represents sufficient diversity in imaging institutions to prevent overfitting to site-specific characteristics. Nonetheless, it is much more convincing to use a properly separate cohort for the test set, and for that reason, the KiTS21 test set was built on a cohort of patients treated at a separate institution: the Cleveland Clinic. §.§.§ Peer Review Requirement One of the most important contributions a challenge can make to the research community is to elucidate which approaches work best for a particular problem. This depends on challenge participants taking the time to document their approach in a detailed publication. Unfortunately, the reports produced to accompany challenge submissions are often woefully lacking in detail and clarity, and this severely hinders a challenge's impact <cit.>. In an attempt to prevent this, KiTS21 instituted a policy that short papers accompanying submissions to the challenge would undergo a peer review process in which they would be reviewed for clarity and completeness. After the challenge, these papers were then published as MICCAI satellite event proceedings, similar to a typical workshop. A template paper[<https://www.overleaf.com/read/nfbqmtkcyzdp>] was provided to teams to help guide them in what information was expected to be provided, and what a typical structure might look like. Teams were required to submit this paper at least a week before test set submissions opened, and their submissions would not be considered until the paper had been approved. Ultimately 27 out of 28 papers submitted were approved, but most had to undergo a round of revisions with repeat review. §.§.§ Metrics and Ranking In KiTS19, a simple average Sørensen-Dice metric was used to rank teams. Sørensen-Dice is an attractive option because it is very widely-used and well-understood by the community. 
However, recent research has revealed some of its shortcomings <cit.>, such as its agnosticism to how many objects were `detected' when multiple objects exist within the same image – resulting in a case where larger objects are given much more weight, when in reality, a smaller tumor on the contralateral kidney, for example, might be even more important to detect. To address this, the Sørensen-Dice metric was supplemented with the Surface Dice metric, as described in <cit.>. Another factor taken under consideration for KiTS21 was the natural hierarchical relationship between the target segmentation classes. Masses (i.e., tumors and cysts) are naturally part of the overall kidney region, and tumors are naturally a subset of a particular patient's collection of masses. Further, since the most difficult tasks for models to learn in this problem are (1) differentiating between tumors and cysts, and (2) determining the boundary between masses and healthy kidney, the problem was framed in terms of what we call `Hierarchical Evaluation Classes' (HECs), where the first class is the union of all regions, the second class is the union of the tumor and cyst regions, and the third class is the tumor alone. This prevents penalizing a model twice for mislabeling a tumor as a cyst, or part of a mass as healthy kidney, etc. Since three independent annotations per region of interest were collected, each case consisted of 3^N possible composite segmentation masks, where N is the number of regions of interest for that case. N ranged from 3, in the simplest case (two kidneys and a tumor), to well over 10 in some cases with several cysts and tumors. At test time, a random sample of these composite segmentations for each case was used for evaluation. Ultimately, both the Sørensen-Dice and the Surface Dice were computed for each HEC of each randomly sampled composite segmentation in the test set. Average total scores were computed for each metric, and then a rank-then-aggregate approach was used to determine the final leaderboard rankings. In the case of a tie, the average Sørensen-Dice score on the tumor region was used as a tiebreaker. §.§.§ Incentive and Prize Every team which had their manuscript approved for publication in the KiTS21 proceedings and made a valid submission to the challenge was invited to present their work in a short format at the KiTS21 session of the 2021 MICCAI conference. Teams that placed in the top 5 were given the option to give a longer presentation, and were also invited to participate in the challenge report as coauthors. The first place team was also awarded a cash prize of $5,000 USD. §.§.§ Changes to Intended Submission Procedure The testing phase of machine learning competitions generally proceeds in one of two ways. In the first way, teams are given access to the images (but not labels) of the test set for a limited period of time, during which they download the data, run inference on it with the model that they have developed, and send their predictions back to the organizers for evaluation. This was the method used by KiTS19. The second approach is to ask teams to package their model in such a way that it can be sent to the organizers and run on the test set on a private server, thereby preventing the participating team from ever having direct access to the test set images.
The second approach is generally thought to be preferable, since it eliminates all possibility that a team might `cheat' by manually intervening in the predictions made by their model to improve them unfairly. The downside to this approach, however, is that teams are limited to the computational resources made available to them by the organizers, which depending on the level of funding, might be quite limited. The sorts of large ensembles of models that often win machine learning competitions require significant time and resources to run, and might not be feasible in a challenge using the latter approach. Further still, the latter approach adds significant complexity to the tasks of both the organizers and the participants, with both parties having to build and maintain systems for these inference tasks. Containerization solutions such as Docker have made tasks like this easier, but they come with a learning curve, and not everyone in the research community has extensive experience with them. KiTS21 originally planned to ask teams to prepare Docker containers in which to submit their models for fully private evaluation. Regrettably, soon into the submission period it became clear that this would not be practical with the resources available. More than half of the teams who submitted their containers exhausted either the time or memory constraints imposed by the cloud-based submission system that had graciously been made available by the <https://grand-challenge.org> platform. Ultimately the responsibility for this unfortunate debacle lies with the organizers for failing to clearly communicate the resource limitations to participants. Since teams had already invested considerable effort in solutions that exceeded the resource limitations, it was decided the most fair thing to do was to pivot to the former approach in which teams were provided with a limited 48 hour window during which to download the test imaging, run inference on their own computing hardware, and return their predictions to the organizers. § RESULTS §.§ Performance and Ranking Overall, 29 teams submitted papers to the KiTS21 challenge proceedings, and in-so-doing, registered their intention to submit predictions. One of these teams was not able to be reached after submitting their paper, and so was excluded. Another signed up to receive a copy of the test set images, but did not return predictions, and so was also excluded. A further team withdrew prior to receiving the test set images. This left 26 teams who submitted their predictions to the challenge. Once the results were announced, a further one team asked to have their paper retracted and their results removed from the leaderboard. All told, 25 teams were included in the final leaderboard with corresponding manuscripts. The top-5 teams were invited to give long-form oral presentations at the MICCAI KiTS21 workshop. The remaining 20 teams were given the option to give a shorter talk, and 9 of them accepted. Shown in figure <ref> is a series of box plots for each team's tumor segmentation performance on the 100 test set cases. For reference, the inter-rater agreement in tumor segmentation for this task was previously shown to be 0.88 <cit.>, whereas the top-5 teams achieved values of 0.86, 0.83, 0.83, 0.82, and 0.81 respectively. The top-9 teams all appear to have very similar performance profiles with a tight cluster around 0.8 and then a low-density uniform distribution of scores on one or two dozen cases. 
Interestingly, the top 5 teams had no "complete misses" among them, with a nonzero dice score on all 100 cases. This stands in contrast to the KiTS19 leaderboard, in which every team missed at least one tumor completely. The variety in predictions for a single case can be qualitatively examined by plotting the sum of those predictions as a heatmap. This is what is shown in figure <ref>. Clearly, the vast majority of teams concentrated their predictions on the correct region of the kidney. However, there is significant variation in the exact delineation of the boundary between the lesion and the kidney. This is consistent with expert opinion that this tumor-kidney delineation is much more difficult than simply detecting the tumor itself. That said, a precise and accurate boundary delineation is very important for downstream applications of these segmentations, such as surgical planning, where removing too much healthy parenchyma could lead to poor renal functional outcomes, while removing too little tumor could lead to positive surgical margins and a greater risk of avoidable recurrence at the primary site. The final leaderboard ranking was determined with a rank-then-aggregate procedure using the respective means across HECs of the two chosen varieties of dice scores. A static final ranking is, of course, necessary in order to award prizes and to determine which teams are invited to present at the MICCAI KiTS21 workshop. However, it is important to note that the final ranking does not necessarily represent an unimpeachable truth about which teams are better than others. The test set has a finite size, and so inferential statistics must be used to make claims about differences in performance. The final results of this pairwise analysis, along with descriptive plots supporting these conclusions, are shown in figure <ref>. All analyses were performed with α = 0.05 and corrections for multiple hypothesis testing were made using the Holm-Bonferroni procedure <cit.>. As shown, the first-place team was not statistically superior to any of the top-5 teams at a family-wise error rate of α = 0.05, but it was superior to nearly every team thereafter – with the interesting exception of the 9th place team. This lack of statistical significance is often used to criticize challenges as being “unfair” or “unreliable”. In the context of the nominal cash prize awarded to the “winning” team, perhaps this is true. However, it can be argued that the true value in machine learning competitions is not in the pairwise testing between individual methods, but rather in the population-level meta-analyses that they enable regarding the design decisions that were made by each team. Challenges, therefore, might not be the best way to determine which method is “best”, but they are an excellent way to determine which methods are currently “better” than others. It is therefore of great importance that the submissions be accompanied by detailed manuscripts describing which methods were used. KiTS21 not only requested this, but enforced it using a peer review process. After the challenge, discrete data about each method were collected by a manual review of each paper, and a brief discussion of this data is provided in the following section. §.§ Methods Used In the course of manually reviewing each team's manuscript, 11 specific binary data points were extracted from each paper, corresponding to 11 commonly-used methods for this problem.
Which teams used which of these methods is presented in tabular form in figure <ref>. As can be seen, the nnU-Net approach dominates the top half of the leaderboard, with "coarse-to-fine" frameworks and transfer learning both also being overrepresented among high-performing teams. The most popular loss function was the sum of cross entropy and dice loss, but notably, it was an nnU-Net trained with a sum of cross entropy and surface loss that ultimately won the competition. Many of the teams who chose not to use nnU-Net instead opted for architectures which incorporated some form of attention mechanism. Among these, two teams used a visual transformer network, but in general, these attention mechanisms underperformed in comparison to nnU-Net. §.§ Hidden Strata Analysis It is important to understand how the performance of these models varies on subpopulations within the greater population of patients they might be applied to. Existing health care disparities are well-documented <cit.>, and there is a significant concern that the proliferation of predictive models in medicine will serve to exacerbate these disparities <cit.>. One aspect of machine learning problems that heightens the risk of disparate performance on different populations is underrepresentation in the training set. In the case of KiTS21, the training set was drawn from patients treated at the University of Minnesota, which serves a considerably more Caucasian population than most other parts of the United States. Further still, given that the subject matter is kidney cancer, which has an intrinsically higher prevalence in males, females also make up a relative minority of the dataset (see table <ref>). The following sections present an exploration of this issue using both hypothesis-driven and unsupervised methodologies. §.§.§ Hypothesis-Driven Analysis Due to substantial underrepresentation of non-white and female patients in the KiTS21 training set, a natural hypothesis is that segmentation performance could vary on the basis of race and/or sex. A multivariate linear regression analysis was performed using both race and gender as predictors, along with two additional covariates, in order to determine whether these variables independently associate with the mean model performance of the top-5 teams. The results, shown in table <ref>, reveal that non-white patients do in fact see significantly worse performance compared to white patients. Surprisingly, however, women actually see significantly better performance than men. This speaks to the inherent unpredictability of hidden strata analysis, and how training set characteristics alone are not sufficient to predict how a model will perform on a given subpopulation. Interestingly, for the two teams who submitted transformer networks, the results look quite different (table <ref>). In fact, both gender and race fall out of significance, whereas tumor size appears to play a much larger role. This could suggest that transformer networks are more robust to hidden strata than the nnU-Net-dominated top-5 submissions. However, it should be noted that the top-5 teams still outperform the transformer networks even on those subpopulations where they perform worst. §.§.§ Unsupervised Analysis Deep neural networks are very effective at extracting high-level semantic information from high-dimensional data such as images.
Researchers often exploit this property in conjunction with a variety of nonparametric dimensionality reduction techniques <cit.> in order to visualize the clusters of input samples which appear to be represented by the network in similar ways. This has proven to be a useful tool for discovering structure in high-dimensional datasets. By a similar argument, the performance of a variety of deep neural networks on a given dataset can also be conceived as a high-dimensional representation of high-level semantic information about each instance. The same techniques can therefore be applied to visualize the clusters of patients which appear to have similar signatures in terms of semantic segmentation performance. Figure <ref> shows the results of hierarchical clustering performed on the set of test set cases using the performance metrics from each team as a feature vector. This reveals certain interesting clusters of cases, such as that on the far right in which virtually every team performed poorly. This stands in contrast to the cluster near the middle on which nearly every team performed well. Perhaps the most distinctive is the cluster of three cases on the far left on which nearly every team performed poorly, except for a select few teams who performed well. Interestingly, the teams that performed well on these cases were not necessarily the teams who ranked near the top of the leaderboard. §.§ Methods Used by Top 3 Teams The three subsections that follow are brief overviews of the methods used by the three highest-performing teams who submitted to the challenge. §.§.§ First Place: A Coarse-to-Fine Framework for the 2021 Kidney and Kidney Tumor Segmentation Challenge This submission <cit.> was made by Zhongchen Zhao, Huai Chen, and Lisheng Wang from Shanghai Jiao Tong University, China. Data use and preprocessing This submission made use of the KiTS21 dataset alone, and used random weight initializations for its networks. The images were preprocessed by resampling to an isotropic 0.78125 mm pixel size using third order spline interpolation. Architectures As shown in figure <ref>, a coarse-to-fine approach was used which first roughly segmented the entire kidney region. This coarse kidney segmentation was used to generate a cropped region around each kidney, which was fed to a finer kidney segmentation network. The result of that network was fed to two additional networks as inputs, along with the cropped image, to produce fine tumor and "mass" segmentations, where masses refer to the union of the tumor and cyst regions. The predictions of each of these networks were aggregated to produce a final composite prediction. Each of the four networks in use was trained using the nnU-Net framework <cit.>. Training The networks were originally trained using a sum of the dice loss and the cross entropy loss, but once these objective functions plateaued, a novel “surface loss” was used for further fine-tuning. The aim of this was to optimize the predictions in a way that was in line with the surface dice metric, which was used in addition to the volumetric dice for the final leaderboard ranking. This surface loss term is defined below: L_s = 1/C ∑_p_pred∈ FP∪ FN min_p_gt∈ S_gt ||p_pred-p_gt||_2 where S_gt is the surface of the ground truth, FP and FN are the sets of false-positive and false-negative points, respectively, and C is a constant. Postprocessing Once raw predictions by the network had been made, connected component analysis was used to clean up extraneous predictions.
Size thresholds of 20,000 voxels, 200 voxels, and 50 voxels were used to eliminate kidney, tumor, and cyst predictions that were too small to be realistic. Cysts and tumors that were not touching regions that were predicted to be kidney were also excluded. Results This method achieved the 1^st place rank on the leaderboard of the KiTS21 challenge with an average volumetric dice score of 0.908 and an average surface dice score of 0.826. Of note, this method achieved a volumetric dice score for the tumor region of 0.86, which is quite close to the previously reported interobserver agreement for the KiTS19 challenge of 0.88 <cit.>. §.§.§ Second Place: An Ensemble of 3D U-Net Based Models for Segmentation of Kidney and Masses in CT Scans This submission <cit.> was made by Alex Golts, Daniel Khapun, Daniel Shats, Yoel Shoshan and Flora Gilboa-Solomon of IBM Research - Israel. Data use and preprocessing This submission did not make direct use of any data other than the official training set. One of the models in the final ensemble used by this method was initialized with weights of a model pretrained on the publicly available Liver Tumor Segmentation (LiTS) dataset <cit.>. Other models in the ensemble were initialized randomly. This method used both low and high resolution architectures. For the former, the data was resampled to a common spacing of 1.99 × 1.99 × 1.99 mm, and for the latter, 0.78 × 0.78 × 0.78 mm. The labeled annotation maps used during training were sampled randomly per slice from the different plausible annotations per region produced by the multiple human annotators. This was done to improve the robustness of the trained models and make them better suited to the official KiTS21 evaluation protocol. Architectures A single-stage 3D U-Net and a two-stage 3D U-Net Cascade architecture were used by this method, as implemented in the nnU-Net framework <cit.>. The latter consists of first applying a 3D U-Net on low resolution data, and then using the low-res segmentation results to augment the input to another 3D U-Net applied to high resolution data. This serves the purpose of increasing the spatial contextual information that the network sees, while keeping a feasible input patch size with regard to available GPU memory. The final model ensemble used by this submission consists of three single-stage 3D U-Net models and one two-stage cascaded model. Training Patches of size 128 × 128 × 128 were sampled and input to the network. All models were trained with a combination of Dice and cross-entropy losses <cit.>. One of the models in the final ensemble was additionally trained with a regularized loss which encouraged smoothness in the network predictions. Training was done for ∼250,000 iterations of Stochastic Gradient Descent. Training a single-stage 3D U-Net model took ∼48 hours on a single Tesla V100 GPU. All models were trained on 5 cross-validation splits with 240 cases used for training and the remaining 60 for validation. Postprocessing Custom postprocessing was applied to the segmentation results, removing rarely occurring implausible findings: tumor and cyst findings positioned outside of kidney findings, and small kidney fragment findings surrounded by another class. Fig. <ref> shows prediction examples for two slices, with and without the proposed postprocessing.
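Both of the top two entries rely on this style of connected-component cleanup. A minimal sketch of a size-threshold filter is given below; the integer label convention (1 = kidney, 2 = tumor, 3 = cyst), the function name, and the reuse of the first-place team's voxel thresholds are illustrative assumptions rather than either team's actual code.

```python
import numpy as np
from scipy import ndimage

# Assumed label convention: 1 = kidney, 2 = tumor, 3 = cyst (illustrative).
MIN_VOXELS = {1: 20000, 2: 200, 3: 50}  # thresholds quoted by the first-place team

def drop_small_components(pred, min_voxels=MIN_VOXELS):
    """Remove connected components of each class that are too small to be realistic."""
    cleaned = pred.copy()
    for cls, min_size in min_voxels.items():
        mask = pred == cls
        components, n = ndimage.label(mask)  # 3D connected-component labeling
        if n == 0:
            continue
        sizes = ndimage.sum(mask, components, index=np.arange(1, n + 1))
        for comp_id, size in zip(range(1, n + 1), sizes):
            if size < min_size:
                cleaned[components == comp_id] = 0  # relabel as background
    return cleaned
```

In the actual submissions this filtering was combined with further rules, such as discarding tumor and cyst components that do not touch a predicted kidney region.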
Results The final submission was an ensemble of four models as follows: (1) a 3D U-Net trained with an added regularized loss, (2) a 3D U-Net trained with a different random seed for the training label generation process, and (3) a 3D U-Net and (4) a 3D U-Net cascade, both trained with weights initialized from a model trained on the LiTS dataset. This submission scored 0.896 mean Dice and 0.816 mean Surface Dice, resulting in second place. For a more detailed description of this submission, see <cit.>. §.§.§ Third Place: A Coarse-to-Fine 3D U-Net Network for Semantic Segmentation of Kidney CT Scans This submission <cit.> was made by Yasmeen George from Monash University, Australia. Network architecture. The proposed coarse-to-fine cascaded U-Net approach is based on the 3D U-Net architecture and has two stages. In the first stage, a 3D U-Net model is trained on downsampled images to roughly delineate the kidney region. In the second stage, a 3D U-Net model is trained to produce a more detailed segmentation of the three classes (kidney, tumor, cyst) using the full resolution images, guided by the first stage segmentation maps. The 3D U-Net architecture had an encoder and a decoder path, each with five resolution steps. Downsampling in the encoder was performed using strided convolutions, starting with 30 feature maps and doubling at each level to a maximum of 320. The decoder part was based on transposed convolutions. Each layer consists of a 3D convolution with a 3×3×3 kernel and strides of 1 in each dimension, leaky ReLU activations, and instance normalization. For more details, please refer to the accompanying paper <cit.>. Data preprocessing. The CT intensities (HU) were transformed by subtracting the mean and dividing by the standard deviation. In the first stage, each CT scan was resampled using third order spline interpolation to a spacing of 1.99×1.99×1.99 mm, resulting in median volume dimensions of 207×201×201 voxels. In the second stage, a spacing of 0.78×0.78×0.78 mm was used, with median volume dimensions of 528×512×512 voxels. Data augmentation methods including random rotations, gamma transformation, and random cropping were used during training. Training and validation. The proposed models were implemented using the nnU-Net framework <cit.> with Python 3.6 and the PyTorch framework on NVIDIA Tesla V100 GPUs. Majority aggregation ground truth was used for training and validation. All models were trained from scratch using 5-fold cross-validation with a patch size of 128×128×128 that was randomly sampled from the input resampled volumes. The models were trained using the stochastic gradient descent (SGD) optimizer for 1000 epochs with a batch size of 2 and 250 batches per epoch. The training objective was to minimize the sum of cross-entropy and dice loss. Results. The model achieved 3^rd place on the leaderboard of the KiTS21 challenge with a mean sampled average dice score of 0.8944 and a mean sampled average surface dice score of 0.8140 on a test set of 100 CT scans. The proposed approach scored average dice values of 0.9748, 0.8813, and 0.8710 for kidney, tumor, and cyst using the 3D cascade U-Net model. Figure <ref> visualizes the segmentation results for the trained model. § CONCLUSIONS This paper presented the results of the 2021 Kidney Tumor Segmentation Challenge (KiTS21). The challenge featured many innovations in terms of challenge design, including a novel annotation scheme to produce multiple volumetric annotations per ROI and a fully transparent web-based annotation process.
The challenge attracted 25 full submissions from teams around the world, and the top-performing team surpassed the prior state-of-the-art performance set by the predecessor KiTS19 challenge, despite the use of a test set from an entirely different institution and geographic area. A meta-analysis of the methods used by participating teams showed the continued popularity and dominant performance of the nnU-Net framework, although significant interest by teams in developing transformer or other attention-based methods was also observed. A hidden strata analysis was presented, which revealed that the top-performing teams were not necessarily the ones with the most uniform performance on subpopulations within the test set. The ongoing goals for KiTS are to continue to expand the dataset and augment its quality to facilitate even better performance, while also continuing to challenge participants with additional heterogeneity and complexity, such that the community can continue to move towards a more realistic real-world setting for this problem. The upcoming KiTS23 edition achieves this by incorporating the venous contrast phase in addition to the corticomedullary phase which both KiTS19 and KiTS21 were based on. As organizers, our hope is that the community will continue to find KiTS to be a useful resource for advancing the state of the art in kidney tumor segmentation. § ACKNOWLEDGEMENTS Research reported in this publication was supported in part by the National Cancer Institute of the National Institutes of Health under Award Number R01CA225435. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Additional support for research activities including developing the annotation procedure, performing image annotations, and analyzing the submission data was provided by The Intuitive Foundation, Cisco, and by research scholarships from the Climb 4 Kidney Cancer Foundation. The monetary prize for the winning team was graciously sponsored by Histosonics, Inc. Finally, we would like to thank the urology departments at the University of Minnesota and Cleveland Clinic for graciously allowing us to use their collections of patient data for this purpose.
http://arxiv.org/abs/2307.00560v1
20230702130118
Economical Quasi-Newton Self Consistent Field Solver
[ "Samuel A. Slattery", "Kshitijkumar Surjuse", "Edward F. Valeev" ]
physics.chem-ph
[ "physics.chem-ph", "physics.comp-ph" ]
We present an efficient quasi-Newton orbital solver optimized to reduce the number of gradient (Fock matrix) evaluations. The solver optimizes orthogonal orbitals by sequences of unitary rotations generated by the (preconditioned) limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm incorporating trust-region step restriction. Low-rank structure of the inverse (approximate) Hessian is exploited not only in L-BFGS but also when solving the trust-region problem. The efficiency of the proposed “Quasi-Newton Unitary Optimization with Trust-Region” (QUOTR) method is compared to that of the standard Roothaan-Hall approach accelerated by the Direct Inversion of Iterative Subspace (DIIS), and other exact and approximate Newton solvers for mean-field (Hartree-Fock and Kohn-Sham) problems. § INTRODUCTION Orbital optimization via self-consistent field (SCF) is a fundamental ingredient of electronic structure methods at all levels of approximation, from 1-body models (Hartree-Fock (HF), Kohn-Sham Density Functional Theory (KS-DFT)) to many-body methods (multiconfiguration self-consistent field (MCSCF)). Despite the long history of innovation,<cit.> efforts to develop improved SCF solvers continue to this day, <cit.> driven by the desire to reduce computational cost and to improve robustness. The most popular solvers in the molecular context are based on the Roothaan-Hall (RH) iterative diagonalization of the Fock matrix<cit.> augmented by convergence accelerators such as Anderson mixing,<cit.> known in chemistry as direct inversion in the iterative subspace (DIIS),<cit.> as well as others.<cit.> However, several issues plague the efficient RH/DIIS heuristics: * for systems with complex electronic structure (such as molecules far from equilibrium, open-shell systems,<cit.> and systems with small HOMO-LUMO gaps<cit.>) convergence will be slow,<cit.> erratic, or nonexistent,<cit.> * the use of diagonalization produces canonical orbitals whose lack of localization makes the use of orbital-based reduced-scaling formalisms for Fock matrix construction difficult, * applications to large systems and/or in non-LCAO representations can be bottlenecked by the 𝒪(N^3) cost of diagonalization,<cit.> * locating non-Aufbau (e.g., excited state) solutions is possible<cit.> but is not robust, and * even in favorable cases, and even in the vicinity of the solution, the convergence rate is linear<cit.> (i.e., the error is reduced by approximately the same factor each iteration); this is slower than the quadratic convergence exhibited by, e.g., the Newton method.<cit.> SCF solvers relying on direct energy minimization can address some or all of these concerns and thus have a long history of development.<cit.> In the molecular context, direct minimization mean-field SCF solvers have long been employed as the recommended alternative in the case of convergence problems, used in combination with RH/DIIS to gain superlinear convergence, and used to enable reduced-scaling SCF approaches.<cit.> Nevertheless, RH/DIIS remains the default SCF solver in most quantum chemistry packages. Clearly, this is not due to its formal advantages, but due to its superior efficiency. This may be puzzling, since direct minimization solvers are often demonstrated to converge in as few iterations as RH/DIIS, or fewer.
However, the number of iterations is a misleading figure since each update of the orbitals or density matrix may involve multiple energy/gradient evaluations or solving similarly-expensive subproblems (such as multiplication by the orbital Hessian). In other words, the number of Fock matrix builds (N_F) in a direct minimization solver is typically significantly greater than the number of iterations (N_I), whereas in RH/DIIS they are equal. Thus the latter typically involves significantly fewer Fock matrix evaluations, which in most practical applications determines the overall cost of SCF. The objective of this work is to develop a reduced-scaling direct minimization SCF solver that obtains as low a cost as possible, namely by reducing the number of Fock matrix evaluations and avoiding the need for exact Hessian evaluation. The resulting “Quasi-Newton Unitary Optimization with Trust-Region” (QUOTR) SCF solver uses preconditioned limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm<cit.> step-restricted by trust-region (TR) and leverages the inverse BFGS Hessian's low-rank structure to efficiently solve the trust-region update problem<cit.>. The rest of the manuscript is structured as follows. In <ref> we briefly review the general classes of SCF solvers before describing the theoretical aspects of QUOTR. Next, the implementation of QUOTR is discussed in <ref>. In <ref> we display solver performance statistics for a standard set of chemical systems and make comparison to a method that uses “exact Hessian". Additionally, in <ref> we illustrate its utility for a problem where the RH/DIIS solver could not find a solution, namely a simple neuropeptide containing 75 atoms from Ref. . In <ref> we summarize our findings and offer suggestions for further improvement. § FORMALISM §.§ Overview of SCF Solver Approaches All SCF methods attempt to iteratively minimize the variational energy E(𝐱), where 𝐱 is a set of independent parameters defining the particular method. In practice the minimum is determined by using the energy, its gradient 𝐠, and optionally the Hessian B. Starting with an initial (guess) set of parameters 𝐱^(0) SCF constructs improved parameter values using the current energy and its derivatives, (optionally) their values from previous iterations (histories), as well as any optional additional parameters and their histories: 𝐱^(k+1) = f({𝐱^(k)}, {E^(k)}, {𝐠^(k)}, …) The SCF solvers differ in how they construct the update in <ref>; unfortunately it is not possible to systematically classify the solvers since in the vast majority of cases f() is an algorithm, not a simple function. Thus here we only focus on essential common elements of all SCF solvers. Most solvers split the update problem (<ref>) into 2 subproblems by defining the parameter update, 𝐬^(k)≡𝐱^(k+1) - 𝐱^(k) = α^(k)𝐩^(k) in terms of a search direction 𝐩^(k) and a step size α^(k), each of which has its own prescription similar to <ref> 𝐩^(k) = g({𝐱^(k)}, {E^(k)}, {𝐠^(k)}, …), α^(k) = h({𝐱^(k)}, {E^(k)}, {𝐠^(k)}, …). The need to control the step size is common to all SCF solvers due to the fundamental nonlinearity of the energy function. Therefore even solvers that do not employ <ref>, such as RH/DIIS, still introduce ad hoc ways to control the step size by level shifting, damping, and other means of step restriction. 
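To make this division of labor concrete, the schematic sketch below shows the skeleton of a generic "two-step" solver in which the search direction and the step size are supplied by separate, solver-specific rules. It is an illustration of the preceding equations only, not the QUOTR implementation, and all names are placeholders.

```python
import numpy as np

def two_step_scf(x0, gradient, direction_rule, step_rule, tol=1e-8, max_iter=256):
    """Generic 'two-step' driver: each iteration chooses a search direction p
    and a step size alpha, then updates x <- x + alpha * p.  direction_rule and
    step_rule stand in for the solver-specific prescriptions (steepest descent,
    CG, quasi-Newton, ...)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        g = gradient(x)
        if np.linalg.norm(g) < tol:
            break
        p = direction_rule(x, g)   # e.g. -g for steepest descent
        alpha = step_rule(x, p)    # e.g. from a line search or trust region
        x = x + alpha * p
    return x
```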
The simplest “2-step” solver is the steepest descent (SD) method<cit.> in which the search direction 𝐩^(k) is opposite to the current gradient 𝐠^(k): 𝐩_SD^(k) = - 𝐠^(k)/||𝐠^(k)|| Unfortunately, although the SD method is guaranteed to converge to a nearby minimum, the plain SD variant converges very slowly;<cit.> this can be rationalized by comparing it to the (exact) Newton step: 𝐬_Newton^(k) = - (𝐁^(k))^-1𝐠^(k). The Hessian 𝐁 is a diagonally dominant matrix with a large (and growing with the basis set size) condition number. Luckily it is relatively simple to construct an effective approximation to the Hessian; a particularly popular way is to use only the 1-electron terms in the Hessian, 𝐁_1e. Approximate Hessians can then be used for preconditioning SD (using the 1-electron Hessian for preconditioning is also known as the “energy weighted steepest descent" method <cit.>) by replacing 𝐠^(k) in <ref> with the preconditioned gradient: 𝐠̅^(k)≡(𝐁_1e^(k))^-1𝐠^(k). The RH method can be viewed as a simplified version of preconditioned SD, due to its step being exactly the negative of the gradient preconditioned by the 1-electron Hessian:<cit.> 𝐬_RH^(k) = - 𝐠̅^(k). More sophisticated prescriptions for the direction include the conjugate gradient (CG) method,<cit.> in which the history is limited to information about the current and previous iteration. Of course, the use of preconditioning is mandatory with CG just as with SD. Unfortunately neither SD nor CG, even with an approximate preconditioner, leads to an optimal convergence rate near the minimum. Thus the most efficient solvers utilize exact or approximate Hessians near the minimum. Models that use the exact Hessian have been developed to avoid storage of the full Hessian, and rather calculate Hessian-vector products at roughly the cost of a Fock build. <cit.> Although it is possible to apply the straightforward Newton method using the exact Hessian when sufficiently close to the minimum,<cit.> using the exact Hessian further away from the minimum requires some form of step restriction. The popular augmented Hessian (AH)<cit.> method can be viewed as a Newton method with an optimally restricted step; it can also be viewed as a quasi-Newton method in which an approximate (level-shifted) Hessian is used. The diverse family of quasi-Newton methods uses approximate Hessians of some form, often generated from information contained in the gradients and steps of the previous iterations. <cit.> The quasi-Newton idea has been used in MCSCF for a long time,<cit.> and the most commonly employed approximation in SCF is some form of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. <cit.> Although some solvers compute the step length separately from the direction, more sophisticated approaches fuse step restriction more deeply into the step computation. Indeed, when an underlying quadratic model of the energy exists, it is not natural to simply perform a line search toward the (unrestricted) minimum of the model, considering that the model is known to be locally accurate in all directions. The alternative concept of searching for the minimum of a model in all directions, but restricting the step size to some maximum value, is the key idea of the trust-region (TR) methods. <cit.> Two important aspects of any TR method are how the trust-region is updated between iterations and how the trust-region problem is solved for the step.
The update method that is commonly used is based on an algorithm developed by Fletcher, <cit.> and one of the first true TR applications to use this in quantum chemistry was implemented for MCSCF theory. <cit.> A common way to solve the TR problem in quantum chemistry is within the framework of the AH method. <cit.> This has the benefit of properly dealing with negative eigenvalues in the Hessian, and is thus often used with full-Newton methods where the true Hessian can have negative eigenvalues away from the energy-minimizing solution. However, it is not the only way to solve the TR problem, and we have chosen to use a method from the mathematics literature that uses the low-rank structure of the L-BFGS Hessian to our advantage. <cit.> §.§ QUOTR: Quasi-Newton Unitary Optimization with Trust Region Our direct minimization SCF solver is a preconditioned quasi-Newton (L-BFGS) solver with step restriction applied using the trust-region (TR) method. Although its aspects are similar to prior SCF solvers, there are several novel elements: * The optimization is parameterized with a consistent “reference" (epoch) MO basis allowing use of the exact gradient with minimal computation after the Fock matrix is built. * The preconditioner is updated only on some iterations, and it is regularized in a simple way to ensure a positive definite Hessian. * The low-rank structure of the L-BFGS Hessian is exploited when solving for the quasi-Newton step on the TR boundary. The overall structure of QUOTR is presented in <ref>. Below we elaborate on each key aspect of the solver. §.§.§ Parameterization It is important to consider how the standard unconstrained quasi-Newton minimization scheme can be mapped to the constrained minimization of the single-determinant energy where the orbitals are required to be orthonormal. In the following we assume the linear combination of atomic orbitals (LCAO) representation of the molecular orbitals (MOs), and thus the MOs are defined by the coefficient matrix 𝐂. We seek a unitary matrix, 𝐔_total, that transforms a set of orthonormal orbitals, 𝐂^(0), into the energy minimizing solution, 𝐂_min. 𝐂_min = 𝐂^(0)𝐔_total We iteratively build the total unitary matrix as a product of individual unitary matrices generated at each iteration, indexed by k in <ref>. 𝐔_total = ∏_k 𝐔^(k) At the start of iteration k the total unitary matrix is composed of a product of only the first k individual unitary matrices, the others not yet determined. During iteration k the new total unitary matrix is determined by finding 𝐔^(k) and then multiplying it with the growing product. 𝐔_total^(k) = ∏_p=0^p=k𝐔^(p) Similarly, the MOs at the start of iteration k > 0 are given by the coefficient matrix obtained from the total rotation determined thus far. 𝐂^(k) = 𝐂^(0)𝐔_total^(k-1) Thus, the next set of MOs is determined by rotating the previous set of orbitals. 𝐂^(k+1) = 𝐂^(k)𝐔^(k) = 𝐂^(0)𝐔_total^(k) This formulation, where the orbitals are “reset" each iteration, has been used many times before,<cit.> and it is helpful for simplifying the calculation of the gradient, as will be described shortly. We use the exponential parameterization of unitary rotations, common in electronic structure theory, <cit.> which allows us to work with the independent elements of an antihermitian matrix, κ, without any restrictions on the values of these components, and thereby maintain orthonormal MOs. At iteration k we determine a unitary rotation by finding κ^(k) = -(κ^(k))^T using a prescription described later. 
We then update the orbitals by applying the unitary rotation defined by κ^(k) through <ref> to the growing 𝐔_total. 𝐔^(k) = exp(κ^(k)) The matrix exponentials are evaluated accurately with a Taylor series expansion using a tight convergence threshold, so that no later reorthogonalization is necessary, in contrast to some methods. <cit.> The simple scaling-and-squaring approach with a fixed order of 2 is used. <cit.> Thus, the elements of the step vector are divided by 2^2 = 4 before performing the Taylor series expansion, and the resulting converged unitary matrix is squared twice. Although the matrix exponential could be improved further, e.g., by leveraging the block-sparse antihermitian structure of κ <cit.>, our focus is on the overall SCF method and its cost in terms of N_F. What is important to note about the matrix exponentials in our implementation is that they are computed nearly exactly (to within a tolerance, t_e) so that no reorthogonalization is needed. This is important because we need to know how the independent parameters change in each iteration to be able to form the quasi-Newton approximation to the Hessian. It is important to express both the gradients and steps in the same basis between iterations when using a quasi-Newton approximation for the Hessian, since the standard update formulas assume a fixed basis. It is well known that the gradient of the energy with respect to the matrix elements of κ is easily found from the MO Fock matrix: ∂ E/∂κ^(k)_ai = n_s 𝐅^(k)_ai, where a and i refer to the unoccupied and occupied MOs, respectively, and n_s is twice the occupancy of orbitals i (4 for spin restricted wavefunctions, 2 for spin unrestricted wavefunctions). This expression holds because 𝐔^(k) is rotating the current orbitals, as in <ref>, so that the gradient is composed of partial derivatives evaluated for κ^(k) = 0 in <ref>. After the MOs are rotated, the MO basis is changed so that the indices a and i on the MO Fock matrix in <ref> refer to different orbitals for iterations k and k+1. Therefore, we cannot use this formula for the history gradients when constructing the L-BFGS Hessian. To avoid this problem we always compute the gradient in the same “reference" MO basis for the given sequence of iterations comprising the history epoch; the reference basis can only be changed (e.g., when updating the preconditioner, see <ref>) in combination with resetting of the history. This epoch MO basis for iteration k is simply the initial set of MOs, 𝐂^(r), at the start of the epoch. MO basis 𝐂^(k) for iteration k ≥ r is related to the epoch MO basis 𝐂^(r) by the unitary obtained as a product of each interstitial iterations' update: 𝐂^(k) = 𝐂^(r)𝐔_epoch^(k-1)≡𝐂^(r)𝐔^(r)𝐔^(r+1)…𝐔^(k-1) This formalism simplifies the calculation of the gradient, 𝐠^(k), in the epoch basis, as shown in <ref>. 𝐠^(k)≡ n_s [𝐅_epoch^(k) , 𝐏_epoch^(k) ] Here 𝐅_epoch^(k) is the Fock matrix expressed in the epoch MO basis and 𝐏_epoch^(k) is a projector onto the occupied space. The Fock matrix is evaluated in AO basis using the current iteration's occupied MOs and then transformed to the epoch basis: 𝐅_epoch^(k) = (𝐂^(r))^T𝐅_AO^(k)𝐂^(r) The projection operator onto the occupied space at iteration k in the epoch basis is obtained as 𝐏^(k)_epoch = 𝐔_epoch^(k)𝐏 (𝐔_epoch^(k))^T, where 𝐏 = diag({n_i}) and n_i is the occupancy of spin orbital i. It should be noted that n_s in <ref> accounts for double/single occupancy of orbitals. 
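A compact sketch of this epoch-basis gradient evaluation (for a single spin case) is shown below; the array names are illustrative assumptions, and the AO-basis Fock build itself is taken to be supplied externally.

```python
import numpy as np

def epoch_gradient(F_ao, C_ref, U_epoch, occ, n_s):
    """Gradient g^(k) = n_s [F_epoch^(k), P_epoch^(k)] in the epoch MO basis.
    C_ref  : AO->MO coefficients of the epoch (reference) basis, C^(r)
    U_epoch: accumulated rotation relating the epoch and current orbitals
    occ    : vector of 0/1 spin-orbital occupations defining P = diag(occ)
    n_s    : 4 (spin-restricted) or 2 (spin-unrestricted), as in the text."""
    F_epoch = C_ref.T @ F_ao @ C_ref               # Fock matrix in the epoch basis
    P_epoch = U_epoch @ np.diag(occ) @ U_epoch.T   # projector onto the occupied space
    return n_s * (F_epoch @ P_epoch - P_epoch @ F_epoch)
```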
For spin unrestricted orbitals we rotate the alpha and beta coefficient matrices separately, and thus there is a separate gradient, projector and Fock matrix for each. This is why n_i in the construction of 𝐏 is always either 0 or 1, even for the spin restricted case; hence, 𝐏 is a projector. Not only does this formulation of the gradient allow us to have a consistent basis for forming the L-BFGS Hessian, it also avoids a non-truncating series expression for the gradient, as in other solvers<cit.>. While κ is indeed a matrix, and it is necessary to be able to find its matrix exponential to calculate 𝐔, we will generally work with a vector of the independent elements as the parameters for determining a step in iteration k. We obtain a vector space with no constraints on the value of the vector components if we use only the upper (or lower) triangle of the kappa matrix (the other elements being determined by the antihermitian requirement: κ_pq = -κ_qp). This vector space will be called κ-space throughout this manuscript. We only consider real values for κ_pq, and so the diagonal elements must be zero. For an MO basis with n orbitals, this means that the κ-space has dimension n(n-1)/2, assuming no symmetry is taken into account. The independent elements of the kappa matrix are collected into a single vector, which we refer to as a step, 𝐬. In the spin unrestricted case, the kappa elements for the separate alpha and beta spin MOs are simply concatenated into one large vector. Considering the antisymmetry produced by the commutator in <ref>, the gradient is properly a member of the κ-space. Thus, we can map the matrix elements of the gradient to a vector in exactly the same way as for the matrix elements of κ, using only the upper (or lower) triangle. We use the symbol 𝐬^(k) for the vector version of κ^(k), but make no such distinction for the gradient, 𝐠^(k): the use should be clear from context. The steps and gradient differences are the ingredients for the L-BFGS update to the Hessian. Thus, when these history vectors are saved, we keep them both in the epoch MO basis. There is never a need to transform the gradients to the current MO basis, only the steps are transformed. Every time a step is calculated using L-BFGS, the algorithm is performed in the epoch MO basis. Then, it is transformed to the current MO basis of iteration k before taking the exponential and multiplying it by the growing total unitary rotation. We can then write κ^(k)=𝐔_epoch^(k-1)Tκ_epoch^(k)𝐔_epoch^(k-1) as how we can obtain useful parameters that were found in the epoch MO basis. Thus, we can revise <ref> to combine the basis transformation with the exponentiation. 𝐔^(k) = exp(𝐔_epoch^(k-1)Tκ_epoch^(k)𝐔_epoch^(k-1)) The convergence criteria are that both the gradient norm per element, ||𝐠^(k)||/n^2, and the energy difference between iterations, Δ E^(k), are lower than a threshold t_c. However, for some comparisons with other solvers, we use the gradient norm (of the gradient as a matrix) instead of the per element version. §.§.§ Quasi-Newton The L-BFGS algorithm is used to approximate the Hessian, 𝐁_BFGS^(k)≈𝐁^(k), and its inverse, 𝐇_BFGS^(k)≡ (𝐁_BFGS^(k))^-1≈ (𝐁^(k))^-1, using the “history" vectors from (at most) m previous iterations. In the following we will assume that the history size is equal to m for simplicity. 
This approximate Hessian and its inverse can be represented in low-rank form as follows:<cit.> 𝐁_BFGS^(k) = 𝐁_0 - 𝐕_B^(k) (𝐖^(k))^-1𝐕_B^(k)T 𝐇_BFGS^(k) = 𝐇_0 + 𝐕_H^(k)𝐌^(k)𝐕_H^(k)T Here the matrix 𝐁_0 is an initial (often diagonal) approximation to the Hessian, which in principle could be any positive definite matrix. <cit.> The matrix 𝐇_0 is the corresponding initial inverse Hessian approximation: 𝐇_0 = (𝐁_0)^-1. More will be said about these critical components later. The matrix 𝐕_B^(k) is composed of columns of the m previous steps multiplied by the initial Hessian, 𝐁_0𝐬^(k), followed by m columns of previous gradient differences, 𝐲^(k)≡𝐠^(k+1) - 𝐠^(k), and therefore has dimension n × 2m. Similarly, 𝐕_H^(k) contains the same information, but in a slightly different form: 𝐕_H^(k) = 𝐇_0𝐕_B^(k). The square matrices 𝐖^(k) and 𝐌^(k) have dimension 2m × 2m, and the need for the inverse of 𝐖^(k) is not a problem since m is typically small (less than 20). The formation of 𝐖^(k) and 𝐌^(k) is described in the literature<cit.>, but essentially they are composed of various dot products involving the history vectors, with an inverse of one of the m × m subblocks. 𝐖^(k) = ( [ ((𝐒^(k))^T𝐁_0𝐒^(k)) 𝐋^(k); (𝐋^(k))^T -𝐄^(k) ] ) 𝐌^(k) = ( [ ((𝐋^(k))^-1 (𝐄^(k) + (𝐘^(k))^T𝐇_0𝐘^(k)) (𝐋^(k))^-T) -(𝐋^(k))^-1; -(𝐋^(k))^-T 0 ] ) In <ref> and <ref>, 𝐒^(k) is an n × m matrix containing the history column vectors of the 𝐬^(k), and 𝐘^(k) similarly contains 𝐲^(k). The smaller m × m sub matrices 𝐋^(k) and 𝐄^(k) are simply constructed as below. 𝐋_ij = 𝐬^(k-m-1+i)·𝐲^(k-m-1+j), if i > j 0 otherwise 𝐄_ij = 𝐬^(k-m-1+i)·𝐲^(k-m-1+j) if i = j 0 else One of the advantages of the L-BFGS Hessian approximation, apart from not requiring calculation of second derivatives, is that it can be stored in this factorized form by simply keeping the relatively small matrices 𝐁_0, 𝐕_B^(k), and 𝐖^(k). Considering that 𝐁_0 is a diagonal matrix, we only need to store (2m+1)n + 4m^2 elements, which is typically much smaller than the full Hessian which requires n^2 elements. From the development up to this point it would seem that we also need to store the information for the inverse L-BFGS Hessian, specifically 𝐕_H^(k), but this will be dealt with soon. The quasi-Newton step, 𝐬^(k)=-𝐇_BFGS^(k)𝐠^(k), is calculated by multiplying the inverse L-BFGS Hessian of <ref> with the negative of the gradient. Thus, the factorized form makes calculation of the quasi-Newton step simply a matter of a few matrix-vector multiplications. Note that although we only need the inverse Hessian to compute the quasi-Newton step, the Hessian is used to compute the energy decrease predicted by the quadratic model. This is needed for determining how the TR is updated between iterations as described in <ref>. As is well known, the use of a physically relevant preconditioner is necessary to have convergence that is competitive with diagonalization methods. <cit.> A manifestation of a preconditioner is the choice of the initial diagonal Hessian (𝐁_0). We construct this component using a formula requiring the MO Fock matrix that becomes equivalent to the 1-electron part of the RHF Hessian as the regularizer goes to zero. Thus we call it 𝐁_1e, since it comes from the 1-electron part. 
(𝐁_1e)_(ia)(ia) = n_s (𝐅_aa - 𝐅_ii + r_ia) The elements of the MO Fock matrix that are used in <ref> are “pseudocanonical", <cit.> meaning that the occupied-occupied block is diagonalized and the unoccupied-unoccupied block is diagonalized, but the off-diagonal occupied-unoccupied block is non-zero (unless at convergence). Therefore, to form this initial diagonal Hessian, we first apply a unitary rotation that does not change the energy, but mixes occ-occ and unocc-unocc to diagonalize each block separately. The final term in <ref>, r_ia, is a regularization parameter that depends on the Fock matrix elements and ensures that the diagonal Hessian is positive definite: r_ia = t_r + 𝐅_ii - 𝐅_aa if (𝐅_aa - 𝐅_ii) < t_r, and r_ia = 0 otherwise. In <ref>, t_r is a threshold for the minimum value of the difference of the “pseudocanonical" orbital energies 𝐅_aa and 𝐅_ii. Unlike some other quasi-Newton solvers, <cit.> we do not update the diagonal part of the approximate Hessian every iteration. In principle this could lead to slower convergence, since the approximation degrades as the orbitals are changed from the point where the diagonal Hessian was calculated. Indeed, we found that in the early iterations it is imperative to use an updated preconditioner, and thus we do an approximate line search along the preconditioned steepest descent direction until the max element of the gradient drops below some threshold (we use 0.1, which is smaller than the value of 0.25 that has literature precedent <cit.>). During this early phase of the solver, the orbitals are made “pseudocanonical" each iteration, and the preconditioner is rebuilt. Essentially, the epochs are only 1 iteration long. However, near the solution, we have found that it is not necessary to update the preconditioner every iteration, and because we work in the epoch MO basis it would be difficult to update the preconditioner. We have found that with a good initial guess, only a median of 2 iterations of this line search are required to drop the max gradient element below 0.1 and trigger the start of L-BFGS for simple systems (see <ref>). If the gradient gets large again, the history is reset and preconditioned steepest descent is again carried out with an updated preconditioner. Every time the history is reset the epoch basis is also reset to the current orbitals. An alternative and perhaps more conventional view of the preconditioner is that it is a basis transformation that makes the diagonal part of the L-BFGS Hessian or its inverse closer to an identity matrix. To see how this view relates to the diagonal Hessian, consider the following transformation of the quasi-Newton equation: 𝐬 = -(𝐇_BFGS)𝐠 (omitting the iteration index). Multiply both sides of the equation by 𝐁_1e^1/2 and insert the identity 𝐈 = 𝐁_1e^1/2𝐁_1e^-1/2 (which is clearly acceptable because the diagonal matrix 𝐁_1e is guaranteed to be positive definite due to the regularizer) between the inverse Hessian and the gradient, to get an equivalent equation. 𝐁_1e^1/2𝐬=-(𝐁_1e^1/2𝐇_BFGS𝐁_1e^1/2)𝐁_1e^-1/2𝐠 Setting 𝐬̃=𝐁_1e^1/2𝐬, we see that we are finding the quasi-Newton step with the modified gradient, 𝐠̅=𝐁_1e^-1/2𝐠, and inverse Hessian, 𝐇̃_BFGS=𝐁_1e^1/2𝐇_BFGS𝐁_1e^1/2, to produce the modified step, 𝐬̃. Considering the form of the L-BFGS inverse Hessian that we are using, the modified inverse Hessian reduces to the following. 𝐇̃_BFGS=𝐁_1e^1/2𝐇_BFGS𝐁_1e^1/2 =𝐁_1e^1/2(𝐁_1e^-1 + 𝐕_H𝐌 (𝐕_H)^T)𝐁_1e^1/2 = (𝐁_1e^1/2𝐁_1e^-1𝐁_1e^1/2) + 𝐁_1e^1/2(𝐕_H𝐌 (𝐕_H)^T)𝐁_1e^1/2 = 𝐈 + 𝐕̃𝐌𝐕̃^T, where 𝐕̃=𝐁_1e^1/2𝐕_H.
Notice that we also have 𝐕̃=𝐁_1e^-1/2𝐕_B. This provides a more unified way to form the L-BFGS Hessian and its inverse. If we work with the preconditioned quantities 𝐬̃^(k) and 𝐠̅^(k) we can form 𝐕̃^(k) directly from them. Additionally, we do not have separate versions of the history for the Hessian and the inverse Hessian. The price that we pay is that we need to convert back to the “un-preconditioned" basis before taking the step. But this is simply a basis transformation: 𝐬^(k)=𝐁_1e^-1/2𝐬̃^(k). The preceding development of a preconditioned basis is essentially the “energy-weighted coordinates" previously introduced in the literature. <cit.> However, it should be emphasized that we do not update this basis every iteration. Here is probably a good place to summarize the steps to obtain the unitary rotation at iteration k, since there are now quite a few layers. 𝐬̃^(k)→𝐬^(k)→κ_epoch^(k)→κ^(k)→𝐔^(k) To keep the L-BFGS Hessian positive definite between iterations, we require the history vectors to satisfy the following.<cit.> 𝐬̃^(k)·𝐲̅^(k) > t_h ||𝐬^(k)|| ||𝐲^(k)|| When this requirement is not met the pair of history vectors {𝐬̃^(k),𝐲̅^(k)} is not saved for future use. We have used t_h = 10^-5. §.§.§ Trust-Region Once the quasi-Newton step has been computed it is imperative for the robustness of the solver that we determine if the step is acceptable. Clearly, we want the energy to be lowered each iteration to prevent oscillations and non-convergence. Because the quadratic energy model may be inaccurate for large step sizes and because the L-BFGS Hessian is inherently an approximation for smaller m, restricting the step to a region where the model can be “trusted" is a key way we achieve robustness. In the trust-region (TR) formalism we require the step to be within the trust-region. ||𝐬̃^(k)|| ≤Δ^(k) In <ref> the quantity Δ^(k) is the trust-radius, which defines the trust-region. Notice first that the trust-radius is indexed by k so that it may differ between iterations. Fletcher's algorithm, <cit.> which either expands or contracts the TR based on how accurate the model has been, is used to update the trust-radius (see <ref>). We use parameters (τ_i/η_i) from a more recent source. <cit.> The initial value of the trust-radius is set to the most recent successful line search step size since this should be in the correct order of magnitude for the next step. Also, we always do line search when there are no history data available so the most recent step size from line search is always a known quantity when quasi-Newton steps are attempted. When the quasi-Newton step does not satisfy <ref>, we solve for the optimal step that is within the TR, which will be on the TR boundary: ||𝐬̃^(k)|| = Δ^(k). Our TR solver is based primarily on work from the mathematics literature that shows how to leverage the low-rank structure of the L-BFGS hessian. <cit.> The parameters for our implementation of the TR solver are given in <ref>, and the algorithm is given in <ref>. In essence we iteratively update the parameter σ, which is a level-shift of the L-BFGS Hessian, until <ref> is satisfied. We have made some modifications to improve stability and simplify when the TR is considered solved that appear to be sufficient for our purposes. One specific difference is that we do not use Cholesky decomposition to orthogonalize the columns of 𝐕̃^(k), but rather use matrix inverse square root: (𝐕̃^(k)T𝐕̃^(k))^-1/2. 
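As an illustration of how the compact representation enters the step computation, the sketch below forms the unrestricted quasi-Newton step in the preconditioned epoch basis and checks it against the trust radius. It is only a sketch: when the step is too long, QUOTR determines the level shift σ such that the shifted step lands exactly on the boundary using the low-rank structure, whereas here that case is merely indicated by a rescaling. The function and variable names are illustrative.

```python
import numpy as np

def trial_step(g_bar, V_tilde, M, delta):
    """Quasi-Newton trial step s~ = -H~ g~ with H~ = I + V~ M V~^T in the
    preconditioned epoch basis.  The boundary case is only sketched: the actual
    solver finds the level shift sigma with ||s~(sigma)|| = Delta by exploiting
    the low-rank structure, instead of simply rescaling the step."""
    s = -(g_bar + V_tilde @ (M @ (V_tilde.T @ g_bar)))
    norm = np.linalg.norm(s)
    if norm > delta:
        s *= delta / norm  # placeholder for the level-shifted TR solution
    return s
```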
Once a new step has been determined that satisfies the TR conditions, the energy change expected from the quadratic model, q^(k), is calculated according to the usual formula. q^(k) = 𝐬̃^(k)·𝐠̃^(k) + 1/2𝐬̃^(k)·𝐁̃_BFGS^(k)𝐬̃^(k) The new step is then tested by actually updating the coefficient matrix and constructing the associated Fock matrix. The step is then either accepted or rejected based on the actual energy change, Δ E^(k), if the step were to be taken. We have used the simple requirement that the actual energy change is not positive, and thus the energy will never increase between iterations. When the step is rejected, the trust-radius is decreased and a new TR problem is solved. Sometimes the TR may be shrunk repeatedly. The minimum allowed trust radius, t_t, is a way to detect if the quasi-Newton approximation is very bad. If for some reason the trust radius shrinks below this value, we reset the history, do a single line-search iteration, and continue from there. Our source for the method of solving the TR problem using the low-rank structure of the L-BFGS Hessian assumes that the diagonal initial Hessian approximation is simply a scaled identity. <cit.> That is, the algorithm for solving the TR problem requires that the elements of the diagonal Hessian approximation be all the same. This would seem to preclude the use of any initial diagonal Hessian that is not a scaled identity, including the approximation based on the Fock matrix diagonal elements. However, if we work with the preconditioned quantities discussed in <ref> the initial diagonal Hessian is the identity matrix in the epoch basis. Therefore, we can apply the method of solution presented in the literature source <cit.> to this modified TR problem, then transform back to the original basis before taking the step. This means that to be consistent, all the quantities used in the TR must be in the epoch basis, as in <ref>. Fundamentally, if we update the TR using the information in the epoch basis (e.g. the length of the step) we should still be able to apply Fletcher's algorithm. <cit.> Additionally, because the different epochs bases represent different coordinate transformations, we cannot simply carry over the TR between epochs. Thus, the decision to use the step size from the most recent line search step is therefore an important decision. §.§.§ Line-Search At least one iteration of steepest descent (or preconditioned steepest descent) is necessary before beginning quasi-Newton steps since the L-BFGS hessian construction requires information about the previous steps and gradient differences. We have implemented a line search to determine the step size, α^(k), in this case. The line search is also a fall-back to handle possible problems that can come up, as described later. To reduce the number of Fock builds required for the line search, we use an approximate line search by finding the step size that minimizes a 3rd order polynomial fitted to the energy along the search direction using the energy and gradient at two points in κ-space. One of these points is always the current MOs (κ = 0), and the other is a scalar multiple of the search direction. Choosing the second fitting point at a reasonable distance from the current point is crucial for the success of the line search. We use a method based on the periodicity of the exponential function. <cit.> The eigenvalues of the step direction in κ-space, as a matrix (or set of matrices for spin unrestricted), are computed. 
Then the maximum frequency ω_max is determined as the maximum of the absolute values of the eigenvalues. The fitting point along the search direction is then computed as the search direction times a step length, calculated as below. <cit.> α_fit = 2π/q|ω_max| Here q is the assumed function order, which we choose to be 4. This is a reasonable choice for Hartree-Fock, but for KS-DFT it is more approximate. The approximate line search method requires only two Fock builds, since both the energy and the gradient are easily computed from the Fock matrix. Notice that this means only one more Fock build is required than for an accepted quasi-Newton step, since the current Fock matrix should already have been computed in the previous iteration. A similar procedure has recently been used,<cit.> and a nearly identical method for the step determination also has precedent.<cit.> The energy along the line search direction is a function of the step size, E(α), and the 3rd order fit is defined by four parameters: a, b, c, and d. E(α) ≈Ê(α) = a α^3 + b α^2 + cα + d Here Ê(α) is the fitted polynomial. The coefficients are found by requiring four conditions to be satisfied (energy and gradient correct at κ = 0 and at the second point). The minimum is determined by finding the roots of the derivative, which is a simple quadratic polynomial. Ê'(α) = 3 a α^2 + 2 b α + c Since it is possible to have two solutions to this equation, it is required that α>0 and that the second derivative of Ê(α) is positive. If both roots fail to meet these conditions, the fitting is repeated with fitting points that give smaller sample step sizes. We have used a “line search adjustment factor" of 0.5 for this re-fitting. When the 3rd order fit to the energy is poor enough that the energy actually increases, the step is rejected and a new fitting range is determined. Assuming that the bad fit is usually caused by a fitting range that is too large, we shrink the fitting range by a factor of 2 when a step is rejected, and the line search is repeated. §.§.§ Initial Guess The initial guess orbitals are another critical component for rapid SCF convergence. We use an extended-Hückel initial guess,<cit.> which is constructed in a minimal basis and then projected onto the full orbital basis. The standard Wolfsberg-Helmholtz formula for the off-diagonal elements of the extended-Hückel Hamiltonian is used:<cit.> H_ij = K' S_ij (H_ii + H_jj)/2, but with the updated formula for the constant K'.<cit.> Instead of experimental ionization potentials for the diagonal elements, we follow a suggestion by Lehtola <cit.> and use numerical Hartree-Fock orbital energies<cit.> for each shell. Although the guess orbitals are populated according to the Aufbau principle using the extended-Hückel energies in the minimal basis, after projection to the orbital basis the populations may not be qualitatively correct. When this situation occurs, and the symmetry of the orbitals is such that there is no gradient between incorrectly occupied and incorrectly unoccupied orbitals, QUOTR will not be able to correct the populations. Therefore, we have added an option to apply a small, random unitary matrix to the initial guess, which then allows the solver to rotate the incorrectly occupied orbitals and find the lower-energy solution. This random matrix is produced by filling κ with random numbers from a uniform real distribution between -1 and 1, scaled to a particular perturbation strength. 
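The random symmetry-breaking rotation just described can be written down in a few lines. The sketch below is an illustrative reconstruction rather than the MPQC code: the restriction of κ to the occupied-virtual block, the default `strength`, and the seed value are our own choices and are not specified in the text.

```python
import numpy as np
from scipy.linalg import expm

def random_orbital_perturbation(C, n_occ, strength=0.01, seed=0):
    """Apply a small random unitary rotation U = exp(kappa) to the MO
    coefficient matrix C (columns are orbitals), mixing occupied and
    virtual orbitals to break symmetry in the initial guess.
    """
    rng = np.random.default_rng(seed)             # seeded for reproducibility
    n = C.shape[1]
    kappa = np.zeros((n, n))
    # uniform random numbers in [-1, 1], scaled by the perturbation strength
    kappa[:n_occ, n_occ:] = strength * rng.uniform(-1.0, 1.0,
                                                   size=(n_occ, n - n_occ))
    kappa -= kappa.T                              # antisymmetrize kappa
    return C @ expm(kappa)                        # exp of antisymmetric matrix is orthogonal
```

Because κ is antisymmetric, exp(κ) is exactly orthogonal, so the perturbed guess remains a valid set of orthonormal orbitals regardless of the perturbation strength.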
The random number generator is seeded to allow for reproducible results. §.§.§ Overall Control There are a couple of parameters that control the overall SCF solver. In general, second order solvers should not be used unless sufficiently close to the true solution. Additionally, the need to have a sufficiently up-to-date preconditioner means that we need to calculate the “pseudocanonical" orbitals relatively close to the solution. Therefore, we only start using L-BFGS once the maximum element of the gradient drops below the L-BFGS start threshold, t_b. In addition, the benefit of having the preconditioner updated once more, very near the solution, is the motivation for the history reset threshold, t_d: here we drop the history and recalculate the “pseudocanonical" orbitals when the gradient norm per element drops below this value. When t_b is tight enough, it appears that the history reset is not necessary. These two control conditions are shown in <ref>: the first applies anytime ||𝐠^(k)||_∞ > t_b, and the second the first time ||𝐠^(k)||/n^2 < t_d. There are a few other cases which trigger the history reset that are essentially sanity checks: if q^(k) > 0, if the step is uphill, or if Δ^(k) is shrunk below the minimum TR tolerance. These may not be necessary, and may never be triggered. In order to reduce unnecessary computation, the solver tries to determine as early as possible whether the history will be reset; the boolean variable R_hist is set to true when one of these situations is detected. The situations where a history reset will occur are as follows.
* The gradient is too large (||𝐠^(k)||_∞ > t_b)
* The abs max gradient element is below the reset threshold for the first time
* The TR is too small (Δ^(k) < t_t)
* The step is “uphill" (𝐬̃^(k)·𝐠̅^(k) > 0)
* The quadratic model predicts an energy increase (q^(k) > 0)
The first 3 of these situations are detected before the L-BFGS step is constructed, allowing the solver to skip the step construction and go directly to re-building the preconditioner and on to line search. The last 2 situations are only determined after the step is calculated. All the parameters for QUOTR (other than those inside the TR solver) are listed in <ref>. § TECHNICAL DETAILS The QUOTR method was implemented in a developmental version of the Massively Parallel Quantum Chemistry (MPQC) version 4 program package.<cit.> The orbital basis sets used were 6-31G*<cit.>, 6-31G**<cit.>, 6-311++G**<cit.>, and def2-TZVPP<cit.>. Density fitting was done with the def2-SVP-J basis. <cit.> The extended-Hückel initial guess was constructed in the Huzinaga MINI basis, <cit.> then projected onto the orbital basis. Hartree-Fock calculations were performed in <ref> for the G2 set, <cit.> the geometries for which were obtained from the Gaussian output files on the NIST website, <cit.> with the exception of four systems that were not available with the correct method (MP2=FULL/6-31G*). For the four systems that were not available from NIST (acetamide, furan, SiH_2-triplet and 2-butyne), Gaussian 09<cit.> was used to obtain the geometry. The G2-1 set consists of 55 systems and is a subset of G2-2, which consists of a total of 148 systems. <cit.> The RH/DIIS implementation in MPQC uses the default of 5 previous Fock matrices for the extrapolation. The KS-DFT implementation in MPQC uses GauXC<cit.> for the calculation of the exchange-correlation potentials and energies. The integration grid used for the 1PLW calculations in <ref> was the “ultrafine" grid, which has 99 radial points and 590 angular points. 
All other KS-DFT calculations used the “superfine" grid, which has 250 radial points and 974 angular points for all atoms except hydrogen, which has 175 radial points. The particular parameterization we use for LDA is Slater Exchange with VWN RPA. <cit.> For the B3LYP calculations on the Cr systems in <ref>, we use VWN3 for the correlation functional<cit.> to match PySCF. The structure of the neuropeptide, 1PLW, <cit.> was obtained from the Protein Data Bank (PDB). <cit.> Calculations using KDIIS<cit.> for SCF acceleration on the CrC and Cr_2 systems in <ref> were performed with the Orca program system,<cit.> version 5.0.4. <cit.> Additionally, the DIIS implementation from Orca was also used for these systems instead of the MPQC version. Most calculations were run on Virginia Tech's Advanced Research Computing (ARC) TinkerCliffs cluster.<cit.> § RESULTS AND DISCUSSION §.§ G2 Data Set The number of Fock matrix builds (N_F) and solver iterations (N_I) are the two key metrics that we examine to understand solver performance. In <ref> we display statistics for these metrics and also compare with two similar direct minimization solvers: ETDM<cit.> and GDM<cit.>. Any systems where convergence was not achieved in 256 iterations are removed from the statistical values and enumerated as “no convergence". The calculations using QUOTR were Hartree-Fock with spin-restricted orbitals (RHF) if the system is a singlet, and with spin-unrestricted orbitals (UHF) in all other cases. For the comparison using the 6-31G** basis, QUOTR kept all history (i.e., the maximum history size m was equal to the maximum number of iterations). For the comparison with GDM on G2-1, we used the same basis and our initial guess was similar. Our convergence criteria were 1 × 10^-9 for both the energy difference (E_h) and the gradient norm per element, while GDM used 1 × 10^-10 for the energy difference and 1 × 10^-7 for the RMS gradient. However, due to the tighter gradient criterion for QUOTR, all systems were converged in energy difference to tighter than 1 × 10^-10 E_h, making the comparison valid. The average number of iterations is very similar, although QUOTR has a higher max (Si_2). There was only one other system higher than the max for GDM, and that was NO, which took 60 iterations. Unfortunately, the number of Fock builds used by GDM is not available; that would allow a more accurate comparison of overall computational cost. Nevertheless, QUOTR performs fairly well, with a median of 18 Fock builds to converge. Apart from Si_2 and NO, the highest number of Fock builds for QUOTR is for ClO at 40, which is below the maximum number of iterations for GDM. The only system which did not converge for DIIS in 256 iterations was HCO. The local minimum found by QUOTR (relative to the DIIS solution) was CO, while the local minimum found by DIIS (relative to QUOTR) was Si_2. Next, comparison to ETDM is more difficult for a few reasons. The data presented in <ref> are for KS-DFT with the PBE functional, used a different basis (the double-zeta polarized default of GPAW), and employed frozen core orbitals. We used the 6-31G** basis for this comparison because it is a double-zeta basis with polarization functions on all atoms, so it should be more similar to the basis used by ETDM than the other two orbital bases we use. For QUOTR, only PH_3 did not converge initially, but it converged after tightening the Fock build precision. 
The average number of iterations (unclear if it is mean or median) reported for ETDM was 17, which compares well with the median for QUOTR. We will now take a closer look at the results for G2-2/6-31G*. A breakdown of the convergence statistics for QUOTR on G2-2/6-31G* is displayed in <ref>. The designation “Before L-BFGS" indicates that the statistics are for the purely line-search early iterations, before the first quasi-Newton step, while the designation “Line Search" refers to all uses of line search (which may be triggered later in the SCF process by, e.g., the gradient becoming large again). From the results in <ref> we see that on average only 7 Fock builds are needed before the gradient is small enough that quasi-Newton steps begin. This is consistent with the average of 3 line-search iterations before starting L-BFGS, because each line search takes 2 Fock builds and we need one initial Fock build. These results are mostly an indication of the quality of the initial guess. A histogram of the number of Fock builds for this data set is displayed in <ref>. Based on the histogram, we can see that there are only a few outliers. The systems with more than 40 total Fock builds are displayed in <ref>. From this table we notice that many of these are open-shell systems, indicating that QUOTR has a difficult time with this situation. §.§ Comparison with Exact Hessian It is natural to ask how QUOTR compares to using the exact Hessian: in terms of N_F, is it worthwhile to use the exact Hessian, or is L-BFGS sufficient? To make this comparison, we used the Co-iterative augmented Hessian (CIAH)<cit.> method available in PySCF<cit.> and counted the number of computational steps equivalent to Fock builds; the statistics are displayed in <ref>. For CIAH the total number of equivalent Fock builds is the sum of the key frames (KF) and Coulomb/exchange (JK) calls. <cit.> The KF count the number of times that the gradient is evaluated exactly; in all other cases the gradient is evaluated approximately. <cit.> It should be noted that the initial guess we use for QUOTR (extended-Hückel) is not the same as that used by CIAH (SOAD from a minimal basis), but in <ref> we consider the comparison fair since each method uses its default guess. Also, the energy is converged much more tightly in QUOTR (1 × 10^-9 E_h), while CIAH was only converged to 1 × 10^-6 E_h. For CIAH, one system (HCl) did not converge, as it got stuck with a gradient norm of 1.2 × 10^-6 and then simply used 3 N_F per iteration until the maximum number of iterations was reached. The system with the next highest N_F was NF_3 at 77. The system that took the most Fock builds for QUOTR was BCl_3 at 191. Looking at the median values in <ref>, we see that QUOTR takes about 1/3 fewer effective Fock builds. For 14 out of the 148 systems, CIAH got a significantly lower energy than QUOTR (by more than 1 × 10^-9 E_h), and so the comparison with QUOTR might not be considered fair. We reran QUOTR for the 14 systems that were local minima relative to CIAH, but applied a small, random unitary rotation to the initial guess to break the symmetry. This symmetry breaking allowed all 148 systems to converge to the same energy as CIAH (within 1 × 10^-9 E_h) or lower. The statistics with the corrected 14 systems are displayed in the rightmost column in <ref>. Notice that the median does not change, and the mean gets a little worse. 
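This kind of symmetry-breaking rerun generalizes to a simple seed scan, which is the protocol used next. Everything in the sketch below is hypothetical glue code: `make_guess`, `perturb_guess`, and `run_scf` are stand-ins for whatever solver interface is actually available, not QUOTR or MPQC APIs.

```python
def scan_seeds(system, make_guess, perturb_guess, run_scf,
               strength=0.01, seeds=range(50)):
    """Rerun SCF with differently seeded random rotations of the initial
    guess and collect the (energy, n_fock_builds, converged) result per seed.

    All callables are placeholders; `strength` and the seed range are
    illustrative choices only.
    """
    base_guess = make_guess(system)
    results = {}
    for seed in seeds:
        guess = perturb_guess(base_guess, strength=strength, seed=seed)
        results[seed] = run_scf(system, guess)
    return results
```

Collecting the converged energies over many seeds is a cheap way to see whether a given solution is an isolated local minimum or is reached robustly once the symmetry constraint is lifted.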
To understand how these random perturbations to the initial guess impact convergence, we ran AlCl_3 (the system with the largest energy difference relative to CIAH) with 50 different seeds for the random number generator. The plot in <ref> shows that, when the minimum-energy solution is accessible by symmetry, QUOTR robustly converges to it. To make a comparison starting from exactly the same initial energy, we used the core Hamiltonian guess (“hcore") because it is unambiguous, and we were able to match the initial energy to better than 9 digits between CIAH and QUOTR. The convergence criterion for QUOTR was changed to match that of CIAH, using a threshold of 1 × 10^-6 for the norm of the unique elements of the gradient. The results for RHF computations of 10 small molecules in the G2 set are displayed in <ref>. From the data in <ref> we see that in all cases QUOTR is able to converge to the same energy (starting from the same energy) in fewer effective Fock builds. Overall, the median “N_F" required by CIAH is nearly three times that required by QUOTR. §.§ Large Calculations To demonstrate the performance of QUOTR for larger systems, we calculated both HF and LDA solutions for a neuropeptide containing 75 atoms (1PLW).<cit.> This example shows the robustness of our method, as it is a system that a diagonalization-based solver could not converge for LDA. <cit.> In <ref> we show the convergence of SCF for 1PLW for both HF and LDA using both QUOTR and RH/DIIS. The two panels show two different initial guesses, demonstrating qualitatively similar comparisons between the methods, but roughly twice as many iterations for QUOTR in the LDA case. For HF, both methods converge at a similar rate based on iterations. The more difficult case of LDA shows that RH/DIIS oscillates and does not converge, while QUOTR does converge. The slow convergence of QUOTR here is likely due to the preconditioner shift. As shown in <ref>, the LDA calculation is really a zero-gap system, so even near convergence the preconditioner will give a significant shift from the true 1-electron Hessian diagonal. We do apply one later history reset when the gradient norm per element drops below 1× 10^-6 to help make the preconditioner more accurate later in the calculation; the last recomputation of the preconditioner occurs at iteration 41 for the “Not Perturbed" calculation and at iteration 27 for the “Perturbed" calculation. The converged energies for the HOMO and LUMO, along with the gap, are displayed in <ref>. The sizeable gap found for HF is within 0.01 eV of previous work (7.25 eV). <cit.> For LDA, the gap turns out to be nearly zero! §.§ Difficult Systems We have now shown that QUOTR is competitive with standard direct-minimization SCF solvers for simple cases (G2 set), and robust enough to converge nearly zero-gap systems that RH/DIIS cannot converge (1PLW). To investigate both the robustness and efficiency of various SCF solvers, we now turn to two small systems that are known to be difficult to converge: Cr_2 and CrC in their singlet states. In <ref> we display the number of Fock builds necessary to converge SCF using the Orca default convergence criterion of 5× 10^-5 for the gradient norm, and the “hcore" initial guess. Although the “hcore" initial guess is known to be poor, it is one of the only guesses that can easily and consistently be constructed by different programs to produce the same initial energy. 
So, it should be emphasized here that these results are for situations that are doubly difficult: bad initial guess and complicated electronic structure. Comparing QUOTR to TRAH, we see that in all cases QUOTR requires fewer Fock builds. The largest error in converged energies for QUOTR was for CrC with B3LYP, which was higher than TRAH by 1.1 × 10^-6 E_h. It should be noted that TRAH uses a random number in one of the Davidson diagonalization start vectors which helps break symmetry, while for QUOTR we apply a small random unitary rotation to the initial guess to help break symmetry. Thus, the hcore guess is actually not exactly what is used, and the QUOTR initial guess is usually a little higher in energy (  1 - 5 of E_h). Additionally, the strength of the random perturbation is important since for the LDA results for Cr_2 QUOTR converged to a local minimum when the maximum kappa element was 0.001, but increasing it to 0.01 allowed the lower energy solution that matched TRAH to be found. The results for DIIS and KDIIS look promising from the low number of Fock builds; however, local minima are very common, so these low numbers are deceiving. For comparison to CIAH, a slightly different version of hcore guess is used, due to PySCF using this different idea of what hcore means. PySCF performs 1 cycle of Fock build then diagonalization for its version of hcore. The initial energy for this guess is generally higher than for the other definition. The first thing to notice is that QUOTR takes more Fock builds than CIAH for RHF, but fewer Fock builds for the other two methods. However, CIAH converges to a local minimum for both systems when using RHF, indicating that QUOTR is providing the more optimal solution. § CONCLUSION We have presented a quasi-Newton direct minimization SCF solver that combines the L-BFGS approximation to the orbital Hessian with the trust-region step restriction method in such a way that the low-rank structure of this Hessian can be efficiently used when solving the trust-region problem. This trust-region quasi-Newton minimization solver (QUOTR) is economical in terms of Fock builds and performs similarly to other solvers. Comparison to an exact Hessian method (CIAH) shows that QUOTR requires far fewer effective Fock builds to reach the same level of convergence. A key advantage of QUOTR (and direct minimization solvers in general) was demonstrated for a biological system where a diagonalization solver was not able to converge. Further work to understand the pathologies of KS-DFT is underway with QUOTR. This work was supported by the U.S. Department of Energy via award DE-SC0022327.
http://arxiv.org/abs/2307.03023v1
20230706143800
Convergence rate of entropy-regularized multi-marginal optimal transport costs
[ "Luca Nenna", "Paul Pegon" ]
math.OC
[ "math.OC", "math.AP", "math.FA", "49Q22, 49N15, 94A17, 49K40" ]
Convergence rate of entropy-regularized multi-marginal optimal transport costs Luca NennaUniversité Paris-Saclay, CNRS, Laboratoire de mathématiques d'Orsay, 91405, Orsay, France. email: luca.nenna@universite-paris-saclay.fr and Paul PegonCEREMADE, Université Paris-Dauphine, Université PSL, CNRS, Mokaplan, Inria Paris, 75016 Paris, France. email: pegon@ceremade.dauphine.fr August 1, 2023 ================================================================================================================================================================================================================================================================================================================ We investigate the convergence rate of multi-marginal optimal transport costs that are regularized with the Boltzmann-Shannon entropy, as the noise parameter ε tends to 0. We establish lower and upper bounds on the difference with the unregularized cost of the form Cεlog(1/ε)+O(ε) for some explicit dimensional constants C depending on the marginals and on the ground cost, but not on the optimal transport plans themselves. Upper bounds are obtained for Lipschitz costs or locally semi-concave costs for a finer estimate, and lower bounds for ^2 costs satisfying some signature condition on the mixed second derivatives that may include degenerate costs, thus generalizing results previously in the two marginals case and for non-degenerate costs. We obtain in particular matching bounds in some typical situations where the optimal plan is deterministic. Keywords. optimal transport, multi-marginal optimal transport, entropic regularization, Schrödinger problem, convex analysis, Rényi dimension. 2020 Mathematics Subject Classification. Primary: 49Q22 ; Secondary: 49N15, 94A17, 49K40. § NOTATIONS X[2,c,m] X[12,l,p] X_i a subset of ^N for any index i∈{1, …, m} X product space X_1×…× X_m; x_i,x,x a point in X_i, in some X_j, j∈{1,…,m}, and in X respectively; x_q (x_i)_i∈ q if q ⊆{1, …, m} and x ∈ X; · Euclidean norm on ^N; · norm on ^N×…×^N defined by x = max_1≤ i ≤ mx_i if x = (x_1,…,x_m); B_r(x),B_r( x) open ball of radius r centered at x∈^N or x∈ (^N)^m for the above norms; _^0,1(X) space of real-valued locally Lipschitz functions on X which is a sub-manifold of ^N or (^N)^m; [f]_^0,1(X) Lipschitz constant of f : X → where X is a subset of ^N or (^N)^m for the above norms; _^1,1(X) space of differentiable real-valued functions on X, a sub-manifold of ^N or (^N)^m, with locally Lipschitz differential; (X) space of probability measures on a metric space X; μ support of the measure μ; M_N() space of real matrices of size N × N, endowed with the Frobenius norm induced by the scalar product A · B (A^T B), for A,B∈ M_N(); S_N() subspace of real symmetric matrices of size N × N; ^s_X s-dimensional Hausdorff measure on the metric space X endowed with the Borel σ-algebra (the subscript X will often be dropped); Δ_P simplex of P-uples t = (t_p)_p∈ P such that t_p≥ 0 for all p∈ P and ∑_p∈ Pt_p=1. § INTRODUCTION We consider a m-uple of probability measures μ_i compactly supported on sub-manifolds X_i⊆^Nof dimension d_i and a cost function c : X_1 ×…× X_m →_+. The Entropic Multi-Marginal Optimal Transport problem is defined as : MOT__inf{∫_X_1 ×…× X_m cγ + (γ | ⊗_i=1^m μ_i) | γ∈Π(μ_1, …, μ_m)}, where Π(μ_1, …, μ_m) denotes the set of all probability measures γ having μ_i for i ∈{1,…,m} as marginals. The classical multi-marginal optimal transport problem corresponds to the case where =0. 
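For readers who want to experiment numerically, the discretized problem can be solved by the Sinkhorn-type iterations discussed in the next paragraph. The following Python sketch for three marginals on small grids is purely illustrative and not part of this paper's analysis: the dense-tensor implementation, the fixed iteration count, and the numerical safeguards are our own choices.

```python
import numpy as np

def scaled_plan(K, mus, u):
    """Plan of the form (prod_i u_i(x_i) mu_i(x_i)) * K(x_1, x_2, x_3)."""
    w = [u_i * m_i for u_i, m_i in zip(u, mus)]
    return K * w[0][:, None, None] * w[1][None, :, None] * w[2][None, None, :]

def mmot_sinkhorn(C, mus, eps, n_iter=2000):
    """Sinkhorn (IPFP) iterations for the discrete three-marginal problem
    min_G <C, G> + eps * KL(G | mu1 x mu2 x mu3), with marginals(G) = mus."""
    K = np.exp(-C / eps)                    # Gibbs kernel exp(-c/eps)
    u = [np.ones_like(m) for m in mus]      # scaling vectors exp(phi_i/eps)
    for _ in range(n_iter):
        for i in range(3):
            axes = tuple(j for j in range(3) if j != i)
            marg = scaled_plan(K, mus, u).sum(axis=axes)
            u[i] *= mus[i] / np.maximum(marg, 1e-300)
    G = scaled_plan(K, mus, u)
    Q = mus[0][:, None, None] * mus[1][None, :, None] * mus[2][None, None, :]
    kl = np.sum(G * np.log(np.maximum(G, 1e-300) / np.maximum(Q, 1e-300)))
    return G, float((C * G).sum() + eps * kl)
```

Running this for a decreasing sequence of eps and comparing the returned value with the unregularized optimum gives a rough numerical check of the rates derived below; of course the discretization introduces its own error, so such an experiment is only indicative.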
In the last decade, these two classes of problems (entropic optimal transport and multi-marginal optimal transport) have witnessed a growing interest and they are now an active research topic. Entropic optimal transport (EOT) has found applications and proved to be an efficient way to approximate Optimal Transport (OT) problems, especially from a computational viewpoint. Indeed, when it comes to solving EOT by alternating Kullback-Leibler projections on the two marginal constraints, by the algebraic properties of the entropy such iterative projections correspond to the celebrated Sinkhorn's algorithm <cit.>, applied in this framework in the pioneering works <cit.>. The simplicity and the good convergence guarantees (see <cit.>) of this method compared to the algorithms used for the OT problems, then determined the success of EOT for applications in machine learning, statistics, image processing, language processing and other areas (see the monograph <cit.> and references therein). As concerns multi-marginal optimal transport (MOT), it arises naturally in many different areas of applications, including economics <cit.>, financial mathematics <cit.>, statistics <cit.>, image processing <cit.>, tomography <cit.>, machine learning <cit.>, fluid dynamics <cit.> and quantum physics and chemistry, in the framework of density functional theory <cit.>. The structure of solutions to the multi-marginal optimal transport problem is a notoriously delicate issue, and is still not well understood, despite substantial efforts on the part of many researchers <cit.>; see also the surveys <cit.> and <cit.>. Since _ can be seen a perturbation of _0, it is natural to study the behaviour as vanishes. In this paper we are mainly interested in investigating the rate of convergence of the entropic cost _ to _0 under some mild assumptions on the cost functions and marginals. In particular we are going to extend the techniques introduced in <cit.> for two marginals to the multi-marginal case which will also let us generalize the bounds in <cit.> to the case of degenerate cost functions. For the two marginals and non-degenerate case we also refer the reader to a very recent (and elegant) paper <cit.> where the authors push a little further the analysis of the convergence rate by disentangling the roles of ∫ cγ and the relative entropy in the total cost and deriving convergence rate for both these terms. Notice that concerning the convergence rate of the entropic multi-marginal optimal transport an upper bound has been already established in <cit.>, which depends on the number of marginals and the quantization dimension of the optimal solutions to MEOTpb with = 0. Here we provide a improved, smaller, upper bound, which will depend only on the marginals, but not on the optimal transport plans for the un-regularized problem, and we also provide a lower bound depending on a signature condition on the mixed second derivatives of the cost function, that was introduced in <cit.>. Our main findings can be summarized as follows: we establish two upper bounds, one valid for locally Lipschitz costs and a finer one valid for locally semi-concave costs. The proofs rely, as in <cit.>, on a multi-marginal variant of the block approximation introduced in <cit.>. Notice that in this case the bound will depend only on the dimension of the support of the marginals. 
Moreover, for locally semi-concave cost functions, by exploiting Alexandrov-type results as in <cit.>, we improve the upper bound by a 1/2 factor, obtaining the following inequality for some C^*∈_+ _≤_0 + 1/2(∑_i=1^m d_i-max_1≤ i≤ md_i )log(1/) + C^*. We stress that this upper bound is smaller than or equal to the one provided in <cit.>, which is of the form 1/2 (m-1)D log(1/) + O() where D is a quantization dimension of the support of an optimal transport plan. Indeed, D must be greater than or equal to the maximum dimension of the support of the marginals, and of course ∑_i=1^m d_i - max_1≤ i ≤ m d_i ≤ (m-1) max_1≤ i≤ m d_i. The inequality may be strict, for example in the two marginals case with unequal dimensions, as shown in <Ref>. For the lower bound, from the dual formulation of MEOTpb we have _≥_0-log∫_Π_i=1^mX_ie^-E(x_1,…,x_m)/⊗_i=1^mμ_i(x_i), where E(x_1,…,x_m)=c(x_1,…,x_m)- ⊕_i=1^mϕ_i(x_i) is the duality gap and (ϕ_1,…,ϕ_m) are Kantorovich potentials for the un-regularized problem MEOTpb with = 0. By using the singular value decomposition of the bilinear form obtained as an average of mixed second derivatives of the cost, together with a signature condition introduced in <cit.>, we are able to prove that E detaches quadratically from the set {E=0}; this allows us to estimate the previous integral in the desired way, as in <cit.>, and to improve the results in <cit.>, where only an upper bound depending on the quantization dimension of the solution to the un-regularized problem is provided. Moreover, this slightly more flexible use of Minty's trick compared to <cit.> allows us to obtain a lower bound also for degenerate cost functions in the two marginals setting. Given a κ depending on a signature condition (see signature-condition) on the second mixed derivatives of the cost, the lower bound can be summarized as follows _≥_0 + κ/2log(1/) - C_* . for some C_*∈_+. The paper is organized as follows: in <Ref> we recall the multi-marginal optimal transport problem and some results concerning the structure of the optimal solution, in particular the ones in <cit.>, and define its entropy regularization. <Ref> is devoted to the upper bounds stated in <Ref> and <Ref>. In <Ref> we establish the lower bound stated in <Ref>. Finally, in <Ref> we provide some examples for which we can get matching bounds. § PRELIMINARIES Given m compactly supported probability measures μ_i on sub-manifolds X_i of dimension d_i in ℝ^N for i=1,2,…,m and a continuous cost function c: X_1 × X_2 ×…× X_m →ℝ_+, the multi-marginal optimal transport problem consists in solving the following optimization problem MOT_0inf_γ∈Π(μ_1,…,μ_m)∫_ X c(x_1,…,x_m)γ where X X_1 × X_2 ×…× X_m and Π(μ_1,…,μ_m) denotes the set of probability measures on X whose marginals are the μ_i. The formulation above is also known as the Kantorovich problem, and it amounts to a linear minimization problem over a convex, weakly compact set; it is then not difficult to prove the existence of a solution by the direct method of the calculus of variations. Much of the attention in the optimal transport community is rather focused on uniqueness and the structure of the minimizers. In particular, one is mainly interested in determining if the solution is concentrated on the graph of a function (T_2,…,T_m) over the first marginal, where (T_i)_♯μ_1=μ_i for i=2,…,m, in which case this function induces a solution à la Monge, that is γ=(𝕀,T_2,…,T_m)_♯μ_1. 
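To make the pushforward notation concrete, here is a small discrete illustration of our own (the index-valued maps and names are arbitrary): a Monge-type plan puts all the mass of each point x_1 on the single tuple (x_1, T_2(x_1), …, T_m(x_1)), and its marginals are μ_1 and the pushforwards (T_i)_♯μ_1.

```python
import numpy as np

def monge_plan(mu1, maps):
    """Discrete Monge-type plan gamma = (Id, T_2, ..., T_m)_# mu1.

    `mu1` is a probability vector on {0, ..., n-1}; `maps` is a list of
    integer arrays of length n, with maps[j][k] giving the index T_{j+2}(k).
    """
    n = len(mu1)
    gamma = np.zeros((n,) * (len(maps) + 1))
    for k in range(n):
        idx = (k,) + tuple(int(T[k]) for T in maps)
        gamma[idx] = mu1[k]
    return gamma

# gamma.sum over all axes except axis 0 recovers mu1, while summing over all
# axes except axis j recovers the pushforward of mu1 by the j-th map.
```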
In the two marginals setting, the theory is fairly well understood and it is well-known that under mild conditions on the cost function (e.g. twist condition) and marginals (e.g. being absolutely continuous with respect to Lebesgue), the solution to pb:mmot is unique and is concentrated on the graph of a function ; we refer the reader to <cit.> to have glimpse of it. The extension to the multi-marginal case is still not well understood, but it has attracted recently a lot of attention due to a diverse variety of applications. In particular in his seminal works <cit.> Pass established some conditions, more restrictive than in the two marginals case, to ensure the existence of a solution concentrated on a graph. In this work we rely on the following (local) result in <cit.> giving an upper bound on the dimension of the support of the solution to pb:mmot. Let P be the set of partitions of {1,…,m} into two non-empty disjoint subsets: p={p_-,p_+}∈ P if p_-⋃ p_+={1,…,m}, p_-⋂ p_+=∅ and p_-,p_+≠∅. Then for each p∈ P we denote by g_p the bilinear form on X as g_p=D^2_p_- p_+ c+ D^2_p_+ p_-c where D^2_p q c ∑_i∈ p,j∈ qD^2_x_i x_jc for every p,q ⊆{1,…,m}, and D^2_x_ix_j c∑_α_i,α_j∂^2 c/x_i^α_i x_k^α_k x_i^α_i⊗ x_j^α_j, defined for every i,j on the whole tangent bundle T 𝐗. Define G{∑_p∈ Pt_pg_p | (t_p)_p∈ P∈Δ_P} to be the convex hull generated by the g_p, then it is easy to verify that each g∈ G is symmetric and therefore its signature is well defined. Then, the following result from <cit.> gives a control on the dimension of the support of the optimizer(s) in terms of these signatures. Let γ a solution to pb:mmot and suppose that at some point x∈ X, the signature of some g∈ G is (d^+,d^-,d^0), that is the number of positive, negative and zero eigenvalues. Then, there exists a neighbourhood N_ x of x such that N_ x⋂γ is contained in a Lipschitz sub-manifold of X with dimension no greater than ∑_i=1^m d_i-d^+(g( x)). For the following it is important to notice that by standard linear algebra arguments we have for each g∈ G that d^+(g)≤∑_i=1^m d_i-max_id_i. This implies that the smallest bound on the dimension of γ which <Ref> can provide is max_i d_i. When m=2, the only g∈ G coincides precisely with the pseudo-metric introduced by Kim-McCann in <cit.>. Assuming for simplicity that d_1=d_2=d, they noted that g has signature (d,d,0) whenever c is non-degenerate so <Ref> generalizes their result since it applies even when non-degeneracy fails providing new information in the two marginals case: the signature of g is (r,r,2d-2r) where r is the rank of D^2_x_1 x_2c. Notice that this will help us to generalize the results established in <cit.> to the case of a degenerate cost function. It is well known that under some mild assumptions the Kantorovich problem pb:mmot is dual to the following MDsup{∑_i=1^m∫_X_iϕ_i(x_i)μ_i | ϕ_i ∈_b(X_i),∑_i=1^mϕ_i(x^i) ≤ c(x_1,…,x_m)}. We recall the entropic counterpart of pb:mmot: given m probability measures μ_i on X_i as before, and a continuous cost function c: X →ℝ_+ the _ problem is *MEOTpb_ = inf_γ∈Π(μ_1, …, μ_m)∫_ X cγ + (γ | ⊗_i=1^m μ_i), where (·|⊗_i=1^m μ_i) is the Boltzmann-Shannon relative entropy (or Kullback-Leibler divergence) w.r.t. the product measure ⊗_i=1^m μ_i, defined for general probability measures p,q as (p | q) = ∫_^dρlog(ρ) q if p = ρ q, +∞ otherwise. The fact that q is a probability measure ensures that (p | q) ≥ 0. 
The dual problem of MEOTpb reads as MD__ = + sup{∑_i=1^m∫_X_iϕ_i(x_i)μ_i -∫_ Xe^∑_i=1^mϕ_i(x_i)-c( x)/⊗_i=1^m μ_i | ϕ_i ∈_b(X_i)}, which is invariant by (ϕ_1,…,ϕ_m)↦(ϕ_1+λ_1,…,ϕ_m+λ_m) where (λ_1,…,λ_m)∈^m and ∑_i=1^mλ_i=0; see <cit.> for some recent presentations. It admits an equivalent log-sum-exp form: MD_'_ = sup{∑_i=1^m∫_X_iϕ_i(x_i)μ_i -log(∫_ Xe^∑_i=1^mϕ_i(x_i)-c( x)/⊗_i=1^m μ_i) | ϕ_i ∈_b(X_i)}, which is invariant by the same transformations without assuming ∑_i=1^mλ_i=0. From MEOTpb and pb:dual_meot we recover, as → 0, the unregularized multi-marginal optimal transport pb:mmot and its dual pb:mmot_dual introduced above. The link between multi-marginal optimal transport and its entropic regularization is very strong: a consequence of the Γ-convergence of MEOTpb towards pb:mmot (one can adapt the proof in <cit.>, or see <cit.> for Γ-convergence in some specific cases) is that lim_→ 0_=_0. By the direct method in the calculus of variations and the strict convexity of the entropy, one can show that MEOTpb admits a unique solution γ_, called the optimal entropic plan. Moreover, there exist m real-valued Borel functions ϕ^_i such that γ_ = exp(⊕_i=1^m ϕ^_i - c/) ⊗_i=1^m μ_i, where ⊕_i=1^m ϕ^_i (x_1,…,x_m) ↦∑_i=1^mϕ_i^(x_i), and in particular we have that _ = ∑_i=1^m ∫_X_iϕ^_i μ_i; these functions, which have continuous representatives, are a.s. uniquely determined up to additive constants. The reader is referred to the analysis of <cit.>, to <cit.> for the extension to the multi-marginal setting, and to <cit.> for earlier references on the two marginals framework. The functions ϕ^_i in eq:structure are called Schrödinger potentials, the terminology being motivated by the fact that they solve the dual problem pb:dual_meot. Furthermore, the ϕ^_i are the (unique) solutions to the so-called Schrödinger system ϕ_i(x_i) = -log∫_ X_-ie^⊕_j=1,j≠ i^m ϕ^_j-c( x)/ ⊗_j=1,j≠ i^m μ_j for μ_i-a.e. x_i, ∀ i=1,…,m, where X_-i=Π_j=1,j≠ i^m X_j. Note that eq:Ssystem is a softmin version of the multi-marginal c-conjugacy relation for Kantorovich potentials. § UPPER BOUNDS We start by establishing an upper bound, which will depend on the dimension of the marginals, for locally Lipschitz cost functions. We will then improve it for locally semi-concave (in particular ^2) cost functions. §.§ Upper bound for locally Lipschitz costs The natural notion of dimension which arises in our context is the entropy dimension, also called information dimension or Rényi dimension <cit.>. If μ is a probability measure over a sub-manifold X of ^d, we set for every δ > 0, H_δ(μ) = inf{∑_n∈μ(A_n) log(1/μ(A_n)) : ∀ n, (A_n) ≤δ, and X = _n∈ A_n}, where the infimum is taken over countable partitions (A_n)_n∈ of X by Borel subsets of diameter less than δ, and we define the lower and upper entropy dimension of μ respectively by: _R(μ) lim inf_δ→ 0^+H_δ(μ)/log(1/δ), _R(μ) lim sup_δ→ 0^+H_δ(μ)/log(1/δ). Notice that if μ is compactly supported on a Lipschitz manifold of dimension d, then log N_δ(μ) ≤ dlog(1/δ) + C for some constant C > 0 and δ∈ (0,1], where N_δ(μ) is the box-counting number of μ, i.e. the minimal number of sets of diameter δ > 0 which cover μ. In particular, by concavity of t↦ tlog(1/t), we have H_δ(μ) ≤log N_δ(μ). We refer to the beginning of <cit.> for additional information and references on Rényi dimension. The following proposition establishes an upper bound for locally Lipschitz costs. 
Assume that μ_i ∈(X_i) for i ∈{1,…,m} are compactly supported measures on Lipschitz sub-manifolds of dimension d_i, respectively, and c is of class ^0,1_( X), then _≤_0 + (∑_i=1^m d_i-max_j=1,…,md_j )log(1/) + O(). Given an optimal plan γ_0 for _0, we use the so-called block approximation introduced in <cit.>. For every δ > 0 and i, consider a partition X_i = _n∈ A^n_i of Borel sets such that (A^n_i) ≤δ for every n∈, and set μ^n_iμ_i A^n_i/μ_i(A^n_i) if μ_i(A^n_i) > 0, 0 otherwise, then for every m-uple n = (n_1,…,n_m)∈^m, (γ_0)^n γ_0(A^n_1_1×…× A^n_m_m) μ^n_1_1⊗…⊗μ_m^n_m, and finally, γ_δ∑_n∈^m (γ_0)^n. By definition, γ_δ≪⊗_i μ_i and we may check that its marginals are μ_i. Besides, for ⊗_i μ_i-almost every x = (x_1,…,x_m) ∈∏_i=1^m A^n_i_i, γ_δ/⊗_i μ_i(x_1,…,x_m) γ_0(A^n_1_1×…× A^n_m_m)/μ_1(A^n_1_1) …μ_m(A^n_m_m) if μ_1(A^n_1_1) …μ_m(A^n_m_m) > 0 0 otherwise. Let us compute its entropy and assume for simplicity that the measure μ_m is the one such that _R(μ_m)=max_i=1,…,m(μ_i): (γ_δ | ⊗_i=1^m μ_i) = ∑_n∈^m∫_Π_i=1^mA^n_i_ilog(γ_0(A^n_1_1×…× A^n_m_m)/μ_1(A^n_1_1) …μ_m(A^n_m_m)) γ_δ = ∑_n∈^mγ_0(A^n_1_1×…× A^n_m_m)) log(γ_0(A^n_1_1×…× A^n_m_m)/μ_1(A^n_1_1) …μ_m(A^n_m_m)) = [t] ∑_n∈^mγ_0(A^n_1_1×…× A^n_m_m) log(γ_0(A^n_1_1×…× A^n_m_m)/μ_m(A^n_m_m)) + ∑_j=1^m-1∑_n∈^mγ_0(A^n_1_1×…× A^n_m_m) log(1/μ_j(A^n_j_j)) = [t] ∑_n∈^mγ_0(A^n_1_1×…× A^n_m_m) log(γ_0(A^n_1_1×…× A^n_m_m)/μ_m(A^n_m_m)) + ∑_j=1^m-1∑_n∈^m-1μ_j(A^n_j_j) log(1/μ_j(A^n_j_j)) ≤∑_j=1^m-1∑_n∈^m-1μ_j(A^n_j_j) log(1/μ_j(A^n_j_j)) . the last inequality coming from the inequality γ_0(A^n_1_1×…× A^n_m_m) ≤μ_m(A^n_m_m). Taking 1-optimal partitions (A^n_j)_n∈ of diameter smaller than δ in the definition of H_δ(μ_j), we get by definition of H_δ, (γ_δ | ⊗_i=1^m μ_i) ≤∑_j=1^m-1 H_δ(μ_j)+1. Notice that W_∞(γ_δ,γ_0) ≤δ (taking the ℓ^∞ norm), thus taking γ_δ as competitor in MEOTpb we obtain _ ≤∫ c γ_δ + ∑_j≤ m-1 H_δ(μ_j) + ≤_0 + ∑_j_1,…,j_m∫_B_j_1,…,j_m c (γ_δ - γ_0) + ∑_j≤ m-1H_δ(μ_j)+ ≤_0 + ∑_j_1,…,j_m[c]_^0,1(B_j_1,…,j_m) W_1(γ_δ B_j_1,…,j_m,γ_0 B_j_1,…,j_m) + ∑_j≤ m-1H_δ(μ_j)+ ≤_0 + L δ + ∑_j≤ m-1H_δ(μ_j)/log(1/δ)log(1/δ)+ , where we have considered a covering of each sub-manifold X_i, that is X_i⊆⋃_j_i B_δ_0(a_i^j_i) with δ≤δ_0, B_j_1,…,j_m=B_δ_0(a_1^j_1)×…× B_δ_0(a_m^j_m) for all j_1,…,j_m and L=∑_j_1,…,j_m[c]_^0,1(B_j_1,…,j_m). Taking δ = and considering that μ_i are concentrated on sub-manifolds of dimension d_i then H_δ(μ_i) ≤ d_i log(1/δ) + C^*-1 for some C^*≥ 1 and we get _≤_0 + (∑_j≤ m-1 d_j) log(1/) + C^*. Notice that by taking m=2 and d_1=d_2=d, one easily retrieves the results in <cit.>. Besides, if the μ_i's are merely assumed to have compact support (not necessarily supported on a sub-manifold), the same proof actually shows the slightly weaker estimate _≤_0 + (∑_i=1^m _R(μ_i)-max_j=1,…,m_R(μ_j) )log(1/) + o(log(1/). §.§ Upper bound for locally semi-concave costs We provide now a finer upper bound under the additional assumptions that the X_i's are ^2 sub-manifolds of ^N, c is locally semi-concave as in <Ref> (in particular if c∈^2( X,_+)), and the μ_i's are measures in L^∞(^d_i_X_i) with compact support in X_i. A function f : X → defined on a ^2 sub-manifold X ⊆^N of dimension d is locally semiconcave if for every x∈ X there exists a local chart (i.e. a ^2 diffeomorphism) ψ : U→Ω where U ⊆ X is an open neighborhood of x and Ω is an open convex subset of ^d, such that f∘ψ^-1 is λ-concave for some λ∈, meaning f∘ψ^-1- λ·^2 is concave on Ω. 
Let c : X →_+ be a locally semiconcave cost function and (ϕ_i)_1≤ i ≤ m∈∏_≤ i≤ m(K_i) be a system of c-conjugate functions defined on compact subsets K_i⊆ X_i. We can find for every i∈{ 1,…, m} a finite open covering (U_i^j)_1≤ j≤ J of K_i and local charts ψ_i^j : U_i^j →Ω_i^j such that for some λ∈: * for every j = (j_1, …, j_m) ∈{1,…,J}^m, c ∘ (ψ^j)^-1 is λ-concave on Ω^ j∏_1≤ i≤ mΩ_i^j_i, * for every (i,j) ∈{1,…, m}×{1,…, J}, ϕ_i ∘ (ψ_i^j)^-1 is λ-concave on Ω_i^j. In particular, all the ϕ_i's are locally semiconcave. For every i, by compactness of the K_i's we can find a finite open covering (U_i^j)_1≤ j≤ J of K_i and local charts ψ_i^j : U_i^j →Ω_i^j such that for every j = (j_1, …, j_m) ∈{1,…, J}^m, c ∘ (ψ^j)^-1 - λ^j·^2 is concave for some λ^j∈, where ψ^j (ψ_1^j_1, …, ψ_m^j_m). We may assume that λ_ j = λ for every j, by taking λ = max{λ_ j : j∈{1,…,J}^m}. Fix i ∈{1,…,m}, j ∈{1,…,J}, then for every k = (k_ℓ)_ℓ≠ i∈{1,…,J}^m-1 set Ω^k = ∏_ℓ≠ iΩ_ℓ^k_ℓ and ψ^ k = (ψ_1^k_1, …, ψ_i-1^k_i-1, ψ_i^j, ψ_i+1^k_i+1, …). Notice that for every y∈Ω_i^j, ϕ_i∘(ψ_i^j)^-1(y) = inf_(x_ℓ)_ℓ≠ i c(x_1,…, x_i-1,(ψ_i^j)^-1(y),x_i+1, …) - ∑_ℓ≠ iϕ_ℓ(x_ℓ) = min_ kinf_(y_ℓ)_ℓ≠ i∈Ω^k c∘ (ψ^ k)^-1(y_1, …, y_i-1,y,y_i+1,…) - ∑_ℓ≠ iϕ_ℓ∘ (ψ_ℓ^k_ℓ)^-1(y_ℓ), and we see that it is λ-concave as an infimum of λ-concave functions. We are going to use an integral variant of Alexandrov Theorem which is proved in <cit.>. Let f : Ω→ be a λ-concave function defined on a convex open set Ω⊆^d, for some λ≥ 0. There exists a constant C ≥ 0 depending only on d such that: ∫_Ωsup_y∈ B_r(x)∩Ωf(y) - (f(x)+ ∇ f(x) · (y-x)) x ≤ C r^2 ^d-1(∂Ω) ([f]_^0,1(Ω) + λ(Ω)). Let c∈^2( X) and assume that for every i ∈{1,…, m}, X_i ⊆^N is a ^2 sub-manifold of dimension d_i and μ_i ∈ L^∞(^d_i_X_i) is a probability measure compactly supported in X_i. Then there exists a constant C^*≥ 0 such that _≤_0 + 1/2(∑_i=1^m d_i-max_1≤ i≤ md_i )log(1/) + C^*. The measures μ_i being compactly supported in X_i, take for every i ∈{1,…,m} an open subset U_i such that μ_i⊆ U_i ⋐ X_i and define the compact set K_i ⊆U̅_i. Take (ϕ_i)_1≤ i ≤ m∈∏_≤ i≤ m(K_i) a m-uple of c-conjugate Kantorovich potentials and a transport plan γ_0∈Π(μ_1,…,μ_m) which are optimal for the unregularized problems. In particular, E c -⊕_i=1^mϕ_i ≥ 0 on U ∏_1≤ i≤ m U_i, E = 0 on γ_0⊆ U. For every i we consider the coverings (U_i^j)_1≤ j≤ J provided by <Ref> and we notice by compactness that there exists open subsets Ũ_i^j ⋐ U_i^j such that (Ũ_i^j)_1≤ j≤ J is still an open covering of K_i and such that for a small δ_0 > 0, the δ_0-neighbourhood of Ω̃_i^j ψ_i^j(Ũ_i^j) is included in Ω_i^j. For δ∈ (0,δ_0) we consider the block approximation γ_δ of γ_0 built in the proof of <Ref>, as well as κ_δ∈Π(γ_0,γ_δ) such that sup_(x_0, x)∈κ_δx_0-x≤δ. For every j = (j_1,…,j_m) ∈{1,…,m}, we set E^ j E ∘ (ψ^ j)^-1 and U^ j∏_i=1^m U_i^j_i, and we write: ∫_ X c γ_δ - ∫_ X c γ_0 = ∫_ X E γ_δ = ∫_ X× X E(x) κ_δ(x_0,x) ≤∑_ j = (j_1,…, j_m)∫_(x_0, x) ∈Ũ^j×X E(x) κ_δ(x_0,x) ≤∑_ j = (j_1,…, j_m)∫_(x_0, x) ∈ (U^j)^2 E^ j(ψ^ j(x)) κ_δ(x_0,x). Notice that for γ_0-a.e. x_0∈ U^ j, E^ j is differentiable at ψ^ j(x_0), or equivalently E is differentiable at x_0. Indeed c is differentiable everywhere, and for every i,j, ϕ_i ∘ (ψ_i^j)^-1 is semi-concave thus differentiable ^d_i-a.e. and hence ϕ_i is differentiable μ_i-a.e. because μ_i ≪^d_i and ψ_i^j is bi-Lipschitz, which in turn implies that ⊕_i=1^m ϕ is differentiable γ_0-a.e. because γ_0 ∈Π(μ_1,…, μ_m). Moreover, by E_minimal we have T_ψ^ j(x_0) E^ j≡ 0 for γ_0-a.e. x_0∈ U^ j. 
We may then compute: ∫_ X c γ_δ - ∫_ X c γ_0 ≤∑_ j = (j_1,…, j_m)∫_(x_0, x) ∈ (U^j)^2(E^ j(ψ^ j(x))-T_ψ^ j(x_0) E^ j(ψ^ j(x)-ψ^ j(x_0))) κ_δ(x_0,x) =[t] ∑_ j = (j_1,…, j_m)(∫_(x_0, x) ∈ (U^j)^2(c^ j(ψ^ j(x))-T_ψ^ j(x_0) c^ j(ψ^ j(x)-ψ^ j(x_0))) κ_δ(x_0,x) + . . ∑_1≤ i ≤ m∫_(x_0, x) ∈ (U_i^j_i)^2(ϕ_i^j_i(ψ_i^j_i(x))-T_ψ_i^j_i(x_0)ϕ_i^j_i(ψ_i^j_i(x)-ψ_i^j_i(x_0))) (e_i,e_i)_♯κ_δ(x_0,x)). Now, since c^ j is λ-concave on each Ω^ jψ^ j(U^ j), whenever x_0-x≤δ we have c^ j(ψ^ j(x))-T_ψ^ j(x_0) c^ j(ψ^ j(x)-ψ^ j(x_0)) ≤λψ^ j(x)-ψ^ j(x_0)^2/2≤λ L^ j/2δ^2, where L^ jmax{(ψ^ j),((ψ^ j)^-1)}. Besides, we may apply <Ref> to each ϕ_i^j_i over Ω_i^j_i to get ∫_(x_0, x) ∈ (U_i^j_i)^2(ϕ_i^j_i(ψ_i^j_i(x))-T_ψ_i^j_i(x_0)ϕ_i^j_i(ψ_i^j_i(x)-ψ_i^j_i(x_0))) ≤ ∫_x_0 ∈ U_i^j_isup_y ∈ B_L^ jδ(ψ_i^j_i(x_0))∩Ω_i^j_i*ϕ_i^j_i(y)-T_ψ_i^j_i(x_0)ϕ_i^j_i(y-ψ_i^j_i(x_0)) (e_i,e_i)_♯κ_δ(x_0,x) = ∫_U_i^j_isup_y ∈ B_L^ jδ(ψ_i^j_i(x_0))∩Ω_i^j_i*ϕ_i^j_i(y)-T_ψ_i^j_i(x_0)ϕ_i^j_i(y-ψ_i^j_i(x_0))μ_i(x_0) ≤ ∫_Ω_i^j_isup_y ∈ B_L^ jδ(y_0)∩Ω_i^j_i*ϕ_i^j_i(y)-T_y_0ϕ_i^j_i(y-y_0) (ψ_i^j_i)_♯μ_i(y_0) ≤ μ_i_L^∞(^d_i) L^ j C (L^ jδ)^2 ^d_i-1(∂Ω_i^j_i) ([ϕ_i^j_i]_^0,1(Ω_i^j_i) + λ(Ω_i^j_i)) ≤ C^ jδ^2, for some constant C^ j∈ (0,+∞). Reporting ineq_c and ineq_phi in upper_ineq_cost yields ∫_ X c γ_δ - ∫_ X c γ_0 ≤∑_j=1^m (λ L^ j/2 + C^ j) δ^2 C' δ^2. Finally, we proceed as in the end of the proof of <Ref>, taking γ_δ as competitor in the primal formulation MEOTpb, so as to obtain _ - _0 ≤∫_ X c γ_δ - ∫_ X c γ_0 + ∑_i≤ m-1 H_δ(μ_j) + ≤ C' δ^2 + ∑_i≤ m-1 (d_i log(1/δ) + C”), where C”∈ (0,+∞) is a constant such that H_δ(μ_i) ≤ d_i log(1/δ) + C” -1. Taking δ = √() provides the desired estimate _ -_0 ≤1/2(∑_i=1^m-1 d_i)log(1/) + (C'+C”), recalling that the index i=m was chosen merely to simplify notations. § LOWER BOUND FOR C2 COSTS WITH A SIGNATURE CONDITION In this section we consider a cost c ∈^2( X,_+) where X = X_1 ×…× X_m and X_i ⊆^N is a ^2 sub-manifold of dimension d_i and μ_i is a compact subset of X_i for every i ∈{1,…, m}. If g is a bilinear form we denote by (d^+(g),d^-(g),d^0(g)) its signature. Let c ∈^2( X, _+) and (ϕ_1,…, ϕ_m) ∈(K_1)×⋯(K_m) be a system of c-conjugate functions on subsets K_i ⊆ X_i. We set c- ϕ_1⊕…⊕ϕ_m on K K_1 ×…× K_m. Take x̅∈ K and g ={∑_p ∈ P t_pg_p(x̅) for (t_p)_p∈ P∈Δ_P}. Then there exists local coordinates around x̅, i.e. ^2 diffeomorphisms u = (u^0, u^-, u^+) : U_x̅⊆X→^d^0(g)×^d^-(g)×^d^+(g) defined on a neighborhood of x̅ such that if x, x'∈ B_r(x̅) ⊆ U_x̅, (x') + (x)/2≥u^+(x')-u^+( x)^2-u^-(x')-u^-( x)^2 -(r)u(x')-u( x)^2 where (r)≥ 0 tends to 0 as r→ 0. Let p = {p_-,p_+}∈ P. For y ∈∏_i∈ p_± K_i, we set ϕ_p_±(y) = ∑_i∈ p_±ϕ_i(y_i). We identify any x ∈ K with (x_p_-,x_p_+). Since the (ϕ_i) are c-conjugate, if x, x'∈𝐊, we have: E(x') = c(x'_p_-, x'_p_+) - ϕ_p_-(x'_p_-) - ϕ_p_+(x'_p_+) ≥ c(x_p_-',x_p_+') - (c(x_p_-',x_p_+)-ϕ_p_+(x_p_+))- (c(x_p_-,x_p_+')-ϕ_p_-(x_p_-)) = c(x_p_-',x_p_+')-c(x_p_-',x_p_+)-c(x_p_-,x_p_+') + c(x_p_-,x_p_+) - E( x). Now we do computations in local charts ψ_i : U_i ⊆ X_i →ψ_i(U_i) ⊆^d_i which are ^2 diffeomorphisms such that B_R(x̅_i) ⊆ U_i and ψ_i(U_i) are convex. We keep the same notation with a slight abuse[Any linear combination az_i +by_i will designate ψ_i^-1(aψ(z_i) +b ψ_i(y_i).], and use Taylor's integral formula: E( x') + E( x) ≥∫_0^1 ∫_0^1 D^2_p_- p_+ c(x_s,t)(x_p_-'-x_p_-, x_p_+'-x_p_+) where x_s,t (x_p_-+(1-s)x_p_-',x_p_++(1-t)x_p_+') for s,t∈ [0,1]. 
Since D^2_p_- p_+ c( x_s,t)-D^2_p_- p_+ c(x̅)≤(x̅, r) where (x̅, r) is independent from p and tends to 0 as r → 0, and since by definition D^2 c(x̅)(x_p_-'-x_p_-,x_p_+'-x_p_+) = 1/2 g_p(x̅)(x'-x), it holds: E( x) + E(x') ≥1/2 g_p(x̅)(x'-x) - (x̅,r) x'- x^2. Taking g = ∑_p∈ P t_p g_p(x̅) and averaging the previous inequality yields: E( x) + E(x') ≥1/2 g(x'- x) -(x̅,r) x'- x^2. Finally, we can find a linear isomorphism Q ∈ GL(∑_i d_i, ) which trivializes g, i.e. such that after setting u Q∘ (ψ_1,…, ϕ_m) and denoting u = (u^+,u^-,u^0) : ∏_i U_i→^d^+(g)×^d^-(g)×^d^0(g) then: 1/4 g(x'- x) = u^+(x')-u^+(x)^2-u^-(x')-u^-(x)^2. We get the result by replacing with Q^-1. We will use the following positive signature condition: PS(κ)for every x∈ X_1×…× X_m, d^+_c( x) ≥κ where d^±_c( x) max{ d^±(g) : g ∈_p∈ P{g_p( x)}}. We assume that for every i, μ_i ∈ L^∞(^d_i X_i) and has compact support in X_i. If signature-condition is satisfied, then there exists a constant C_*∈ [0,∞) such that for every >0, _≥_0 + κ/2log(1/) - C_* . The measures μ_i being supported on some compact subsets K_i⊆ X_i, consider a family (ϕ_i)_1≤ i ≤ m∈(K_1)×⋯×(K_m) of c-conjugate Kantorovich potentials. Taking (ϕ_i)_1≤ i ≤ m as competitor in pb:dual_meot_lse, we get the lower bound: _ ≥∑_i=1^m ∫_X_iϕ_i μ_i -log(∫_∏_1≤ i≤ m X_i e^-E/⊗_1≤ i≤ mμ_i) = _0-log(∫_Π_1≤ i≤ mX_i e^-E/⊗_1≤ i≤ mμ_i), where E c-⊕_1≤ i≤ mϕ_i on X∏_1≤ i≤ m X_i as in <Ref>. We are going to show that for some constant C>0 and for every >0, ∫_ X e^-E/⊗_1≤ i≤ mμ_i ≤ C ^κ/2, which yields lower_bound with C_* = log(C). For every x̅∈ K ∏_1≤ i ≤ m K_i, we consider a quadratic form g_x̅ = ∑_p t_p g_p(x̅) of signature (κ, d^-_x̅, d^0_x̅), which is possible thanks to signature-condition, and take a local chart u_x̅ : U_x̅⊆ X → B^κ_R_x̅× B^d^-_x̅_R_x̅× B^d^0_x̅_R_x̅ as given by <Ref>, such that gap_inequality holds with (R_x̅) ≤ 1/2. Notice that u_x̅ is bi-Lipschitz with some constant L_x̅ on V_x̅ u_x̅^-1(B^κ_R_x̅/2× B^d^-_x̅_R_x̅/2× B^d^0_x̅_R_x̅/2). For every i we may write μ_i = f_i ^d_i_X_i. By applying several times the co-area formula <cit.> to the projection maps onto X_i, we may justify that ^d_ X = ⊗_1≤ i≤ m^d_i_X_i where d = ∑_i d_i. We set E_x̅ E ∘ u_x̅^-1 : B^κ_R_x̅× B^d^-_x̅_R_x̅× B^d^0_x̅_R_x̅→ [0,+∞] and we apply the area formula: ∫_V_x̅ e^-E/⊗_1≤ i≤ mμ_i = ∫_V_x̅ e^-E/ (⊗_1≤ i≤ m f_i) ^d_ X = ∫_B^κ_R_x̅/2× B^d^-_x̅_R_x̅/2× B^d^0_x̅_R_x̅/2 e^-E_x̅/ (⊗_1≤ i≤ m f_i) J u_x̅^-1^κ⊗^d^-_x̅⊗^d^0_x̅ ≤ L_x̅∏_1≤ i≤ mμ_i_L^∞(^d_i X_i)∫_B^κ_R_x̅/2× B^d^-_x̅_R_x̅/2× B^d^0_x̅_R_x̅/2 e^-E_x̅(u^+,u^-,u^0)/ (u^+, u^-,u^0). Now, for every (u^-,u^0) ∈ B^d^-_x̅_R_x̅/2× B^d^0_x̅_R_x̅/2, consider f^+(u^-,u^0) a minimizer of E_x̅(·,u^-,u^0) over B̅^κ_R_x̅/2. By gap_inequality of <Ref>, for every (u^+,u^-,u^0)∈ B^κ_R_x̅/2× B^d^-_x̅_R_x̅/2× B^d^0_x̅_R_x̅/2, E_x̅(u^+,u^-,u^0) ≥1/2 (E_x̅(f^+(u^-,u^0),u^-,u^0)+ E_x̅(u^+,u^-,u^0)) ≥ (1- 1/2) u^+-f^+(u^-,u^0)^2 =1/2u^+-f^+(u^-,u^0)^2. As a consequence we obtain: ∫_B^κ_R_x̅/2× B^d^-_x̅_R_x̅/2× B^d^0_x̅_R_x̅/2 e^-E_x̅(u^+,u^-,u^0)/(u^+,u^-,u^0) ≤∫_B^d^-_x̅_R_x̅/2× B^d^0_x̅_R_x̅/2∫_B^κ_R_x̅/2 e^-u^+-f^+(u^-,u^0)^2/2 u^+ (u^-,u^0) ≤^κ/2ω_d^-_x̅ω_d^0_x̅ R_x̅ ^d^-_x̅+d^0_x̅∫_^κ e^-u^2/2 u = C_x̅^κ/2 for some constant C_x̅ > 0. The sets {V_x̅}_x̅∈Σ form an open covering of the compact set Σ{ x ∈ K: E( x) = 0}, hence we may extract a finite covering V_x̅_1, …, V_x̅_I and: ∫_⋃_i=1^I V_x̅_i e^-E/⊗_1≤ i≤ mμ_i ≤^κ/2(∑_1≤ i≤ I L_x̅_i C_x̅_i) (∏_1≤ i≤ mμ_i_L^∞(^d_i X_i)) = c_1 ^κ/2, for some constant c_1 ∈ (0,+∞). 
Finally, since E is continuous and does not vanish on the compact set K' = K ∖⋃_i=1^I V_x̅_i, it is bounded from below on K' by some constant c_2 > 0. Therefore, for every > 0, ∫_ X e^-E/⊗_1≤ i≤ mμ_i ≤ c_1^κ/2 + e^-c_2/≤ C ^κ/2, for some constant C > 0. This concludes the proof. § EXAMPLES AND MATCHING BOUND We devote this section to applying the results we have stated above to several cost functions. For simplicity we can assume that the dimensions of the X_i are all equal to some common d and the cost function c is ^2. As in <cit.> we consider, for the lower bound, the metric g such that t_p=1/2^m-1-1 for all p∈ P, we remind that P is the set of partition of {1,…,m} into two non empty disjoint subsets. [Two marginals case] In previous works <cit.> concerning the rate of convergence for the two marginals problem, it was assumed that the cost function must satisfy a non degeneracy condition, that is D_x_1x_2^2c must be of full rank. A direct consequence of our analysis is that we can provide a lower bound (the upper bound does not depend on such a condition) for costs for which the non-degeneracy condition fails. Let r be the rank of D_x_1x_2^2 c at the point where the non-degeneracy condition fails, then the signature of g at this point is given by (r,r,2d-2r) meaning that locally the support of the optimal γ_0 is at most 2d-r dimensional. Thus, the bounds become r/2log(1/) - C_* ≤_ - _0 ≤d/2log(1/) + C^*, for some constants C_*,C^*>0. Notice that if D_x_1,x_2^2c has full rank then r=d and we retrieve the matching bound results of <cit.>. [Two marginals case and unequal dimension] Consider now the two marginals case but unequal dimensional, that is for example d_1>d_2. Then, if D_x_1,x_2^2c has full rank, that is r=d_2, we obtain a matching bound depending only on the lower dimensional marginal d_2/2log(1/) - C_* ≤_ - _0 ≤d_2/2log(1/) + C^*, for some constants C_*,C^*>0. If μ_1 is absolutely continuous with respect to ^d_1 on some smooth sub-manifold of dimension d_1, then any optimal transport plan would be concentrated on a set of Hausdorff dimension no less than d_1, and thus the upper bound given in <cit.> would be d_1/2log(1/) + O(), which is strictly worse than our estimate. [Negative harmonic cost] Consider the cost c(x_1,…,x_m)=h(∑_i=1^mx_i) where h is ^c and D^2h>0. Assuming that the marginals have finite second moments, when h(x)=|x|^2 this kind of cost is equivalent to the harmonic negative cost that is c(x_1,…,x_m)=-∑_i<j|x_i-x_j|^2 (here |·| denotes the standard euclidean norm), see <cit.> for more details. It follows now that the signature of the metric g is (d,(m-1)d,0) thus the bounds between _ and _0 that we obtain are d/2log(1/) - C_* ≤_ - _0 ≤1/2((m-1)d) log(1/) + C^*, for some constants C_*,C^*>0. We remark that it is known from <cit.> that a transport plan γ_0 is optimal if and only if it is supported on the set {(x_1,…,x_m) | ∑_i=1^mx_i=l}, where l∈^d is any constant and there exists solutions whose support has dimension exactly (m-1)d. [Gangbo-Święch cost and Wasserstein barycenter] Suppose that c(x_1,…,x_m)=∑_i<j|x_i-x_j|^2, known as the Gangbo-Święch cost <cit.>. Notice that the cost is equivalent to c(x_1,…,x_m)=h(∑_i<jx_i) where h is ^2 and D^2h <0,then the signature of g is ((m-1)d,d,0) and we have a matching bound 1/2((m-1)d)log(1/) - C_* ≤_ - _0 ≤1/2((m-1)d) log(1/) + C^*. 
Notice now that considering the _0 problem with the cost c(x_1,…,x_m)=∑_i|x_i-T(x_1,…,x_m)|^2, where T(x_1,…,x_m)=∑_i=1^mλ_ix_i is the Euclidean barycenter, is equivalent to considering the _0 problem with the Gangbo-Święch cost, and the matching bound above still holds. Moreover, the multi-marginal problem with this particular cost has been shown <cit.> to be equivalent to the Wasserstein barycenter, that is, T_♯γ_0=ν is the barycenter of μ_1,…,μ_m. Acknowledgments. L.N. is partially on academic leave at Inria (team Matherials) for the year 2022-2023 and acknowledges the hospitality of this institution during this period. His work was supported by a public grant as part of the Investissement d'avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH, and from H-Code, Université Paris-Saclay. P.P. acknowledges the academic leave provided by Inria Paris (team Mokaplan) for the year 2022-2023.
http://arxiv.org/abs/2307.01025v1
20230703135623
Geometric distortion and astrometric calibration of the JWST MIRI Medium Resolution Spectrometer
[ "P. Patapis", "I. Argyriou", "D. R. Law", "A. M. Glauser", "A. Glasse", "A. Labiano", "J. Álvarez-Márquez", "P. J. Kavanagh", "D. Gasman", "M. Mueller", "K. Larson", "B. Vandenbussche", "P. Klaassen", "P. Guillard", "G. S. Wright" ]
astro-ph.IM
[ "astro-ph.IM", "astro-ph.EP", "astro-ph.GA", "astro-ph.SR" ]
Institute of Particle Physics and Astrophysics, ETH Zürich, Wolfgang-Pauli-Str 27, 8049 Zürich Switzerland Institute of Astronomy, KU Leuven, Celestijnenlaan 200D, 3001 Leuven, Belgium Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD, 21218, USA UK Astronomy Technology Centre, Royal Observatory, Blackford Hill Edinburgh, EH9 3HJ, Scotland, United Kingdom Telespazio UK for the European Space Agency, ESAC, Camino Bajo del Castillo s/n, 28692 Villanueva de la Cañada, Spain Centro de Astrobiología (CAB), CSIC-INTA, Ctra. de Ajalvir km 4, Torrejón de Ardoz, E-28850, Madrid, Spain Department of Experimental Physics, Maynooth University, Maynooth, Co. Kildare, Ireland SRON Netherlands Institute for Space Research, P.O. Box 800, 9700 AV Groningen, The Netherlands Sterrewacht Leiden, P.O. Box 9513, 2300 RA Leiden, The Netherlands Sorbonne Université, CNRS, UMR 7095, Institut d'Astrophysique de Paris, 98bis bd Arago, 75014 Paris, France Institut Universitaire de France, Ministère de l'Enseignement Supérieur et de la Recherche, 1 rue Descartes, 75231 Paris Cedex 05, France polychronis.patapis@phys.ethz.ch The Medium-Resolution integral field Spectrometer (MRS) of MIRI on board JWST performs spectroscopy between 5 and 28 , with a field of view varying from ∼13 to ∼56 arcsec square. The optics of the MRS introduce substantial distortion, and this needs to be rectified in order to reconstruct the observed astrophysical scene. We aim to use data from the JWST/MIRI commissioning and cycle 1 calibration phase, to derive the MRS geometric distortion and astrometric solution, a critical step in the calibration of MRS data. These solutions come in the form of transform matrices that map the detector pixels to spatial coordinates of a local MRS coordinate system called /, to the global JWST observatory coordinates V2/V3. For every MRS spectral band and each slice dispersed on the detector, the transform of detector pixels to /is fit by a two-dimensional polynomial, using a raster of point source observations. The dispersed trace of the point source on the detector is initially estimated by fitting a one-dimensional empirical function, and then iterating on the fist distortion solution using forward modelling of the point spread function model based on the package. A polynomial transform is used to map the coordinates from /to V2/V3. We calibrated the distortion of all 198 discrete slices of the MIRI/MRS IFUs, and derived an updated Field of View (FoV) for each MRS spectral band. The precision of the distortion solution is estimated to be better than one tenth of a spatial resolution element, with a root mean square (rms) of 10 milli-arcsecond (mas) at 5 , to 23 mas at 27 . Finally we find that the wheel positioning repeatability causes an additional astrometric error of rms 30 mas. We have demonstrated the MRS astrometric calibration strategy and analysis for all four integral field units, and all spectral bands of the MRS enabling the calibration of MRS spectra. This is a critical step in the data pipeline of every MRS observation, especially for science with spatially resolved objects. The distortion calibration was folded into the JWST pipeline in Calibration Reference Data System (CRDS) context jwst_1094.pmap. The distortion calibration precision meets the pre-launch requirement, and the estimated total astrometric uncertainty is 50 mas. JWST/MIRI MRS astrometric calibration P. Patapis et al. 
Geometric distortion and astrometric calibration of the JWST MIRI Medium Resolution Spectrometer Polychronis Patapis 10000-0001-8718-3732 Ioannis Argyriou20000-0003-2820-1077 David R. Law30000-0002-9402-186X Adrian M. Glauser10000-0001-9250-1547 Alistair Glasse40000-0002-2041-2462 Alvaro Labiano5,60000-0002-0690-8824 Javier Álvarez-Márquez50000-0002-7093-1877 Patrick J. Kavanagh70000-0001-6872-2358 Danny Gasman20000-0002-1257-7742 Michael Mueller8,9 Kirsten Larson30000-0003-3917-6460 Bart Vandenbussche20000-0002-1368-3109 David Lee4 Pamela Klaassen 4 Pierre Guillard 10,110000-0002-2421-1350 Gillian S. Wright40000-0001-7416-7936 Received July 3, 2023, Accepted xx =========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTION The Mid-Infrared Instrument <cit.> is one of the four science instruments on board the James Webb Space Telescope <cit.>, and the only instrument operating in the mid-infrared. The MIRI Medium Resolution Spectrometer <cit.> is an integral field spectrometer (IFS) providing moderate resolution spectroscopy <cit.>, covering the wavelength range from 4.9 – 27.9 <cit.>. In the near-infrared (0.6-5.3 ) the Near Infrared Spectrometer <cit.> also provides an IFS mode similar to the MRS. The advantage of an IFS is that it retains information for both spatial coordinates of the astronomical scene being observed using image slicers <cit.>. The MRS employs one image slicer for each spectral channel. Each image slicer splits one of the spatial dimensions into distinct slices before dispersing the light. The number of slices depends on the physical geometry of the image slicer, optimised for the spectral and spatial resolution element of each MRS channel. The data that are imaged on the MRS detectors are akin to having multiple single slit spectra, one for each slice in a spectral channel. In order to reconstruct the scene observed on the sky one must map each pixel of the detector to a position in right ascension (RA), declination (DEC) and wavelength, accounting for the optical distortion that is typically present in IFS. In this work we present the derivation of the geometric distortion and astrometric calibration of the MRS, an essential product of the JWST data processing pipeline. The MRS is spatially and spectrally under-sampled at all wavelengths by design, and requires at least two dithered exposures to be Nyquist sampled. The sampling artefacts and cube reconstruction algorithm are detailed in <cit.>. The optical distortion itself needs to be corrected to subpixel precision, since even small residuals can introduce systematic errors in the science products, given the under-sampled PSF core. 
A well calibrated distortion over the full FoV and spectral range is required to maximise the optical quality of the instrument, and restore the diffraction limited performance provided by JWST that is distorted by the optics of the MRS. An accurate plate scale enables a consistent spectro-photometric calibration over the full FoV and reduces errors in the surface brightness estimation for extended sources <cit.>. In Sect. <ref> we introduce the relevant coordinate frames and transforms used in the JWST calibration pipeline, and discuss the origin of the optical distortion in the MRS. In Sect. <ref> we describe the different data sources used to derive the distortion calibration, and the methods used to estimate the point source traces on the detector. In Sect. <ref> we detail the analysis steps that were used to derive the geometric and astrometric distortion transforms. Finally, in Sect. <ref> we discuss the precision of the distortion solution achieved through this work, the astrometric stability due to the repeatability of the grating wheel, and lessons learned for the astrometric calibration of an IFU instrument for application in future projects. § MIRI MEDIUM RESOLUTION SPECTROMETER §.§ Brief Optical Description of the MRS The light enters the MRS from the JWST focal plane via a pickoff mirror and is relayed to a series of three dichroic filters that split the light into the four MRS spectral channels (1 – 4). The dichroic filters are always configured in such a way that the same sub-band SHORT/MEDIUM/LONG or A/B/C is observed at one time[For sub-band B, the MRS will simultaneously observe spectral bands 1B, 2B, 3B, and 4B.]. The dichroic-filtered light is fed to the input of the four MRS integral field units (IFUs), while two blocking filters and corresponding light traps prevent unwanted stray-light from entering the spectrometer. When entering the IFU of one of the four MRS spectral channels the input focal plane is anamorphically magnified and reimaged onto the image slicer of that channel. The image slicer consists of thin mirrors, diamond turned onto a common substrate in a pyramid like manner (see Fig. 6 in <cit.>). Each channel has a different number of slicing mirrors, with 21, 17, 16 and 12 for channels 1, 2, 3 and 4 respectively. Each slice is separated spatially by the angle of the slicer mirrors towards a re-imaging mirror with a pupil mask placed in the intermediate pupil plane to control the stray-light. The re-imaging mirrors create an image of each slice on a slitlet mask that defines the output of the spectral channel IFU. The beams from the slits are then collimated by a mirror and diffracted by the diffraction grating that is located on the reverse side of the same wheel as the dichroic filter that transmitted the wavelengths for the specific spectral channel. The MRS incorporates two wheels, denoted as the dichroic grating wheel assembly A (DGA-A) and dichroic grating wheel assembly B (DGA-B). The first diffraction order of the grating-diffracted beam is imaged onto the detector. The MRS camera optics combine the beams of two channels, 1 and 2, and 3 and 4, and focus the light onto the short wavelength (MIRIFUSHORT) and the long wavelength (MIRIFULONG) Si:As IBC detectors <cit.> respectively. §.§ MRS Coordinate Systems There are three coordinate systems that are relevant for this work and the MRS, illustrated in Fig. <ref>. 
These are (i) the MRS detector coordinates, (ii) the local MRS coordinates, and (iii) the JWST telescope coordinates which are connected to the right ascension (RA) and declination (DEC) of an astronomical object in the sky. (i) The detector coordinates are defined by the pixels of the detector arrays that have a dimension of (1032, 1024). The horizontal axis of the detector is roughly aligned with the MRS IFU image slicer along-slice (denoted as ) spatial coordinate, and the vertical axis is roughly aligned with the dispersion direction, shown on the left panel of Fig. <ref>. We note that we assume zero-indexed arrays and therefore define the center of the lower left pixel as (x, y) = (0, 0). (ii) The local MRS coordinates are aligned to the image slicer along- and across-slice direction and are denoted as local due to the fact that they are unique to each MRS sub-band. Due to small alignment differences of the slicer optic and the dichroic filters, the slicer location of each MRS sub-band projected on the sky are not perfectly concentric, yielding small boresight and rotation offsets. The along-slice coordinate is defined as and the across-slice coordinate as , and both have units of arcseconds. The coordinate is often used interchangeably with the term "slice", referring to an individual sliced image created by the slicer and dispersed onto the detector. While not having a physical connection to the slicer, a third coordinate referring to the wavelength and denoted as , is often used in conjunction with /to complete the three dimensional coordinate system of the MRS IFU. In the context of the overall MRS instrument calibration, like optical distortion, wavelength calibration, fringing, straylight and PSF <cit.>, and together with the detector coordinates, the vector (, , ) operates as a self-contained coordinate system as it offers an intuitive view of the optics. (iii) The JWST Observatory coordinate system is defined and shared among all instruments. It is defined by two orthogonal coordinates on the JWST primary aperture plane called V3, that points towards the secondary Mirror Support Structure, and V2 orthogonal to V3 [<https://jwst-docs.stsci.edu/jwst-observatory-characteristics/jwst-observatory-coordinate-system-and-field-of-regard>], as shown in the right panel of Fig. <ref>. A third coordinate V1 that is the telescope symmetry axis completes the coordinate system. These coordinates enable the observatory to slew towards a target on sky and align the target with the instrument selected for the observation. Additionally, it enables dithering strategies and pointing offsets that are essential to some observing modes like coronagraphy and IFU spectroscopy. The (V2, V3) coordinates have units of arcseconds. The JWST coordinate system connectes to RA, DEC through the V3 position angle (PA), that measures the rotation of the observatory with respect to north when projected on sky. All this information is contained in the JWST Science Instrument Aperture File (SIAF) on board the observatory, with V2/V3 also often referred to as SIAF coordinates. The MRS FoV is located on the far right of the JWST FoV at a distance of 20 arcseconds from the top right corner of the MIRI Imager, as seen in the right panel of Fig. <ref>. The FoV of MIRI are rotated with respect to V3 by an angle of ∼ 4.8^∘ for the Imager, and ∼ 7.6^∘ to ∼ 8.8^∘ for the MRS, as shown in Fig. <ref>. 
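To make the relation between these frames concrete, the short sketch below applies the rigid (rotation plus boresight offset) part of the local-to-observatory mapping. The boresight values and rotation angle are placeholders of roughly the right magnitude rather than calibrated numbers, the sense of the rotation is an arbitrary convention, and the flight solution adds the low-order polynomial distortion terms described in the next subsection.

```python
import numpy as np

# Illustrative, made-up values: each MRS band has its own boresight offset and
# a rotation of the local (alpha, beta) frame with respect to V2/V3 (~8 deg).
V2_REF, V3_REF = -503.4, -318.6   # placeholder boresight [arcsec]
THETA = 8.2                       # placeholder rotation [deg]

def alpha_beta_to_v2v3(alpha, beta, v2_ref=V2_REF, v3_ref=V3_REF, theta_deg=THETA):
    """Rigid (rotation + offset) part of the local-to-V2/V3 mapping, in arcsec.
    The flight solution adds low-order polynomial distortion terms on top of
    this, and the sign convention here is only a placeholder."""
    t = np.deg2rad(theta_deg)
    v2 = v2_ref + np.cos(t) * alpha + np.sin(t) * beta
    v3 = v3_ref - np.sin(t) * alpha + np.cos(t) * beta
    return v2, v3

def v2v3_to_alpha_beta(v2, v3, v2_ref=V2_REF, v3_ref=V3_REF, theta_deg=THETA):
    """Inverse of the rigid mapping above."""
    t = np.deg2rad(theta_deg)
    alpha = np.cos(t) * (v2 - v2_ref) - np.sin(t) * (v3 - v3_ref)
    beta = np.sin(t) * (v2 - v2_ref) + np.cos(t) * (v3 - v3_ref)
    return alpha, beta

# A 1 arcsec step purely in alpha moves the pointing by ~1 arcsec in V2/V3,
# rotated by THETA with respect to the V3 axis.
print(alpha_beta_to_v2v3(1.0, 0.0))
```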
§.§ Geometric Distortion Ideally the optics would disperse and image the spectra of each slice orthogonal to the detector, and similar to most conventional spectrographs, this is not the case for the MRS. The anamorphic optics, slicer and camera optics introduce significant optical distortion to the imaged field, resulting in the spectra being projected onto the detector as curved lines. This curvature is illustrated in the left panel of Fig. <ref>, and is typical for slit spectrometers, usually referred to as keystone and smile distortion <cit.>. The distortion in the along-slice spatial direction is more subtle than just a curved spectrum on the detector; the plate scale (detector pixel subtended angle on the sky), is changing non-linearly as a function of position in the field and wavelength, differently for each slice. This intra-slice distortion appears in both the dispersion and the spatial direction, and the rectification of these coordinates, from detector pixels to MRS local coordinates (, , ), is a crucial step in the JWST calibration pipeline in order to reconstruct the observed astronomical scene and conserve the optical quality of the MRS. We define two terms. First, lines on the detector that trace a constant value of are denoted as iso-lines. Second, lines that trace constant wavelength () are denoted as iso-lines (the wavelength distortion calibration based on ground test data is described in detail by <cit.>). The transformation of pixel coordinates to local MRS coordinates is referred to as a "detector-to-cube" transformation, and is described by a set of polynomial transforms for each sub-band (1A to 4C). Each transform maps the coordinates (x, y) to , as: α_s(x, y) = ∑^N, N_i, j K_α, s(i, j) (x-x_s)^j y^i , λ_s(x, y) = ∑^N, N_i, j K_λ, s(i, j) (x-x_s)^j y^i , with N being the order of the two-dimensional polynomial transform, K_α, s and K_λ, s the polynomial coefficient matrices and x_s a reference pixel in the middle of each slice. The coordinate is discrete and only depends on the slice number since it is collapsed when dispersing the light on the detector. The value of is given by Eq. <ref>. β(s) = β_0 +(s-1)Δβ, where β_0 is the coordinate of the center of slice 1 and Δβ the slice width of each channel in arcseconds. The specific values for each of the MRS spectral channels are tabulated in Table <ref>, taken from <cit.>. We define s=1 at the center of the first slice, with the edges of the slice being s ± 0.5. There are two transformations to be made. The first is from detector pixels (x, y, s) to (, , ) to account for the MRS specific distortion of each sub-band, and in a second step we transform the local MRS coordinates (, , ) to the JWST global coordinates (V2/V3). This second transform accounts for distortion introduced by the JWST Optical Telescope Element (OTE) which all instruments are subject to, as well as placing the FoV of the MRS (, ), that is slightly rotated, onto V2/V3. This is shown in the right panel of Fig. <ref>. The coordinate transform from (, ) to V2/V3 is given by a second order polynomial: V2_Ch(α, β) = ∑_i,j=0, 0^2, 2 T_Ch, V2(i, j) α^j β^i V3_Ch(α, β) = ∑_i,j=0, 0^2, 2 T_Ch, V3(i, j) α^j β^i, where T_Ch, V2, T_Ch, V3 are the polynomial coefficient matrices for V2 and V3 respectively. § DATA AND METHODS §.§ Commissioning and Cycle 1 Calibration During JWST commissioning dedicated observations listed in Table <ref> were executed, in order to validate and update the distortion of MIRI. 
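As a concrete illustration of the detector-to-α transform defined above, the following sketch evaluates a single slice's coefficient matrix over a range of detector columns. The polynomial orders, reference column, and coefficient values are invented for illustration only; in the pipeline the actual matrices are stored per band and per slice in the CRDS distortion reference file.

```python
import numpy as np

def eval_slice_poly(K, x, y, x_ref):
    """Evaluate alpha_s(x, y) = sum_ij K[i, j] * (x - x_ref)**j * y**i for one slice.
    K is the coefficient matrix of that slice; x, y are 0-indexed detector pixel
    coordinates and x_ref the slice reference column."""
    dx = np.asarray(x, dtype=float) - x_ref
    yy = np.asarray(y, dtype=float)
    out = np.zeros(np.broadcast(dx, yy).shape)
    for i in range(K.shape[0]):
        for j in range(K.shape[1]):
            out += K[i, j] * dx**j * yy**i
    return out

# Made-up toy coefficients for a single slice (the real K_alpha,s live in the
# CRDS distortion reference file): a plate scale of ~0.2 arcsec/pixel with a
# weak column- and row-dependent curvature.
K_alpha = np.zeros((5, 3))
K_alpha[0, 1] = 0.196     # linear plate-scale term [arcsec/pix]
K_alpha[0, 2] = 2.0e-5    # quadratic along-slice term
K_alpha[1, 1] = 3.0e-6    # slow variation of the plate scale with detector row

alpha = eval_slice_poly(K_alpha, x=np.arange(500, 525), y=400.0, x_ref=512.0)
print(alpha[:3])
```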
For the MRS the goal was to check the geometric distortion, derive the field transform from local MRS coordinates to the JWST SIAF, update the boresight offsets with respect to the MIRI Imager, and test the MRS dither patterns. The first program that was used is JWST Program Identifier (PID) 1012, consisting of exposures of the MRS internal calibration lamp, which fully illuminates the detector slices and was used to derive the detector slice mask. To help derive the astrometric calibration, the strategy was to observe bright stars with the MRS, including simultaneous MIRI Imaging and parallel FGS observations. Both FGS and the MIRI Imager, with their large FoV, contained many stars with very precise astrometric information from GAIA <cit.>. PID 1049, and 1050 were part of the PSF measurement and provided very bright sources for channels 3/4 and channels 1/2 respectively. For PID 1049 a red planetary nebula (SMP LMC-58) was observed in the dither patterns optimized for the long channels (3, 4) of the MRS. In PID 1050 a photometric standard A-star (HD 163466) was observed in the point source optimised dither pattern of the channel 1, as well in an extended source dither pattern and at the instrument boresight. As described in Section <ref>, additional observations were needed in order to improve the distortion solution of the MRS. A Cycle 1 calibration program (PID 1524, observation 16) was designed and executed as one of the first calibration programs post-commissioning, with the goal to characterise the MRS distortion in detail. A custom dither pattern of a bright O-star, 10 Lac, was uploaded to the observatory, which would place the point source to three or more positions in the along-slice direction for most slices of each channel. This enables to fit the plate scale for each slice with the required second order polynomial <cit.>. The custom dither pattern that includes 57 points is shown in Fig. <ref>, which shows its overlap with the slicer of each channel. 10 Lac was chosen as the target since it was bright enough to be efficiently observed with 20 frames per integration, provided sufficient S/N up to channel 4C, and had emission lines that could be used to calibrate the wavelength distortion of the MRS. Finally, subsequent observations of 10 Lac (PID 1524, observation 17) and observations of standard A and G stars through Cycle 1 photometric calibration programs (1536, 1538) that included target acquisition (TA) enabled the astrometric precision monitoring of the MRS. The standard JWST MIRI MRS pipeline[jwst pipeline version 1.9, Calibration Reference Data System (CRDS) version: 11.16.3, CRDS context: jwst_0932.pmap]<cit.> was used to process the raw data files from the observations. First the was ran to obtain rate files , i.e. slope images <cit.>. From the that deals with the calibration of spectroscopic modes of JWST (for the MRS see <cit.>), we used the scattered light and detector fringing effects. The distortion and astrometric calibration folds into the first step of , that is the . Dedicated background observations for each observation were subtracted on the detector level as slope images. §.§ Detector-based point source tracing Here, we briefly present the methods used to trace a point source on the detector of the MRS, also shown in Fig. <ref>. These form the basis of the whole subsequent distortion calibration. 
There are two quantities that are extracted from the detector: (i) the across-slice position of the point source, and (ii) the along-slice iso-α trace as a function of detector position (x, y). (i) In the across-slice direction the point source is sampled by the IFU image slicer mirror slices, and the goal of the detector fitting is to estimate the sub-slice position (or slicer coordinate β) of the source. The signal in each slice on the detector is summed, to effectively integrate over the along-slice direction α, and fitted against the slice index s with a pseudo-Voigt profile[The pseudo-Voigt function, a linear combination of a Gaussian and Lorentzian]. The validity of this method was tested by taking a theoretical PSF model, sampling it by the number and width of the slices for each MRS band, and comparing the estimated β parameter with the input β coordinate. In these simulated tests the error of the fit was of the order of a few percent of a slice width. In the bottom right panel of Fig. <ref>, an example of the slicer coordinate estimation in Ch. 3 using the bright star from PID 1524 is shown. (ii) The iso-α trace can be measured in each slice in two different ways. First, using an empirical PSF profile fit directly on the detector signal. This can be done in a row-by-row manner, or along an iso-λ line, with the pixels belonging to a given wavelength bin of a slice identified and fitted by the profile. Both a Gaussian and a pseudo-Voigt function were tested, with the pseudo-Voigt function resulting in a better fit (shown in <cit.>, Fig. <ref>). The pseudo-Voigt function is motivated by the fact that the light scattered within the MIRI detectors <cit.> produces elongated wings for the MRS PSF <cit.>, and a similar behaviour is also seen in the MIRI Imager <cit.>. For slices that contain a significant fraction of the PSF core, the iso-α traces are well fitted this way: since the PSF is symmetric, the centre of the empirical profile coincides with the centre of the PSF. For slices illuminated by the PSF wings and not containing a significant part of the PSF core (mainly in channels 3, 4), we estimate the iso-α traces on the detector using forward modeling of the theoretical MRS PSF, as shown in Fig. <ref>. The MRS PSF was modeled using the webbpsf package <cit.>, and was broadened by convolution with a Gaussian kernel to match the optical quality observed in flight. The detailed description of this model and the commissioning PSF analysis for the MRS are given in <cit.>. In Fig. <ref> we show that the PSF model and the ground-based distortion solution are sufficient to reproduce the profile of a point source on the detector to within 5% for slices close to the PSF core (top right panel in Fig. <ref>), worsening to a residual error of ∼20% for slices further away (bottom right panel in Fig. <ref>). Even with these higher residuals, the error on the centre is of the order of 10% of a pixel. The validity of the fitting was tested for multiple locations in the FoV, different rows on the detector, and for all bands. § IN-FLIGHT ASTROMETRIC AND DISTORTION CALIBRATION §.§ Systematic errors from ground calibration A preliminary calibration of the MRS geometric distortion was performed using the ground calibration data <cit.>, based on the optical design model of the MRS (modeled using the ray tracing software Zemax OpticStudio).
That original analysis was performed using reconstructed cubes - median-collapsed in wavelength - while applying slice to slice corrections. The solution from the tweaked Zemax OpticStudio model was precise to approximately 0.5-1 resolution element depending on the band, however, the following issues observed in ground and flight data motivated a new derivation of the distortion solution for the MRS. First, the initial calibration used the cubes median-collapsed in wavelength to correct the MRS optical design model. This collapsing in wavelength left a significant systematic error in as a function of wavelength, of the order of a pixel (see Sect. 3.2.3 in <cit.>). The systematic error was partially corrected by re-fitting the ground data on the detector level which mitigated the issue (see Appendix <ref>). However, when tested on flight data, at the top of the detector and for all available points a similar systematic was present, underestimating the iso-trace compared to the middle and bottom of the detector by half a pixel. With comparisons between bands pointing towards a global distortion systematic error, it potentially stems from issues in the optics of the test illumination source setup used in ground testing. Second, by comparing the commanded offsets in V2/V3 coordinates with the ones estimated from fitting the point source centre on the detector, we found misalignment between slices in some bands, while in other bands (especially those in channel 3) there was evident magnification errors within the slices. All these systematic errors are probably tied to discrepancies in the estimated coordinate transforms using the ground test data, given the low S/N, non diffraction-limited and asymmetric PSF and distortion of optical test setup, and lack of point source data for more recent observatory level campaigns. These errors are of the order of 1 pixel. In order to minimize these systematic residuals in the alpha distortion calibration, and to derive the astrometric calibration that maps the local MRS coordinates to the V2/V3 coordinate frame, we use the PID 1524 cycle 1 calibration observations of the O-star 10 Lac shown in Fig. <ref>. The updated distortion transforms are then tested on the commissioning observations in order to estimate the residual error in the distortion, as well as the estimated absolute astrometric error. §.§ Correction of telescope attitude matrix Although target acquisition can reliably place a star in the MIRI MRS to an accuracy of about 30 mas[<https://jwst-docs.stsci.edu/jwst-mid-infrared-instrument/miri-operations/miri-target-acquisition/miri-mrs-target-acquisition>], the absolute position information of the world coordinate solution (WCS) association with any given observation is typically only accurate to about 300 mas due to a combination of errors in the guide star catalog and uncertainty in the spacecraft roll angle. For the purposes of calibrating the absolute location of the MRS within the telescope focal plane we therefore obtained simultaneous FGS and MIRI imaging data in parallel with the dithered MRS observations. Using 25 bright stars in the FGS imaging field with Gaia Data Release 3 <cit.> astrometry we re-derived the telescope attitude matrix for each of our exposures, improving the absolute positional accuracy of the WCS to 30 mas. The FGS and MIRI imaging data likewise were used to confirm that the relative accuracy of the commanded dither offsets was good to 10 mas or better, which was deemed more than sufficient for our analysis. 
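The attitude refinement described above can be sketched, in a simplified two-dimensional form, as a rigid (rotation plus shift) least-squares fit of measured tangent-plane star positions onto their Gaia counterparts. The full correction operates on the spherical attitude matrix; the star list, noise level, and injected rotation below are entirely synthetic.

```python
import numpy as np

def fit_rotation_offset(measured, catalog):
    """Least-squares rigid fit (rotation + shift) mapping measured tangent-plane
    star positions [arcsec] onto their catalog positions [arcsec].
    A small-angle 2-D stand-in for the full attitude-matrix correction."""
    m = measured - measured.mean(axis=0)
    c = catalog - catalog.mean(axis=0)
    # 2-D Kabsch/Procrustes solve: rotation from the SVD of the cross-covariance.
    u, _, vt = np.linalg.svd(m.T @ c)
    rot = (u @ vt).T
    if np.linalg.det(rot) < 0:          # keep a proper rotation (no reflection)
        u[:, -1] *= -1
        rot = (u @ vt).T
    shift = catalog.mean(axis=0) - measured.mean(axis=0) @ rot.T
    return rot, shift

# Synthetic check: 25 stars, a 0.01 deg rotation and a ~0.3 arcsec offset.
rng = np.random.default_rng(1)
true_theta, true_shift = np.deg2rad(0.01), np.array([0.25, -0.15])
stars = rng.uniform(-60.0, 60.0, size=(25, 2))
R_true = np.array([[np.cos(true_theta), -np.sin(true_theta)],
                   [np.sin(true_theta),  np.cos(true_theta)]])
catalog = stars @ R_true.T + true_shift + rng.normal(0.0, 0.01, size=(25, 2))

rot, shift = fit_rotation_offset(stars, catalog)
# should recover roughly 0.01 deg and (0.25, -0.15) arcsec
print(np.rad2deg(np.arctan2(rot[1, 0], rot[0, 0])), shift)
```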
§.§ Detector Slice Mask We begin the analysis by deriving the detector slice mask that maps the projected slits on the MRS detectors. For a spatially homogeneous extended source illumination of the MRS, the signal within a slice is flat across the MRS FoV, dropping off rapidly at the edges. This allows for the derivation of the FoV limits along these slices (x-direction) for each detector row. We do so using solely the signal values on the detector, no other calibration information is required. To derive the FoV limits on the detector plane for each slice, one has to measure the drop in throughput at the slice edges. The deviation of the overall throughput in each slice from an ideal boxcar function, however, makes this a non-trivial challenge. The derivation of an accurate slice mask is thus limited by the accurate definition of the throughput at the slice edges. The algorithm to derive a slice mask performs the following three steps for all slices in a MRS channel for each detector row. Firstly, the minima (troughs) in the signal separating the slices are identified as shown in the top plot of Fig. <ref>. These minima bound the slices in each channel. Secondly, the throughput of each slice is determined by first fitting a 3rd order polynomial to the signal values of the pixels that are well within the slice (5 pixels from the slice edges). This is shown in the middle right plot of Fig. <ref>. The signal between two minima surrounding the slice is then divided by the fitted 3rd order polynomial, this yields an estimation of the slice throughput as shown middle left plot of Fig. <ref>. Thirdly, a throughput criterion is used to define the left-most and right-most pixel that contributes to a slice (edges of the MRS FOV). We show the result for one detector row in channel 1, in the bottom plot of Fig. <ref>. Iterating the above steps for each detector row produces a 2D slice mask. In fact, we define not one, but nine slice masks using this method. These nine slice masks correspond to different slice throughput levels, from 10% throughput to 90% throughput. Using a slice mask with 90% throughput is a conservative approach that yields a conservative field of view. It is also possible to use a slice mask with a 50% throughput. This will yield a larger MRS FoV, however, the signal at the FoV edges will be much lower. The JWST MIRI/MRS pipeline uses an 80% transmission cut-off by default. §.§ Field of view definition As seen in Fig. <ref> not every slice of the dither pattern in PID 1524 (observation 16) has point sources centred on it. We identified which exposures had signal in a given slice by iterating through all exposures, and finding the three along that had the maximum signal in the slice. These exposures were then selected to calculate the V2/V3 to /transform. We fitted their slicer coordinate on the detector using the pseudo-Voigt profile fitting shown in Fig. <ref>. Then, using the matrix form of a coordinate rotation Eq. <ref> and Eq. <ref>, the FoV was defined based on the V2/V3 coordinates reported in the metadata. [ α; β ] =[ α; β_0 + (s-1)*Δβ ] =[ cos(θ) sin(θ); -sin(θ) cos(θ) ][ V2-V2_0; V3-V3_0 ] β_0 + (s-1)*Δβ = -sin(θ)(V2-V2_0) + cos(θ)(V3-V3_0) This assumes that the V2/V3 field is not significantly distorted over the size of the FoV of the MRS. Even if the field distortion is not negligible, we implicitly incorporate this distortion term in the distortion model. 
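In practice, with Δβ fixed to its known value for the channel, the relation above is linear in (-sin θ, cos θ) and a constant that absorbs the boresight terms, so θ can be estimated by ordinary least squares from the measured slice indices and the commanded V2/V3 of the selected exposures. A minimal sketch on synthetic pointings is given below; the injected angle, boresight values, and slice width are placeholders, not calibrated quantities.

```python
import numpy as np

def fit_fov_rotation(s, v2, v3, dbeta):
    """Fit (s - 1) * dbeta = a*V2 + b*V3 + c with a = -sin(theta), b = cos(theta).
    Only theta and the combined constant c are constrained by the beta relation;
    the individual boresight terms are recovered later together with alpha."""
    A = np.column_stack([v2, v3, np.ones_like(v2)])
    (a, b, c), *_ = np.linalg.lstsq(A, (s - 1.0) * dbeta, rcond=None)
    return np.rad2deg(np.arctan2(-a, b)), c

# Synthetic pointings: an 8.4 deg rotation, arbitrary boresight, 0.177" slices.
rng = np.random.default_rng(3)
theta_true, v2_0, v3_0, beta_0, dbeta = np.deg2rad(8.4), -503.0, -319.0, -1.77, 0.177
v2 = v2_0 + rng.uniform(-2.0, 2.0, 40)
v3 = v3_0 + rng.uniform(-2.0, 2.0, 40)
beta = -np.sin(theta_true) * (v2 - v2_0) + np.cos(theta_true) * (v3 - v3_0)
s = 1.0 + (beta - beta_0) / dbeta + rng.normal(0.0, 0.05, 40)  # measured slice index

print(fit_fov_rotation(s, v2, v3, dbeta))   # should recover roughly 8.4 deg
```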
The absolute α coordinate does not need to be correct in this step, since we are mainly interested in Δα, the distance between the points along the α coordinate, in order to calibrate the slice-specific distortion. The final α/β system is defined after the new distortion is derived. §.§ Along-slice distortion calibration With the relative astrometry obtained from the V2/V3 transform derived in section <ref>, we can measure and fit the distortion in the iso-α traces for every slice in each of the 12 spectral sub-bands (1A-4C). As described in Section <ref>, with the forward modeling of the PSF we are able to fit the signal on the detector even for slices that do not contain the core of the PSF. We fit the distortion with a (Nx, Ny) = (2, 4) order polynomial. These orders provided enough flexibility to fit the distortion everywhere on the detector without suffering from over-fitting, especially at the edges of slices. Finally, we set the local MRS coordinate field to be centred around zero: we calculate the FoV limits for each slice using the detector slice mask and the newly derived distortion, and subtract the mean FoV value from the intercept coefficient of the polynomial transform of each slice. §.§ Local MRS to JWST V2/V3 coordinates transform The last step of the analysis is to refine the V2/V3 to α/β transform, this time using the calibrated distortion. This yields important parameters for the boresight offsets of the MRS bands in the V2/V3 frame, which is used by FGS to guide targets to the MRS and perform dithering offsets. The optimised dither pattern can be re-calculated given the new distortion and V2/V3 to α/β transform, and a polynomial transform of the form shown in Eqs. <ref>, <ref> is fitted. In Fig. <ref> we show the final footprints of the MRS FoV of each band in the V2/V3 coordinate system, and in Table <ref> we list the relevant parameters for each band. §.§ Verification and estimated precision To test the validity of the full chain of calibration, we use other point source observations from PID 1049 and PID 1050, comparing their predicted locations in α/β to the ones measured from the data. The results are shown in Fig. <ref>. In the top panel, a single observation of the A-star from PID 1050 was used to compare the iso-α trace. For each row, the centre of the point source is fitted using the pseudo-Voigt profile and transformed into α, using the distortion solution from <cit.>, the pre-launch distortion (Appendix <ref>), and the distortion derived from flight data. Since the star should have a fixed spatial coordinate (α, β) as a function of wavelength, this illustrates the improvement of the latest distortion model; a residual modulation of the iso-α trace is nevertheless still visible. In the bottom panel of Fig. <ref>, we plotted the distribution of the residuals between the α expected from the pointing telemetry and the α measured from the iso-α trace on the detector, for all (10-13) individual dithers per band. Since we are interested in the relative error within a given MRS band, we subtract the mean residual, such that the distributions are centred around zero. This removes any global offset which can arise from pointing errors or repeatability issues discussed in Section <ref>. The standard deviation of a Gaussian fitted to the residuals is reported, and in all bands it satisfies the goal of one tenth of a resolution element set out for the MRS distortion. Another way of illustrating that the distortion solution works as intended is to visualise a point source in the reconstructed cube. In Fig.
<ref> we show an A star (PID 1050, observation 9) in band 1A, with all 10 dithers positions combined with the size of the pixel in the cube (also referred to as spaxel, see <cit.> for details) set to 0.05" and binned over a broad wavelength bin of 0.05 in order to increase the signal to noise ratio. This oversampling of the MRS PSF is possible because of the number of dithers available in the given observation, but also the fact that a broad wavelength bin naturally samples the PSF better due to the iso-trace being curved on the detector. The JWST PSF is revealed in detail, and we also observe the wings of the PSF at high S/N. § DISCUSSION §.§ Repeatability of the Grating Wheel Assembly During commissioning it was found that the repeatability of the angular positioning of the wheel induced measurable astrometric offsets. Each time the wheel moves between observations to select a different dichroic filter and grating setting, it returns to a given position with some uncertainty of around 0.5-1 pixels projected on the detector, based on the MIRI optical model. For example, the NIRSpec grating wheel also shows similar repeatability performance, which was already known from ground testing <cit.> and mostly affects the dispersion direction. For NIRSpec, two sensors measure this offsets and accordingly adjust the distortion model to compensate for it <cit.>. Unfortunately for the MRS, the positioning sensors do not have the required resolution to measure the changes in the wheel position. In order to assess the repeatability of the astrometric solution we compared results across ∼ 10 different observations of standard stars obtained throughout the Cycle 1 Calibration program using data from PIDs 1050, 1524, 1536, and 1538. Each of these programs used target acquisition to observe well-known point sources in all twelve MRS bands, and all used a standard dither pattern (except PID 1524 Observation 16, which used the custom 57-point pattern). For each observation, we constructed a full 5-28 micron data cube and measured the centroid location of the target star as a function of wavelength in this cube. We plot the result in Figure <ref>, showing the relative offset from the median position in the alpha and beta directions as a function of wavelength. While the β direction positions are extremely stable from band to band, we note more significant jumps between individual bands in the α direction, with an rms of about 30 milliarcseconds. This represents roughly 1/5 detector pixel or better repeatability overall, albeit with some extreme cases in which the offsets can be as large as about 1/2 pixel. In Channel 2B for instance, we note a significant difference between -0.05 arcsec and +0.07 arcsec between successive observations of 10 Lac (blue and red points) during which the observatory remained in fine guide status and the only mechanism that moved between two sets of dithers was the DGA wheel. Another fact pointing towards the wheel repeatability being the issue is the fact that the Channel 1-4 and 2-3 offsets are anti-correlated, as one would expect from the optics of the MRS. The gratings of these pairs (1-4, 2-3) are respectively located on the same DGA wheel, and due to an additional reflection for channels 3 and 4, an offset in angle of the wheel will move the beam towards the opposite direction compared to channels 1 and 2. For calibration purposes, we fit the average offset from the mean for each of the twelve bands and applied this boresight offset to the latest distortion reference files. 
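The band-averaged boresight correction described here amounts to a simple per-band mean of the measured offsets, as in the following sketch; the band labels and offset values are invented and merely mimic the scatter discussed above.

```python
import numpy as np

# Invented example: per-exposure (d_alpha, d_beta) offsets in arcsec from
# repeated standard-star centroids, relative to the median over all bands.
bands   = np.array(["1A", "1A", "2B", "2B", "2B", "3A", "3A"])
d_alpha = np.array([0.005, -0.003, -0.050, 0.070, 0.010, -0.020, -0.028])
d_beta  = np.array([0.002,  0.004,  0.006, -0.003, 0.001,  0.005,  0.000])

# Mean offset per band = the boresight correction folded into the reference files.
for band in np.unique(bands):
    sel = bands == band
    print(band, round(float(d_alpha[sel].mean()), 4), round(float(d_beta[sel].mean()), 4))

print("alpha rms over all exposures [arcsec]:", float(np.sqrt(np.mean(d_alpha**2))))
```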
This ensures that the astrometric location of a star is on average consistent across all MRS bands, even if repeatability can sometimes cause deviations from this mean position for individual observations. The impact of the wheel repeatability on the data reduction and science products of the MRS was tested using the repeated measurement of PID 1524 (observation 17). We compared the exposures for band 2B which showed the worst offset (Fig. <ref>, 2B blue and red points). No measurable change in the calibrated iso-trace, besides a constant astrometric offset, is found. Presumably, the displacement of the beam due to the diffraction grating is still within the linear regime of the distortion. Therefore, with no significant distortion effects, there are some minor issues that arise from the non-repeatability of the source on the detector between different dichroic settings. First, for extended sources where the astrometry cannot be measured in the data, the alignment of the bands in the 3D cube will not be exact and spatial features will appear to shift between bands. Due to the wavelength overlap between the bands, such cases can potentially be mitigated by examining the common wavelengths and the behaviour of any features from one band to the other. Second, dedicated point source fringe flats based on calibration observations would not be directly applicable since the point source fringes change rapidly from pixel to pixel on the detector <cit.>. Third, astrometric measurements of the same source from different epochs might be biased and should include the repeatability error in its uncertainty. Finally, due to the relatively small FoV, complex scenes involving extended structures of interest or multiple sources need to be planned more carefully, taking into account the additional 100 mas astrometric uncertainty. §.§ Lessons learned for future calibration Due to its design and operating wavelengths, the MRS is a challenging instrument to calibrate. A series of instrumental effects are present due to a combination of design choices and peculiarities of the MIRI detectors, such as spectral fringing, scattered light within the detector, under-sampling of the PSF, and optical distortion <cit.>. The following list identifies some of the insights for testing an IFU, after a decade of calibration work performed by the instrument team. * Testing the MRS in the lab, at conditions representative of its flight status is challenging. The MIRI Telescope Simulator, built specifically for testing MIRI, could not provide a bright diffraction limited point source at all wavelengths, due to a combination of optical distortions of the test setup and lack of flux of the test light source. Therefore, the early investment and development of a robust testing facility is key. * For calibrating the distortion and astrometry pre-flight, a well defined point source that can be steered precisely in the FoV of the IFU is required, correlated to a distortion free reference frame. This could potentially be the steering stage X, Y positions. Even with its limitations the MTS provided many insights into the distortion of the MRS, and was able to deliver a preliminary calibration. * Hardware tested in flight will potentially reveal new instrumental systematic errors that were not caught in the instrument testing on ground. Therefore, planning for dedicated commissioning observations for all key parameters is essential. 
Additionally, allowing for flexible project management of commissioning was very valuable in order to adapt observations and tests given the first data that were obtained. * Finally, careful planning and development of the optimised dither pattern used for PID 1524 (observation 16) enabled the calibration of the distortion efficiently using a bright star. Additionally the same dataset could be used for multiple additional tests like wavelength, fringing, optical quality, and photometric calibration as a function of the position in the FoV. §.§ Summary We have derived the full astrometric distortion solution of the MRS, based on flight data. The calibration of JWST field coordinates V2/V3 to the local MRS slicer coordinates /is required for the operation of the MRS, used during pointing and dithering of observations. The transform from detector to /and subsequently V2/V3, enables the pipeline to correct for the optical distortion introduced due to the IFUs, a required step for the rectification of the 2D detector data into 3D cubes, that reconstruct the observed astronomical scene and allow for the extraction of spectra. A good understanding of the distortion might enable modelling of the MRS PSF and application of optimised forward modelling approaches for MRS data. The final precision of the distortion solution ranges from 10 mas in channels 1-3, up to 23 mas in channel 4. The total astrometric uncertainty of a given MRS exposure with target acquisition is estimated to be 50 mas, given by √(σ_distortion^2+σ_TA^2+σ_repeatability^2). Contributions: PP lead the calibration work package for the MIRI MRS distortion and astrometry, performed the main analysis, and wrote the manuscript. IA co-lead the calibration and analysis of the distortion, and derived the MRS slice mask. DL planned the dedicated cycle 1 calibration program 1524, performed the astrometric monitoring of the MRS, and implemented the calibation products in the JWST pipeline. IA, DL contributed to the manuscript writing and editing. DL, AMG, AG, AL contributed to the overall calibration and analysis of the distortion. All authors read and provided feedback on the manuscript. This research has made use of the NASA Astrophysics Data System and the packages <cit.>, <cit.>, <cit.> and <cit.>. The authors want to thank the hundreds of engineers and scientists, whose contributions over 25 years made the JWST mission possible. Polychronis Patapis thanks the Swiss Society for Astrophysics and Astronomy (SSAA) for the MERAC Funding and Travel award. Ioannis Argyriou, Danny Gasman, and Bart Vandenbussche would like to thank the European Space Agency (ESA) and the Belgian Federal Science Policy Office (BELSPO) for their support in the framework of the PRODEX Programme. Patrick J. Kavanagh acknowledges support from the Science Foundation Ireland/Irish Research Council Pathway programme under Grant Number 21/PATH-S/9360. Javier Álvarez-Márquez acknowledges support by grant PIB2021-127718NB-100 by the Spanish Ministry of Science and Innovation/State Agency of Research MCIN/AEI/10.13039/501100011033 and by “ERDF A way of making Europe” Alvaro Labiano acknowledges the support from Comunidad de Madrid through the Atracción de Talento Investigador Grant 2017-T1/TIC-5213, and PID2019-106280GB-I00 (MCIU/AEI/FEDER,UE). The work presented is the effort of the entire MIRI team and the enthusiasm within the MIRI partnership is a significant factor in its success. 
MIRI draws on the scientific and technical expertise of the following organisations: Ames Research Center, USA; Airbus Defence and Space, UK; CEA-Irfu, Saclay, France; Centre Spatial de Liége, Belgium; Consejo Superior de Investigaciones Científicas, Spain; Carl Zeiss Optronics, Germany; Chalmers University of Technology, Sweden; Danish Space Research Institute, Denmark; Dublin Institute for Advanced Studies, Ireland; European Space Agency, Netherlands; ETCA, Belgium; ETH Zurich, Switzerland; Goddard Space Flight Center, USA; Institute d'Astrophysique Spatiale, France; Instituto Nacional de Técnica Aeroespacial, Spain; Institute for Astronomy, Edinburgh, UK; Jet Propulsion Laboratory, USA; Laboratoire d'Astrophysique de Marseille (LAM), France; Leiden University, Netherlands; Lockheed Advanced Technology Center (USA); NOVA Opt-IR group at Dwingeloo, Netherlands; Northrop Grumman, USA; Max-Planck Institut für Astronomie (MPIA), Heidelberg, Germany; Laboratoire d’Etudes Spatiales et d'Instrumentation en Astrophysique (LESIA), France; Paul Scherrer Institut, Switzerland; Raytheon Vision Systems, USA; RUAG Aerospace, Switzerland; Rutherford Appleton Laboratory (RAL Space), UK; Space Telescope Science Institute, USA; Toegepast- Natuurwetenschappelijk Onderzoek (TNO-TPD), Netherlands; UK Astronomy Technology Centre, UK; University College London, UK; University of Amsterdam, Netherlands; University of Arizona, USA; University of Bern, Switzerland; University of Cardiff, UK; University of Cologne, Germany; University of Ghent; University of Groningen, Netherlands; University of Leicester, UK; University of Leuven, Belgium; University of Stockholm, Sweden; Utah State University, USA. A portion of this work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. We would like to thank the following National and International Funding Agencies for their support of the MIRI development: NASA; ESA; Belgian Science Policy Office; Centre Nationale D'Etudes Spatiales (CNES); Danish National Space Centre; Deutsches Zentrum fur Luft-und Raumfahrt (DLR); Enterprise Ireland; Ministerio De Economiá y Competividad; Netherlands Research School for Astronomy (NOVA); Netherlands Organisation for Scientific Research (NWO); Science and Technology Facilities Council; Swiss Space Office; Swedish National Space Board; UK Space Agency. We take this opportunity to thank the ESA JWST Project team and the NASA Goddard ISIM team for their capable technical support in the development of MIRI, its delivery and successful integration. aa § PRE-LAUNCH CALIBRATION §.§ Ground test data MIRI Flight Model Test Campaign. The MIRI Flight Model (FM) test campaign took place at the Rutherford Appleton Laboratory in the UK in 2011. The MIRI instrument and its operational modes were tested under cryogenic conditions, and dedicated test runs were performed to characterise all aspects of the instrument (optical alignment, performance of electronics and detectors, optical transmission, etc). The FM test campaign included a fine raster scan of a point source with multiple exposures per slice for each sub-band of the MRS, dithered observations for the FoV measurement, and observations with a spatially uniform extended source. A dedicated optical system, the MIRI Telescope Simulator <cit.>, was built in order to provide the necessary input illumination. A black body source was available operated at a temperature of 400 K, 600 K and 800 K. 
For simulating point sources, a 100 pinhole mask was placed in the pupil wheel of the MTS and provided a point-like source that could be steered with high precision. It should be noted that due to a non-uniform illumination of the pupil at the field location of the MRS, the pinhole mask did not produce a diffraction limited PSF but rather an elongated semi-extended source. The flux from the source was also relatively low, with a maximum of 3 Digital Numbers per second (DN/s), out of the ∼2000 DN/s that an almost saturating source would provide. This resulted in low signal to noise ratio (S/N) in the raster and FoV observations of the point source. Finally, the most critical issue for the geometric distortion calibration based on FM data was a field distortion of the MTS itself that introduced a systematic error in the reference coordinate system of the MTS source (MTS-X/MTS-Y). This distortion was measured from MIRI Imager exposures and modeled in Zemax OpticStudio, indicating a significant shear in the MTS field which could not be corrected to the precision required for the MRS distortion calibration. Nonetheless, the FM data provide a starting point for deriving the intra-slice distortion, with a raster scan covering all slices and bands of the MRS. Even with a distorted MTS field, all the spectral bands can be placed onto a common coordinate frame. MIRI Cryogenic Vacuum Test Campaign Three Cryogenic Vacuum (CV) test campaigns were conducted at Goddard Space Flight Center between 2013 and 2016, denoted CV1, CV2 and CV3 <cit.>. The whole JWST Integrated Science Instrument Module (ISIM) was placed in a cryostat and cooled to operating temperature (this did not include the JWST telescope optics or bus). The testing setup provided a JWST-like point source with high flux but in a very limited wavelength range. For the MRS the useful signal was limited between 4.9–6.1 . The dither patterns observed were associated with the ISIM coordinate system defined by coordinates XAN/YAN, which provided a robust and distortion-free reference frame. Besides a 9-point dither pattern for the PSF measurement in CV2 and CV3, a very useful across slice scan was performed in CV3, where the source was displaced in fixed steps over a slice. This last dataset is used to evaluate our fitting routines as described in Sect. <ref>. Ground data processing The FM and CV data were taken from the Rutherford Appleton Laboratory (RAL) archive, already processed to slope images (units of Digital Numbers per second) <cit.>. All FM and CV data had dedicated backgrounds associated with the science exposures, which were always subtracted. §.§ Astrometric reference frame using sub-band 1A At the beginning of the analysis to derive the MRS geometric distortion, we address the fact that there is a lack of a reference coordinate system that can connect the FoV of all 12 MRS bands and provide the distance of the FM raster scan points in /coordinates. These distances would then be fitted to estimate the polynomial coefficients of the distortion transform Eq. <ref>. Ideally, this reference frame would have been the MTS commanded coordinates (MTS-X/MTS-Y), but due to the distortion of the MTS field itself, the reference would have been biased. First, the MTS distortion needs to be taken into account, and therefore, we consider the XAN/YAN coordinate frame of the CV2 FoV measurement. The XAN/YAN coordinates are assumed to be non-distorted and the CV2 measurement provided 9 points in the FoV of 1A. 
Unfortunately band 1A is the only band with enough signal from the ISIM point source. The dither pattern of CV2 was based on the distortion derived from the FM data, and therefore the commanded offsets could not a priori be assumed to be orthogonal to the slicer of 1A. We used the sampling of CV2 point source by the slices to determine the centre of each pointing in the across-slice position, in units of slice number s. We then performed an affine transform to offset, rotate and scale the XAN/YAN coordinates to /. The affine transform is given by Eq. <ref> where Eq. <ref> is least-squares minimized in order to estimate the parameters β_0, Δβ, θ, X_0 and Y_0. The parameters β_0 and Δβ fully define the coordinate of the local MRS slicer coordinate system, and is just defined orthogonal to that. The angle θ provides the rotation of the MRS 1A FoV with respect to the JWST coordinate system V2/V3. For the points of the CV2 campaign, we therefore have a valid and unbiased reference frame. Next, we made use of the Zemax OpticStudio optical model to obtain a set of calibrated slices in the locations where the CV2 data had available points. Since we effectively only have two points per slice from the CV2 FoV measurement, we can only correct the distortion up to a magnification but cannot check or correct the plate scale gradient (the second order term). By examining the Zemax OpticStudio model prediction as it is, we see that the projected slices on the detector are offset compared to the slice mask derived in Sect. <ref>. We correct that by moving the Zemax OpticStudio slices on the middle of the detector (row 512) to match the measured slice edges. This yields a Δ x value for each of the slices in band 1A, corresponding to the shift in detector x between Zemax OpticStudio and measured slices. For each of the slices that have at least two point sources we identified pairs of points that have approximately the same coordinate, and used their coordinates from the XAN/YAN transform to calculate their distance, Δα_XAN/YAN, in units of arcsec. We then measured their iso-on the detector, using the Zemax OpticStudio model we obtained , and again calculated the distance, denoted Δα_Zemax. In Fig. <ref> we show the difference between Δα_XAN/YAN and Δα_Zemax as a function of position along detector columns. This difference reflects a discrepancy in the magnification of the field in the Zemax OpticStudio model, which we correct by multiplying by a scaling factor for each slice. Interestingly there seems to be a correlation between detector-X position of the slice and the magnification error. Another systematic effect that can be seen in Fig. <ref> is a difference between pairs in the same slice. The darker colours of the points refer to the point source pairs that are well centred in the slice, while the lighter colors correspond to the half-slice offset of the dither pattern, with the point source centred in between two slices. Finally, there is a slice dependent offset between Zemax OpticStudio and the reference frame which is expected from slight slit mask offsets which would cover different parts of the field for each slice. This is corrected with a constant offset for each of the slices, concluding the calibration of slices 3, 4, 8, 9, 13, 14, 17 and 18 (21 slices in total). §.§ MTS to reference frame transform With several calibrated slices in band 1A, but lacking any reference frame in other bands, we proceeded to derive a transform linking the MTS coordinates to the reference frame defined in Sect. 
§.§ MTS to reference frame transform With several calibrated slices in band 1A, but lacking any reference frame in the other bands, we proceeded to derive a transform linking the MTS coordinates to the reference frame defined in Sect. <ref>. This step is essential for enabling the distortion calibration of all the other bands (which did not have any useful signal in the CV campaigns). We used the main raster scan exposures from the FM campaign, selected the calibrated slices, and measured their α/β coordinates from the detector. We then fitted the affine transform in Eq. <ref>, including translation, rotation, scale and shear, to map the MTS-X/MTS-Y coordinates to α and β, respectively. In Fig. <ref> we show a square field in MTS coordinates projected onto the reference 1A α/β frame, as well as the raster scan pointings of each of the SHORT bands 1A, 2A, 3A, 4A. [ α; β ] = [ α; β_0 + (s-1)Δβ ] = R[θ] S[s] [ MTS_X - X_0; MTS_Y - Y_0 ], where R[θ] = [ cos(θ) sin(θ); -sin(θ) cos(θ) ] and S[s] = [ σ_x s_x; s_y σ_y ]. The parameters σ_x/y and s_x/y correspond to a field magnification and shear between the MTS and the local MRS coordinates. With the raster scans mostly aligned to the IFU image slicer of each band, we can observe the slight differences between the band-to-band boresight and rotation of the slicers. Using a similar model as in Eq. <ref>, we calculate the transform from the reference frame 1A to the local α/β frame of each band, based on the detector slice coordinates of the raster scans. This yields the β_0 and Δβ parameters defining the β coordinate, as well as the relative rotation and boresight offset with respect to band 1A, for all 12 sub-bands. The final values of the boresight shifts between the bands were determined after the final calibration of the α distortion in Sect. <ref>. §.§ Along-Slice Distortion Transformation Matrix The procedure was the same for each band. The main raster scan of the given band was used, with each slice having three available exposures centred on the slice. The iso-α trace on the detector was fitted for each exposure using a Gaussian profile, as detailed in Sect. <ref>, going up the detector rows. The derived MTS to local MRS transforms provide the required astrometric information to assign each of the raster scan exposures an α coordinate based on its MTS coordinates. The detector fringes modulate the fitted centre along the dispersion direction and affect its estimation; since we expect the distortion to be a smooth function, we fitted a two-dimensional polynomial of order 2 in detector-X and order 4 in detector-Y.
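A minimal sketch of such a two-dimensional polynomial fit (order 2 in detector-X, order 4 in detector-Y) is given below. The synthetic trace positions, the coordinate normalisation, and the use of numpy's polynomial helpers are illustrative assumptions, not the actual calibration implementation.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# (x, y): detector pixel coordinates of the fitted iso-alpha traces within one slice;
# alpha: the along-slice coordinate assigned to each exposure from its MTS pointing.
# Synthetic stand-ins are used so that the sketch runs on its own.
rng = np.random.default_rng(0)
x = rng.uniform(100.0, 130.0, 500)                 # detector column (within the slice)
y = rng.uniform(0.0, 1024.0, 500)                  # detector row
alpha = 0.02 * (x - 115.0) + 1.0e-5 * (x - 115.0) * y + 1.0e-3 * rng.normal(size=x.size)

# Normalise the detector coordinates to keep the design matrix well conditioned.
xn = 2.0 * (x - x.mean()) / (x.max() - x.min())
yn = 2.0 * (y - y.mean()) / (y.max() - y.min())

deg = [2, 4]                                       # order 2 in detector-X, order 4 in detector-Y
A = P.polyvander2d(xn, yn, deg)                    # one column per x^i * y^j term
coef, *_ = np.linalg.lstsq(A, alpha, rcond=None)
coef = coef.reshape(deg[0] + 1, deg[1] + 1)        # coef[i, j] multiplies x^i * y^j

alpha_model = P.polyval2d(xn, yn, coef)            # evaluate the distortion polynomial
rms = np.sqrt(np.mean((alpha_model - alpha) ** 2))
```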
§.§ Iteration of distortion solution Having derived a first distortion solution based on the FM data, we used the FM PSF measurement to reconstruct the PSF in α/β space. To do so we need to interpolate the detector pixels onto the MRS local coordinate frame. We used an inverse-distance weighting algorithm with exponentially decaying weights (see the description of the pipeline cube-building weighting schemes at [<https://jwst-pipeline.readthedocs.io/en/latest/jwst/cube_build/main.html>]). The offset of each dither was estimated using its MTS coordinates and transforming them into α/β coordinates. In Fig. <ref> the reconstructed and over-sampled PSF of band 2A is plotted, together with the individually reconstructed dithered exposures. We observe that the PSF is semi-extended and does not show the expected diffraction-limited pattern. Moreover, for the purpose of deriving the distortion solution, as seen in Fig. <ref>, the centroid estimated with a Gaussian fit does not match the centre of the PSF. Therefore, we iterate on the distortion solution by using the reconstructed PSF as a forward model to estimate the centre of a given FM point source observation. This activity was implemented in two ways, illustrated in Fig. <ref>. First, for band 1A, we used the 2D PSF to fit the points of the raster scan and to iterate on the MTS to α/β transform. The high-resolution PSF is projected onto the local MRS coordinates using linear interpolation. The optimisation minimises a loss function measuring the difference between the model and the data, with three free parameters: the amplitude, the offset in α, and the offset in β. To ensure that the minimisation converged, we restricted the evaluation of the loss function to an aperture of 0.5" around the point source (∼3 slices on the image slicer). This process yielded new centroids, which were in turn used to calculate the new transform with Eq. <ref>. Second, using the distortion solution derived earlier, the PSF model is projected onto the detector for every row to estimate the iso-α trace while accounting for the shape of the PSF. This is repeated for every point in the raster scan, and the distortion is estimated again on the updated traces. The projection uses the estimate of the slice β coordinate from the detector to provide the PSF α-profile expected in the slice of interest. Typically this only works for the slices either containing the peak of the PSF or adjacent to it, due to the very low signal-to-noise ratio of the FM data. The same linear interpolation of the PSF model was used here as well. With the iso-α traces and the MTS to α/β field transform based upon the FM PSF, we re-calculate all the distortion polynomial coefficients as in Sect. <ref>.
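The forward-model centroiding described above can be sketched as follows. The Gaussian stand-in for the reconstructed PSF, the synthetic samples, the Nelder-Mead optimiser, and the specific scipy routines are assumptions made only to keep the illustration self-contained; they are not the pipeline implementation.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.optimize import minimize

# Over-sampled PSF model on a fine (alpha, beta) grid -- here a simple Gaussian
# stand-in for the reconstructed FM PSF.
a_grid = np.linspace(-1.0, 1.0, 201)
b_grid = np.linspace(-1.0, 1.0, 201)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")
psf_model = np.exp(-0.5 * (A**2 + B**2) / 0.15**2)
interp = RegularGridInterpolator((a_grid, b_grid), psf_model,
                                 bounds_error=False, fill_value=0.0)

# "Observed" point source: sparse (alpha, beta) samples of the same PSF, shifted and noisy.
rng = np.random.default_rng(2)
alpha_pix = rng.uniform(-0.6, 0.6, 400)
beta_pix = rng.uniform(-0.6, 0.6, 400)
true_amp, true_da, true_db = 3.0, 0.12, -0.07
data = true_amp * interp(np.column_stack([alpha_pix - true_da, beta_pix - true_db]))
data += 0.05 * rng.normal(size=data.size)

def loss(p):
    """Sum of squared residuals between the shifted, scaled PSF model and the data."""
    amp, da, db = p
    model = amp * interp(np.column_stack([alpha_pix - da, beta_pix - db]))
    # evaluate only within a 0.5" aperture around the current centre, as in the text
    sel = np.hypot(alpha_pix - da, beta_pix - db) < 0.5
    return np.sum((data[sel] - model[sel]) ** 2)

res = minimize(loss, x0=[1.0, 0.0, 0.0], method="Nelder-Mead")
amp_fit, dalpha_fit, dbeta_fit = res.x
```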
http://arxiv.org/abs/2307.01560v2
20230704082958
Robust crystal structure identification at extreme conditions using a density-independent spectral descriptor and supervised learning
[ "Paul Lafourcade", "Jean-Bernard Maillet", "Christophe Denoual", "Eléonore Duval", "Arnaud Allera", "Alexandra M. Goryaeva", "Mihai-Cosmin Marinica" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
Crystal Structure Atomic Descriptors Supervised Learning Molecular dynamics 1,2]P. Lafourcadecor1 paul.lafourcade@cea.fr 1,2]J.-B. Maillet 1,2]C. Denoual 3]E. Duval 4]A. Allera 4]A. M. Goryaeva 4]M.-C. Marinica [1]CEA DAM DIF, 91297 Arpajon, France [2]Université Paris-Saclay, LMCE, 91680 Bruyères-le-Châtel, France [3]Laboratoire d'Acoustique de l'Université du Mans (LAUM), UMR 6613, CNRS, Le Mans Université, Le Mans, France [4]Université Paris-Saclay, CEA, Service de recherche en Corrosion et Comportement des Matériaux, SRMP, F-91191 Gif-sur-Yvette, France The increased time- and length-scale of classical molecular dynamics simulations have led to raw data flows surpassing storage capacities, necessitating on-the-fly integration of structural analysis algorithms. As a result, algorithms must be computationally efficient, accurate, and stable at finite temperature to reliably extract the relevant features of the data at simulation time. In this work, we leverage spectral descriptors to encode local atomic environments and build crystal structure classification models. In addition to the classical way spectral descriptors are computed, i.e. over a fixed radius neighborhood sphere around a central atom, we propose an extension to make them independent from the material's density. Models are trained on defect-free crystal structures with moderate thermal noise and elastic deformation, using the linear discriminant analysis (LDA) method for dimensionality reduction and logistic regression (LR) for subsequent classification. The proposed classification model is intentionally designed to be simple, incorporating only a limited number of parameters. This deliberate simplicity enables the model to be trained effectively even when working with small databases. Despite the limited training data, the model still demonstrates inherent transferability, making it applicable to a broader range of scenarios and datasets. The accuracy of our models in extreme conditions (high temperature, high density, large deformation) is compared to traditional algorithms from the literature, namely adaptive common neighbor analysis (a-CNA), polyhedral template matching (PTM) and diamond structure identification (IDS). Finally, we showcase two applications of our method: tracking a solid-solid BCC-to-HCP phase transformation in Zirconium at high pressure up to high temperature, and visualizing stress-induced dislocation loop expansion in single crystal FCC Aluminum containing a Frank-Read source, at high temperature. § INTRODUCTION In the last decades, atomic scale simulations such as ab-initio calculations and molecular dynamics (MD) have been increasingly used to model materials properties based on atomic-scale processes. As large-scale simulations are needed to realistically simulate the dynamics of extended systems (e.g. linear defects, interfaces) over long simulated times, MD simulations have been successfully used to investigate a wide range of thermodynamic regimes due to their favorable linear scaling of computational time vs. system's size. More recently, MD simulations based on empirical potentials have seen a considerable increase of their accuracy and accessible time- and length-scale, as they benefited from the development of ab-initio calculations and machine learning (ML) methods –resulting in more accurate force fields, in conjunction with the development of high performance computing (HPC) facilities and more efficient computational methods. 
As a result, simulations based on ab-initio-accurate ML force fields have reached the scale of tens <cit.> to hundreds <cit.> of billions of atoms, allowing studies to be carried out at the micrometer scale, where direct microstructure comparison with experiments becomes possible <cit.>. However, the atomic trajectories obtained from atomistic simulations must be further processed in order to extract the properties and statistics of objects of interest, e.g. defects, phases, interfaces or precipitates. Present-day standard visualization methods can help to identify structural changes by analyzing atomic trajectories. However, the automation of such quantitative analysis, as well as a robust identification and extraction of crystalline defects, is still challenging. For this purpose, numerous computational methods have been developed in order to enable post-processing analysis of particle-position datasets. These methods generally proceed by comparing individual atomic environments to that of a reference structure, while allowing a certain tolerance in the result. The most common methods for local structure analysis include basic energy thresholding, centrosymmetry parameter (CSP) analysis <cit.>, bond order analysis <cit.>, common neighbor analysis (CNA) <cit.>, adaptive common neighbor analysis (a-CNA) <cit.>, bond angle analysis (BAA) <cit.>, Voronoi analysis <cit.>, neighbor distance analysis (NDA) <cit.> and polyhedral template matching (PTM) analysis <cit.>. Another set of techniques, oriented towards continuum mechanics measures, has been proposed in the literature to identify the deformation state as well as defects such as dislocation lines. In order to observe dislocation-mediated plasticity, such tools can be used to filter the data, remove crystalline atoms, and extract the atoms that constitute the core of defects. However, no information can be extracted concerning the type of crystal defects, which can be vacancies, interstitials, or dislocation lines. Also, atom identification can fail, and cases where defects overlap (e.g. an interstitial located in a dislocation core) are not taken into account. Continuum-like measures, computed as per-atom variables over the current neighborhood with respect to a reference configuration, have also been introduced <cit.>, allowing the computation of local deformation gradient tensors, slip vectors or the Nye tensor. The combination of the latter techniques, along with structural identification methods, has also been used to identify dislocation lines and their Burgers vectors. However, they become inefficient at high temperatures or when dislocation interactions come into play. Another widely used method, known as the dislocation extraction algorithm (DXA) <cit.>, allows one to consecutively mesh the atomistic configuration, map the local tetrahedra to perfect crystal structures, extract the distorted tetrahedra (i.e. disordered atoms), and perform Burgers circuits in order to extract the dislocation lines as well as their Burgers vectors. This algorithm is very helpful for monitoring the evolution of dislocation densities over time <cit.> and characterizing crystal plasticity features <cit.>. Most of the methods listed above are directly available in the Open Visualization Tool (OVITO) <cit.>, enabling a straightforward comparison with newly developed structure identification tools.
However, a common weakness of traditional analysis methods is their sensitivity to thermal noise which can be limiting when simulating crystals at finite temperature. To remove thermal vibrations while preserving the features of the high-temperature structure, vibration-averaging can be used, as well as structure denoising based on graph neural networks <cit.>. Some of the techniques described above have been used to process atomistic simulations on-the-fly, benefiting from their ease of implementation and high data throughput since they are easily integrated in MD codes such as  <cit.> or ExaSTAMP <cit.>. However, more computationally demanding methods, such as DXA, remain challenging to use for on-the-fly, large-scale application. Although the DXA implementation available in is highly optimized, the algorithm is computationally expensive and requires 1 GB of free RAM memory per million atoms. The memory should be distributed across several nodes to scale to large systems, or a parallelism scheme should be adopted, which is not the case in the official distribution but seems to be a work in progress on their side according to very recent communications. For example, one cubic micrometer sample of BCC Tantalum with only the positions written to a single ASCII file would contain approximately 55 billion atoms and occupy 24 GB of disk space. In order to run DXA on this sample, up to 55,000 GB of memory would be required, which is technically difficult to achieve without special hardware design. This shows the urgent necessity to develop in-situ analysis tools, since filtering such simulations on-the-fly can cut the needs of storage capacity by orders of magnitude. In addition to basic properties, other descriptors like deformation gradient tensors, velocity-gradient tensors or bond-order parameters for example would drastically increase the necessary disk space. It becomes clear that storing a few terabytes per snapshot over a few nanoseconds, even with a low output frequency, cannot be considered as viable. More lately, additional atomic descriptors enabling local structure analysis have been introduced <cit.>, such as Behler-Parrinello Chebyshev polynomial representations (CPR) <cit.>, Behler-Parrinello symmetry functions (BP) <cit.>, smooth overlap of atomic positions (SOAP) <cit.>, atomic cluster expansion (ACE) <cit.>, adaptative generalizable neighborhod informed features (AGNI) <cit.>. In addition, machine learning aided crystal structure identifiers have been published, either based on Bayesian deep learning (ARISE) <cit.> or neural networks <cit.>. Finally, structural defects in crystalline solids can be effectively detected as structural outliers using distortion scores of local atomic environments <cit.>. This method uses minimum covariance determinant (MCD) in conjunction with compact atomic descriptors like bispectrum <cit.> and allows for accurate structural analysis even in noisy structures. However, until now, it has not been coupled with an automated structure classifier. These methods, while being highly accurate, also have a non-negligible computational cost compared to traditional methods presented above. However, the performance of BSO4 for example, has been dramatically improved since the initial version of SNAP (see <cit.>) and was ported to GPU, making it one of the most efficient force-field framework with near-ab-initio accuracy. 
For practical use in on-the-fly MD analysis, the calculation of BSO4 every Nth step should thus not be a bottleneck, even when using a force field other than SNAP. The present methodology tightly integrates with the MD engine with acceptable overhead. In this work, we address in-situ analysis of large-scale MD trajectories and strive to minimize the amount of information to be stored. While today the research community aims at simulating larger and larger samples through MD simulations, the bottleneck is not only in performing the simulation itself, but mainly in effectively and accurately analyzing it, which represents a paradigm shift. However, there is always a trade-off to find between computational cost, accuracy, and robustness, since a low sensitivity to atomic displacements usually comes at the price of a reduced capability of the identification method to distinguish similar structures <cit.>. Here, we propose a novel algorithm (see Algorithm <ref>) for the identification and classification of crystal structures, also allowing for the accurate detection of atoms that contribute to defects. The method uses machine learning (ML) techniques and can be used for the analysis of materials under extreme conditions, including thermal noise, large deformations as well as large hydrostatic pressures. The paper is organized as follows. The first section details the construction of the training database used for the different models. The second section describes the training process for the different algorithms employed in this work. Finally, in the last section, we demonstrate the performance of our crystal structure identification algorithm and apply it to analyze large-scale MD simulations at finite temperature. We present two cases of interest: the solid-solid phase transition in hexagonal close-packed Zirconium and the expansion of Frank-Read dislocation loops in face-centered cubic Aluminum. § DATABASE PREPARATION The training database contains four different crystal structures: body-centered cubic (BCC), face-centered cubic (FCC), hexagonal close-packed (HCP), and cubic diamond (c-DIA). The extension to other crystalline structures is straightforward. For each structure, a model material is considered and modelled using the following interatomic potentials: Iron <cit.> for BCC, Aluminium <cit.> for FCC, Zirconium <cit.> for HCP and Silicon <cit.> for c-DIA. A common aspect of supervised machine learning techniques is that their application range is given by the information contained in the learning database. Hence, the elements of the training database should be carefully chosen with respect to the target applications. Here, we take care to include structures carrying information about temperature and small deformations. Then, once built, the database is mapped into a descriptor space onto which learning will be performed. These procedures are detailed below. §.§ Construction of the database in Cartesian space §.§.§ Finite temperature molecular dynamics trajectories The effects of temperature must be accounted for when developing a robust crystal structure classifier suitable for materials at extreme conditions. In order to sample finite-temperature configurations for the database, we compute MD trajectories in the NPT ensemble for each material representing the different crystal structures. The simulation cell dimensions correspond to the equilibrium density of each material at 300 K and ambient pressure, and contain 864 atoms.
Trajectories are integrated with a timestep of 1 fs, and the coupling parameters for the thermostat and barostat are set to 0.1 and 1.0 ps, respectively. NPT simulations are performed at ambient pressure while ramping up the temperature from 0 to 2/3 of each material's melting temperature over a 500 ps time window. Configurations are extracted every 5 ps along each trajectory, leading to an ensemble of 100 configurations per crystal structure. §.§.§ Deformation measure A macroscopic deformation gradient tensor F is applied to the entire system while remapping the 3N atoms coordinates 𝐪 = r_1 ⊕…⊕r_N ∈ℝ^3N (where r_i ∈ℝ^3 are the cartesian coordinates of the i^th atom) into the deformed simulation cell. For every configuration extracted from the NPT trajectories: 𝐫_i, deformed = F𝐫_i, where 𝐫_i, deformed∈ℝ^3 stands for the cartesian coordinates of the i^th atom subjected to a homogeneous deformation governed by F. This deformation gradient tensor reads: F = [ F_11 F_12 F_13; 0 F_22 F_23; 0 0 F_33 ] leading to 6 independent variables describing all homogeneous deformation possibilities in any material. In the present work, we focus on the measure of deviatoric deformation ϵ_eq = √(3/2dev (E) : dev (E)) as a threshold criterion. Here, dev(E) = E - 1/3tr(E) I with E the Green-Lagrange strain tensor constructed from F with E=1/2(F^TF-I) and I the identity. Only small strains, i.e. within the elastic deformation regime, are considered by imposing a threshold value on ϵ_eq, i.e. ϵ_eq<0.05. This way, subsets of atomistic configurations subjected to large local deformation such as dislocation-mediated plasticity or amorphous shear banding should emerge as outliers during the classification process. §.§ Mapping the database into the descriptor space §.§.§ Bispectrum SO4 descriptor In this work, we use the bispectrum SO4 <cit.> to map the local atomic density function into invariant representations. This descriptor has several advantages compared to using the Cartesian coordinates of the surrounding atoms, e.g., it has constant dimension and is invariant to atomic permutation, rotation, and translation. It has been widely used in the context of machine learning interatomic potentials - MLIP, including spectral neighbor analysis potentials (SNAP) <cit.> and other similar forms <cit.>. Bispectrum SO4 also has the capacity to provide an accurate description of the atomic neighborhood, suitable for advanced structural analysis <cit.>. Below, we briefly recall the key concepts and the algebraic formalism used to compute this descriptor. The fully detailed mathematical definition is given in <cit.>. For all monoatomic systems employed in the present study, the atomic neighbor density around atom i at location r_i reads: ρ_i(r)=δ(r)+∑_r_ii'<r_cut f_c(r_ii')δ(r-r_ii') where r_ii' = r_i - r_i' is the distance between the central atom i and the neighbor atom i', and the cutoff function f_c ensures that the contribution of neighboring atoms smoothly decreases to zero at r_cut. By mapping radial neighbor coordinates r to an angular component θ_0=θ_0^max r/r_cut, the atomic neighbor density can be expanded in the basis functions of the unit 3-sphere, the 4D hyper-spherical harmonics U^j_m,m'(θ_0,θ,ϕ): ρ(r) = ∑_j=0,1/2,...^∞∑_m=-j^j∑_m'=-j^j u^j_m,m' U^j_m,m'(θ_0,θ,ϕ), where the expansion coefficients u^j_m,m' are a sum over discrete values of the corresponding basis function evaluated at each neighbor position, u^j_m,m' = U^j_m,m'(0) + ∑_r_ii'<r_cut f_c(r_ii')U^j_m,m'(θ_0,θ,ϕ). 
Finally, using the scalar triple products of these expansion coefficients, the real-value bispectrum components can be expressed as: B_j_1,j_2,j = ∑_m,m' u^j*_m,m'∑_m_1,m'_1 m_2,m'_2 Hjmm' j_1m_1m'_1 j_2m_2m'_2 u^j_1_m_1,m'_1 u^j_2_m_2,m'_2, with * the complex conjugation operator and where the constants Hjmm' j_1m_1m'_1 j_2m_2m'_2 are the Clebsch-Gordan coefficients for the hyper-spherical harmonics. The final coefficient is invariant to rotation and permutation. The order of the expansion J_max determines the accuracy of the geometrical representation of the atomic neighborhood, although bispectrum coefficients are not listed in order of importance. However, increasing the value of J_max leads to better accuracy but also to a higher computational cost. In the following, we choose a value of the expansion parameter J_max=4 that represents a good compromise between the accuracy of geometrical description and computational cost <cit.>. This leads to a bispectrum 𝐁 with 55 real components, i.e. ∈ℝ^55, which corresponds to the dimensionality of feature space. §.§.§ Fixed cutoff or fixed number of neighbors computation Concerning the cutoff parameter r_cut used to define the neighborhood of a central atom for which the bispectrum is computed, two strategies are proposed. Firstly we consider a fixed cutoff radius, as for the calculation of the potential which is the common procedure in the context of MLIP. Each neighbor in the cutoff sphere is included in the bispectrum calculation, weighted by the cut-off function which smoothly switches from 1 for distances lower than r_cut to exactly zero above r_cut. The number of neighbors N_neigh of a central atom is determined by the magnitude of r_cut, resulting in a larger number of neighbors for denser materials. The second alternative is to compute the bispectrum of an atomic environment containing a fixed number of neighbors N_neigh. Its main advantage is to remove the dependence of the crystal structure analysis on the density of the material. Hence different materials at varying densities (even locally) could be mapped to an equivalent descriptor representation. A simple way to achieve the selection of the N_neigh neighbors would be to sort them by their distance to the central atom and consider only the N_neigh first ones while choosing the cutoff radius as the distance between the central atom and its farthest neighbor. More formally, using a basic dichotomy algorithm, one can compute the optimal cutoff radius r_ cut that satisfies these requirements by defining the following equality: W_i(r_ cut) = ∑_j^M H(r_ cut - |r_ij|) = N_neigh, where W_i(r_ cut) is the total weight factor associated with the central atom i, H is the Heavyside function, N_neigh is the target number of neighbors and M is the actual number of neighbors, related to r_ cut, required to satisfy W_i(r_ cut)=N_neigh. Using the Heavyside weight function systematically leads to the solution with M=N_neigh and r_ cut equal to the distance to the N^th neighbor. However, at finite temperature, the BSO4 descriptor strongly depends on local thermal fluctuations since the position (and the identity) of the last neighbor may vary from one step to another. A way to regularize the neighborhood construction by limiting thermal temperature effects is to use a smooth weight function such as tanh: W_i(r_ cut) = ∑_j^M 1/2(1 - tanhr_ij-r_ cut/δ) = N_neigh . 
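A minimal sketch of this fixed-neighbor cutoff search is given below (the role of the smoothing parameter δ is discussed right after the sketch). The toy shell distances, the target of 24 neighbors, and the bisection tolerance are illustrative choices of ours, not the implementation coupled to the bispectrum evaluation in Algorithm <ref>. Because the total weight W_i grows monotonically with r_cut, the bisection always converges to a unique cutoff.

```python
import numpy as np

def smooth_weight_sum(distances, r_cut, delta=0.3):
    """Total neighbor weight W_i(r_cut) with the tanh regularisation."""
    return np.sum(0.5 * (1.0 - np.tanh((distances - r_cut) / delta)))

def solve_r_cut(distances, n_target=24, delta=0.3, tol=1e-6):
    """Bisection ("dichotomy") search for the cutoff radius such that
    W_i(r_cut) = n_target.  `distances` are |r_ij| from atom i to all candidate
    neighbors, e.g. those within a generous outer shell."""
    lo, hi = 0.0, distances.max() + 5.0 * delta
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if smooth_weight_sum(distances, mid, delta) < n_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy usage: neighbor distances of a thermally perturbed FCC-Al-like environment
# (12 first-shell, 6 second-shell and 24 third-shell neighbors).
rng = np.random.default_rng(0)
d = np.sort(np.concatenate([2.86 + 0.05 * rng.normal(size=12),
                            4.05 + 0.05 * rng.normal(size=6),
                            4.96 + 0.05 * rng.normal(size=24)]))
r_cut = solve_r_cut(d, n_target=24)   # adapts to the local density around atom i
```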
The δ parameter ensures the smooth transition for the weights from 1 to 0 and is set to 0.3 in the present work, close to thermal fluctuations of atomic positions. In Algorithm <ref> we present an algorithm able to compute a general descriptor for a constant numbers of neighbours. In Figure <ref>, we highlight the distinctions between the standard bispectrum with a fixed cutoff radius, denoted as _cut, and the proposed approach that utilizes a fixed number of neighbors, denoted as ^Heavyside_nn or ^tanh_nn (depending on the weight function employed). We set up NPT simulations of BCC Fe at 100 K, during which the pressure was gradually increased to around 40 GPa (corresponding to a volumetric compression of approximately 20%). The variations of the _cut and _nn components with respect to the volumetric ratio V/V_0 are illustrated in Figure <ref>-a, b, c, respectively. The two ways of computing the bispectrum lead to very different results. Indeed, _nn is almost insensitive to density while the _cut components evolve significantly with it. Thus, the _nn should be better at identifying crystal structures even during MD simulations involving a large change in material density. Besides, one can notice a difference between ^Heavyside_nn and ^tanh_nn. Indeed, even if it does not depend on local density, ^Heavyside_nn displays some slight evolution that is caused by thermal fluctuations around the sharp step of the Heavyside function. On the contrary, ^tanh_nn appears satisfyingly stable with density, exhibiting minor sensitivity. In the following, we consider BSO4 descriptors computed using either a fixed cutoff radius or a fixed target number of neighbors using the tanh regularization. The two descriptors will be respectively labelled ^N_nn and ^R_cut with N and R the values of the corresponding target number of neighbors or cutoff radius. §.§.§ Size of the database As described above, configurations included in the database are selected at different temperatures and after distinct instantaneous deformations. The combination of these two effects makes the local environment of each atom of the supercell unique. Hence, there is no need to apply a peculiar sparsification procedure to avoid redundancy in the database. For each of the four different crystal structures considered, 100 snapshots containing 864 atoms each are extracted from the NPT trajectory up to 2/3 T_m. Then, the 6 non-zero components of the deformation gradient tensor F are sampled using the Latin Hypercube Sampling with Multi-Dimensional Uniformity (LHSMDU) <cit.>, in order to obtain 100 draws to be applied to the different snapshots, while ensuring the condition ϵ_eq<0.05. Following this procedure, our total database contains N_atoms x N_snapshots = 864 x 100 = 86400 bispectrum vectors 𝐁∈ℝ^55, for each crystal structures, giving a total of M=345600 𝐁 vectors. § CRYSTAL STRUCTURE CLASSIFIER In this section we present the different steps of our procedure to build the supervised learning crystal structure analysis (SL-CSA) tool. Firstly, we delve into the configuration of our current classifier, which is constructed through a two-step process involving dimensionality reduction and logistic regression (LR). Subsequently, we compare our classification models to established tools in the literature, with particular emphasis on the adaptive common neighbor analysis (a-CNA), polyhedral template matching (PTM) and diamond structure identification (IDS), three methods available in  <cit.>. 
§.§ Dimensionality reduction The database is composed of 4 different crystal structures, namely body-centered cubic (BCC), face-centered cubic (FCC), hexagonal close-packed (HCP) and cubic diamond (c-DIA), each containing 86400 local atomic environments encoded by descriptor vectors 𝐁 ∈ ℝ^55. For the initial step of classification, we performed a supervised dimensionality reduction using linear discriminant analysis (LDA), a statistical technique that is commonly used for supervised classification and feature extraction in machine learning <cit.>. LDA is particularly relevant for dimension reduction while preserving the most important discriminatory information. The underlying assumption is that the covariance matrix of each class is the same. LDA works by finding a linear combination of the original features that maximizes the separation between different classes in the data. The separation between classes is achieved by maximizing the ratio of the between-class variance to the within-class variance. The resulting linear combination, or discriminant function, is then used to project the data onto a lower-dimensional space, with dimension equal to the number of labels minus one. In the general case, one can reduce the dimension of the initial descriptor 𝐁 ∈ ℝ^D, leading to a new projected descriptor 𝐱 = P_LDA(𝐁), with P_LDA : ℝ^D → ℝ^d=C-1: P_LDA(𝐁) = 𝐂_LDA^T · (𝐁 - 𝐁̄_db) with 𝐂_LDA ∈ ℝ^D×d the reduction coefficient matrix of LDA and 𝐁̄_db ∈ ℝ^D the average descriptor of the entire database. In the present case, the initial dimension D=55 corresponds to the BSO4 dimension imposed by the choice J_max=4, and the projected descriptor has dimension d=3, equal to the number of crystal structures in the database minus one. Thanks to LDA, the separation between classes is maximized in this low-dimensional space, hence facilitating the subsequent classification step. §.§ Logistic Regression Once the dimension reduction step is performed by means of LDA, the new descriptor 𝐱 is used as input for a multinomial logistic regression, which provides a probability vector 𝐩 = P_LR(𝐱), with P_LR : ℝ^d → ℝ^C, defined as: P_LR(𝐱) = exp(𝐬(𝐱)) / ∑_i exp(𝐬(𝐱)[i]) where 𝐬(𝐱) ∈ ℝ^C is the score vector 𝐬(𝐱) = 𝐛_LR + 𝐃_LR · 𝐱 with 𝐛_LR ∈ ℝ^C and 𝐃_LR ∈ ℝ^C×d the bias vector and decision matrix of the logistic regression model after training. In the end, the crystal structure assigned to each atom with descriptor 𝐱 is computed as the argmax of the probability vector 𝐩(𝐱). In the present work, C=4, i.e. the total number of crystal structures. The logistic regression step systematically attributes a crystal structure to every atom, which can lead to misclassification: some atoms, e.g. defective ones, will be wrongly classified as crystalline. This misclassification is expected because the LDA dimension-reduced descriptors 𝐱 are constructed based on the assumption that the covariance matrix of each class is identical. This pitfall can be overcome by methods such as QDA (Quadratic Discriminant Analysis), which are less strict and allow for different feature covariance matrices for different classes. However, these methods result in a quadratic decision boundary, which is more challenging to train and stabilize. For this reason, we stick to the framework of LDA and make the final decision in the classification by employing statistical distances with respect to each class based on the full covariance matrix of each class, similar to <cit.>.
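Both steps have drop-in equivalents in scikit-learn. The following schematic stand-in (with random data replacing the real bispectrum database) illustrates the LDA projection to d = C−1 = 3 dimensions followed by the multinomial logistic regression; it is a sketch of the training scheme described above, not the authors' code.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# X: (M, 55) bispectrum vectors of the training database, y: crystal-structure labels
# (0=BCC, 1=FCC, 2=HCP, 3=c-DIA).  Random stand-ins keep the sketch self-contained.
rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 55)) + np.repeat(np.arange(4)[:, None], 1000, axis=0)
y = np.repeat(np.arange(4), 1000)

clf = make_pipeline(
    LinearDiscriminantAnalysis(n_components=3),   # 55-D descriptor -> 3-D (C-1) space
    LogisticRegression(max_iter=1000),            # multinomial decision on the reduced descriptor
)
clf.fit(X, y)

proba = clf.predict_proba(X[:5])       # probability vector p(x) per atom
labels = clf.predict(X[:5])            # argmax of p(x)
x_reduced = clf[:-1].transform(X[:5])  # LDA-projected descriptors, reused below for the
                                       # Mahalanobis-distance sanity check
```

Keeping the two stages in a single pipeline guarantees that the projection used at prediction time is exactly the one learned during training.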
As a consequence of this pitfall, an additional step, serving as a sanity check (referred to as step 4 in Algorithm <ref>), is performed and described in the subsequent section. §.§ Crystal structure classifier The full database presented in <ref> is replicated into six different databases computed with different flavors of the descriptors. We use the 𝐁^R_cut descriptor with R equal to 3.0, 6.0, and 9.0, and the 𝐁^N_nn descriptor with N set to 24, 48, and 72, respectively. A different instance of our classification model (SL-CSA) is trained on each of the six databases. The results obtained with each model are compared against each other and against crystal structure classifiers from the literature in the next section. The purpose of the classification procedure is to identify the local crystal structure with high fidelity or to detect atoms that do not belong to any of the reference crystal structures, which are considered as outliers. Since the logistic regression assigns a crystal structure to all atoms, we consider a final sanity check step using the Mahalanobis distance of a given reduced descriptor 𝐱 to a given crystal structure cs: d^cs_Maha(𝐱) = √((𝐱 - 𝐱̄_cs)^T · Σ^-1_cs · (𝐱 - 𝐱̄_cs)) where Σ_cs and 𝐱̄_cs are the sample covariance matrix and average of the descriptors in the reference database associated with crystal structure cs. The decision to assign cs to the descriptor 𝐱 is made based on a threshold criterion on this distance: if the distance is lower than an acceptance threshold, cs is assigned to the descriptor. Hence, this acceptance threshold determines the accuracy of the classifier. It is chosen by investigating the properties of the distributions of distances in the reference database: for each descriptor, the distance to each cs is computed, and the results are gathered by crystal structure so that the distributions of within- and between-class distances can be computed for each crystal structure. Results corresponding to 𝐁^24_nn are shown in Figure <ref>, whereas the distributions for the other descriptors are given in the Supporting information (see Figures S1 to S4). The separation between c-DIA and the other classes is systematically large due to the geometrical particularities of the diamond structure compared to BCC, FCC and HCP. On the other hand, between-class distances for the BCC, FCC and HCP crystal structures are smaller and may even overlap. The acceptance threshold needs to be carefully chosen to ensure a minimal error during classification. In order to calculate this threshold for each class, we define the error rate of a crystal structure cs as: τ_cs = n_{d^cs_Maha ≥ threshold} / n^cs_tot where n_{d^cs_Maha ≥ threshold} is the number of atoms of crystal structure cs in the database with a distance greater than or equal to the threshold and n^cs_tot is the total number of atoms with crystal structure cs. We also define a second error rate, τ̄_cs, that quantifies the number of misassigned atoms (i.e. atoms of another crystal structure with a between-class distance to cs lower than the defined threshold): τ̄_cs = n_{d^cs_Maha ≤ threshold} / (n_tot - n^cs_tot) where n_{d^cs_Maha ≤ threshold} is the number of atoms belonging to another crystal structure with a distance to cs lower than the threshold, and n_tot is the total number of samples in the database. Finally, the optimal threshold for each crystal structure is defined as the Mahalanobis distance that minimizes |τ_cs - τ̄_cs|. Both error rates are displayed as blue and red lines in Figure <ref> for each crystal structure as a function of the acceptance threshold.
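A schematic implementation of this distance-based sanity check and of the error-rate crossing used to pick the acceptance threshold is sketched below; the candidate-threshold grid and the toy two-class data are our own choices, made only so the sketch runs on its own.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance of each reduced descriptor in x to one class."""
    d = x - mean
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

def class_stats(x_reduced, labels, cs):
    """Sample mean and inverse covariance of the reduced descriptors of class cs."""
    xi = x_reduced[labels == cs]
    return xi.mean(axis=0), np.linalg.inv(np.cov(xi, rowvar=False))

def optimal_threshold(d_within, d_between, n_tot, grid=np.linspace(0.5, 15.0, 500)):
    """Threshold where the two error rates cross:
    tau     = fraction of in-class atoms rejected (d >= threshold),
    tau_bar = fraction of out-of-class atoms accepted (d <= threshold)."""
    best, best_gap = grid[0], np.inf
    for t in grid:
        tau = np.mean(d_within >= t)
        tau_bar = np.sum(d_between <= t) / (n_tot - d_within.size)
        gap = abs(tau - tau_bar)
        if gap < best_gap:
            best, best_gap = t, gap
    return best

# Toy usage with synthetic 3-D reduced descriptors for two classes.
rng = np.random.default_rng(1)
xr = np.concatenate([rng.normal(0.0, 1.0, (1000, 3)), rng.normal(4.0, 1.0, (1000, 3))])
lab = np.repeat([0, 1], 1000)
mean0, icov0 = class_stats(xr, lab, 0)
d_within = mahalanobis(xr[lab == 0], mean0, icov0)
d_between = mahalanobis(xr[lab != 0], mean0, icov0)
thr = optimal_threshold(d_within, d_between, n_tot=lab.size)
```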
The optimal threshold is displayed as a vertical dashed line and is defined by the crossing of the two error rates. Since the distributions of Mahalanobis distance are entirely disjointed for the c-DIA case, the acceptance threshold is arbitrarily set to 6. This allows to get 99 % of correct predictions on the database. For BCC, FCC and HCP crystal structures, the calculated optimal thresholds are equal to 5.1, 5.4 and 5.1 respectively. This means that atoms exhibiting within-class distances greater than the acceptance threshold for each crystal structure will be considered as defective. In addition, the Mahalanobis distance to a specific crystal structure can also be used as a tool to classify defects, as was done previously <cit.> and this will be discussed in Section <ref>. §.§ Comparison to standard classifiers In the following, we focus on the two classifiers built on the ^24_nn and ^6_cut, descriptors as they systematically provide better scores when identifying the structures in the testing database. The selected classifiers are tested against well-established crystal structure identification methods, namely the a-CNA, PTM, and IDS algorithms (where applicable). It is important to mention that certain algorithms may require specific configurations. Therefore, in this section, we provide a detailed description of the parameters used in the current study. Prior to analyzing a particle neighborhood, a-CNA determines the optimal cutoff radius automatically for each individual particle by computing a local length scale specific to each crystal structure. It is to be noted that a-CNA does not have a tolerance criterion associated with its classification, i.e. if an atom neighborhood cannot be mapped to one of the known crystal structures, it is classified as non-crystalline, and labeled “unknown". In contrast to a-CNA, PTM requires a user-defined RMSD parameter, typically set to 0.1 by default. This parameter is the same for all structures, and atoms with RMSD greater than the threshold are assigned as non-crystalline. Setting a much larger RSMD threshold can reduce the number of “unknown" labels, however, it also favors the appearance of false positives. Calibration of the RMSD parameter for different crystal structures is out of the scope of the present study and we only perform the analysis with the most commonly used standard settings, i.e. RMSD=0.1. The acceptance threshold of the SL-CSA classifier is defined for each type of structure as described in Section <ref>. Below we explore the performance of the structure identification methods for different thermo-mechanical states. In Section <ref> we investigate the effect of thermal fluctuations; in Section <ref> sensitivity to the material's density is examined; and finally, in Section <ref> we consider the sensitivity to large non-hydrostatic deformation. §.§.§ Sensitivity to high temperature Here we perform an analysis of NPT trajectories in BCC Iron, FCC Aluminium, HCP Zirconium, and c-DIA Silicon, where pressure was maintained at 0 GPa as the temperature is increased up to each material's melting point T_m. These simulations are distinct from those of the training database. We define the accuracy score of a crystal structure classifier algorithm as the number of atoms identified as crystalline over the total number of atoms in the simulation, assuming that the analyzed materials are fully crystalline at 2/3T_m. 
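For reference, such a comparison is typically scripted through OVITO's Python interface along the following lines. The file name is a placeholder, and the modifier and property names are quoted from memory and should be checked against the current API documentation; the snippet is only a sketch of how the accuracy score defined above can be evaluated for PTM (or, commented out, for a-CNA).

```python
from ovito.io import import_file
from ovito.modifiers import (CommonNeighborAnalysisModifier,
                             PolyhedralTemplateMatchingModifier)

# "dump.al_700K.bin" is a placeholder for one snapshot of the NPT trajectory.
pipeline = import_file("dump.al_700K.bin")

# PTM with the standard RMSD threshold used in the text ...
pipeline.modifiers.append(PolyhedralTemplateMatchingModifier(rmsd_cutoff=0.1))
# ... or a-CNA with its automatic per-atom cutoff (no free parameter):
# pipeline.modifiers.append(CommonNeighborAnalysisModifier(
#     mode=CommonNeighborAnalysisModifier.Mode.AdaptiveCutoff))

data = pipeline.compute()
stype = data.particles["Structure Type"][...]
other = PolyhedralTemplateMatchingModifier.Type.OTHER
accuracy = 1.0 - (stype == other).sum() / len(stype)   # fraction identified as crystalline
```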
Figure <ref> reports the evolution of the accuracy score as a function of the average temperature in the simulation cell for the four different crystal structures. The SL-CSA classifier clearly outperforms a-CNA, and PTM for BCC Fe, FCC Al, and HCP Zr. The conventional methods shift from 100 % accuracy before reaching 2/3T_m, while the SL-CSA retains more than 98 % accuracy along the whole NPT trajectory. The IDS tool used for c-DIA Si appears not very sensitive to thermal noise and provides comparable results with SL-CSA with an accuracy above 99 % for this structure along the entire trajectory. Finally, the classifier built with ^24_nn appears less sensitive to thermal fluctuations than its counterpart using ^6_cut. §.§.§ Sensitivity to material's density In order to examine the performance of our classifiers in extrapolation conditions with changing density (beyond their trained domain at a density of ρ_0), we generate 100 synthetic samples for each of the four crystal structures at densities ρ∈ [0.5ρ_0, 1.5ρ_0]. These samples correspond to simulation cells at different lattice parameters containing 864 atoms, each atom being shifted from its crystalline position by Gaussian noise with σ=0.1 Å. This setup allows building artificial configurations far from the domain of validity of the interatomic potential, spanning a wide range of material densities. Figure <ref> compares the results between a-CNA, IDS, PTM, and the two classifiers based on ^6_cut and ^24_nn descriptors. Here, ρ_0 is taken as the reference density for all measurements and classifiers since we aim at comparing the results w.r.t. the samples present in the database. For the three metallic systems, a-CNA and PTM algorithms provide very good results and correctly predict 100 % of the crystal structures in the entire range of density spanned. The CNA-based classifiers are not adapted for the diamond structure, and we use the IDS classifier instead. Both IDS and PTM predict 100% of c-DIA structure for any density, proving that they are not sensitive to volumetric strain. Thus, the three tested standard classifiers can perform well in the presence of large volumetric deformation, at least in the presence of moderate thermal noise. The ^6_cut based classifier performs well for the densities near ρ_0. However, due to the fixed cutoff, it largely fails at predicting correct crystal structures when density changes and may even predict a wrong crystal structure, e.g. BCC instead of FCC/HCP. Using the classifier trained on the ^24_nn descriptor allows for the full restitution of the correct crystal structures in BCC, FCC, HCP, and c-IDA systems, over the whole range of densities. Together with the low sensitivity to thermal noise, these results prove its applicability to various thermodynamic conditions, i.e. when large hydrostatic pressure and high temperature are involved. §.§.§ Sensitivity to large deformation In order to explore the sensitivity of the classifiers to large deformations, we design a database with NPT trajectories for the four tested structures. For each material, the trajectories are performed at ambient pressure and at 2/3T_m for 200 ps. Along each trajectory, 200 snapshots are extracted for subsequent application of large deformations. All configurations are different from those of the learning database, and simulations were carried out with different seeds for temperature initialization. 
In order to explore independently diagonal and deviatoric deformations, we consider two subsets of structures, where (F_11, F_33) and (F_12, F_13) are applied. Each component of the deformation tensor F_ij is drawn from a uniform distribution in the intervals [0.7, 1.3] and [-0.3, 0.3] for longitudinal and deviatoric strains components respectively. The 200 couples of diagonal deformation tensor components (F_11, F_33) are assigned to the 200 snapshots of each crystal structure, by applying the corresponding deformation gradient tensor F to the simulation cell while rescaling atomic positions. Finally, the different classifiers a-CNA, PTM, _cut^6 and _nn^24 are used to analyse the deformed samples. The same process using deviatoric deformation tensor components (F_12, F_13) has been employed. Results for each crystal structure are displayed in Figure <ref>. Similar trends are observed for BCC, FCC, HCP, and c-DIA, where the performance of the classifiers is roughly independent of the crystalline structure. For diagonal deformations associated with compression and tension of the simulation cell, the CNA and PTM analysis are limited to small or very small deformation only (lower than 5%), and their accuracy is strongly reduced beyond this point. This effect is likely attributed to the combined effects of deformation and temperature. On the other hand, the performances of the _cut^6 and _nn^24 classifiers remain robust up to deformation as large as 20% (30% in the best cases). We note that _nn^24 even shows an extended range of accuracy compared to _cut^6, and both classifiers exhibit some anisotropic response to diagonal deformation. Concerning deviatoric deformations the performances of our classifiers are even better than CNA and PTM, with correct structural assignment up to 30% deformation. Once again we attribute the high accuracy of our classifiers to their capabilities to handle temperature and deformation effect conjointly. This is highly valuable in the context of in situ classification of large-scale simulations of materials at extreme conditions. In the following, we demonstrate the applications of our most robust classifier constructed with the ^24_nn descriptor for the analysis of structures challenging to perform with traditional methods. § APPLICATIONS Two examples of interest to the materials science community are outlined below. The first example considers the crystalline Zr solid-solid phase transition from HCP to BCC under high pressure, and the second explores the identification of dislocations as they form and expand from a Frank-Read source in Al. The difficulty in the first example is in the accurate identification of the crystal structures where volume discontinuity may be present, whereas in the second example it relies on the extraction of defective atoms present in the dislocations' cores. The comparison of the results provided by SL-CSA and traditional methods is made for both applications. §.§ HCP-BCC phase transition in HCP Zr Here we perform the analysis of the high-pressure HCP → BCC transition in Zr, following the MD simulation procedure previously described in Refs. <cit.>. We aim to reveal the effect of the accuracy of the classification on the characterization of this transition. To this end, we track the number of atoms belonging to different crystalline structures as a function of time. To investigate this transition, we performed MD simulations in a simulation cell with 442,368 atoms at high temperature and high pressure. 
The system was first equilibrated at 0 GPa and 1500 K for 10 ps in the NPT ensemble using a 2 fs timestep. Then, the target pressure was set to 18 GPa, allowing the simulation cell to relax independently in the three directions of space. The temperature was maintained at 1500 K for the entire simulation with a Nosé-Hoover thermostat. At high pressure, a few ps are needed for the first BCC seed to nucleate in the HCP single crystal, meaning that the BCC structure becomes thermodynamically more stable than its HCP counterpart. Four snapshots of the MD simulation are taken at different times and are depicted in Figure <ref>. The results of structure identification during the phase transition revealed unexpected differences between the methods, which can be grouped into two distinct categories- CNA and PTM0.1 for one and PTM0.15 and SL-CSA for the other, as shown in Figure <ref>. From the beginning of the simulation, CNA and PTM0.1 correctly identify 80% of the atoms, this proportion rapidly decreasing to 70%. Then, as the structural transition occurs, the number of unclassified atoms increases drastically up to 60%, demonstrating the inability of these methods to extract the correct underlying mechanism. The proportion of BCC atoms finally increases slowly, reaching asymptotic values between 60 and 70%, well below the expected proportion. In this context, using these results to characterize the kinetic of this phase transition would probably lead to wrong predictions. On the other hand, PTM0.15 and SL-CSA exhibit similar trends. Initially, there is a high population of HCP atoms followed by a rapid transition towards the BCC structure. Finally, the system reaches an asymptotic behavior, resulting in a significant population of BCC atoms. The population of unclassified atoms during the transition remains low. Although these two methods show similar behavior, there are still some differences, especially during the transition phase. SL-CSA exhibits a lower population of unclassified atoms compared to PTM0.15. Additionally, at the end of the simulation, there is an approximate 10% difference in the population of BCC atoms between the two methods, with SL-CSA leading to a higher population of the BCC phase. Based on these results, we conclude that the SL-CSA classifier yields better results and, that this method is well suitable for a quantitative evaluation of the mechanism and kinetics of phase transitions. In addition, the proposed method does not require any parameter-tuning in opposition with PTM and its corresponding RMSD, to which the results are very sensitive. §.§ Frank-Read source in Aluminium at 700 K In this example, we investigate the capabilities of the classifier to extract the atoms that belong to defect structures from the bulk, and, in particular, to identify atoms belonging to dislocation cores. We emphasize that the classifier was not trained to distinguish any defective configuration. Hence, the analysis developed below only concerns the identification of crystalline and non-crystalline (or defective) atoms. The present simulation involves the double emission of Frank-Read dislocation sources in FCC Al at high temperature, an example that has been previously used (at low temperature) to demonstrate the capability of the PTM algorithm in <cit.>, with a setup similar to the one described in <cit.>. The only difference is that the dislocation sources have been pinned by two cylindrical pores periodically along the z direction. 
The system with 2,300,504 atoms was initially equilibrated in the NPT ensemble at 0 GPa and 700 K for 10 ps, using similar coupling constants as for the HCP to BCC simulation described in the previous section. Finally, a shear stress σ_yz of 1.5 GPa was applied to the simulation cell, while keeping the temperature at 700 K in order to trigger the expansion of dislocation lines emerging from the Frank-Read sources. A snapshot of the simulation cell at t = 14 ps is displayed in Figure <ref>. The aim here is to investigate the capability of the SL-CSA procedure to extract defects, i.e. atoms that have not been assigned by the classifier to a specific crystal structure. Figure <ref>-a, b depicts the microstructure analyzed using PTM0.1 and SL-CSA, after removing atoms belonging to the FCC structure. The significant thermal noise in the simulation cell causes the presence of atomic environments deviating from the ideal FCC structure, including non-crystalline and BCC atoms. Here, PTM tends to identify more non-crystalline atoms than SL-CSA, the latter leading to the identification of HCP atoms in the FCC bulk. Since FCC Al tends to easily nucleate HCP stacking faults, the fact that thermally disturbed FCC bulk atoms are being identified as HCP is not that uncommon. A main difference subsists between both classifiers: PTM method appears more dependent on the local thermal/stress state than SL-CSA. Indeed, the area that is relaxed by the dislocation loop (i.e. at lower stress) appears less noisy than the one that is not in its vicinity, in opposition to the SL-CSA, where the noisy atoms look homogeneously distributed across the sample. In the end, what really matters is the ability of the present methods to extract defective parts of atomistic simulations for subsequent analysis. For example, defective atoms identified with SL-CSA or PTM could be fed to the Dislocation eXtraction Analysis (DXA) tool <cit.> for dislocation identification. Both PTM and SL-CSA seem able to identify the dislocation loop stacking faults generated by the Frank-Read sources, constituted by atoms assigned with the HCP crystal structure. Employing the cluster analysis available in allows for removing noisy bulk atoms and the remaining defective structure extracted from both PTM and SL-CSA classifications are displayed in Figure <ref>-c, d respectively. In comparison to the PTM0.1 analysis workflow, SL-CSA looks more robust and leads to the extraction of almost only dislocation core atoms. Such a structure would be a good candidate for performing extended analysis in terms of dislocation density calculations. However, dislocation line extraction is not the object of the present work and will be part of further studies. Overall, our crystal structure classification procedure SL-CSA performs rather well compared to the existing tools from the literature, even when both non-hydrostatic stresses and high temperature are involved such as in this dislocation-mediated plasticity simulation toy model. § CONCLUSION AND PERSPECTIVES We have introduced a novel classifier that surpasses the capabilities of conventional approaches, such as a-CNA and PTM, when it comes to identifying crystal structures under extreme conditions like high temperature, high pressure, and large deformation. This makes our method particularly suitable for real-time analysis of molecular dynamics (MD) simulations. 
Our proposed classifier operates on a simple learning process, utilizing a training database that encompasses various structures of interest, including BCC, FCC, HCP, and c-DIA. The characterization of local structures is facilitated by a spectral descriptor, which captures the geometric arrangement of neighboring atoms and represents it as a vector. We have proposed a modification to the conventional bispectrum descriptor, ensuring that a fixed number of neighbors is incorporated within the descriptor. This modification enables the analysis of materials at varying densities without loosing accuracy. The newfound insensitivity of the modified descriptor to density changes has a significant impact on the size of the required training database for the classifiers, while maintaining transferability. The training database includes configurations at high temperatures and small deformations, i.e. in the elastic regime. A simple logistic regression is employed for the classifier, carefully controlling the balance between false positives and false negatives. The classifier is applied following dimensionality reduction using an LDA discriminator. While LDA may lose information about the covariance matrix differences between the classes, we mitigate this by refining the results with a test based on Mahalanobis distance within each and across the classes. We compare the performance of our current SL-CSA classifier to standard classification tools (a-CNA and PTM) across various densities, temperatures, and deformations. Notably, the SL-CSA classifier demonstrates superior reliability, even in scenarios where temperature and deformation interact. Finally, the SL-CSA classifier was examined on large-scale simulations of solid-solid phase transformations and the detection of dislocation core atoms. These simulations showed that our classifier can conduct an analysis of crystalline structures with higher precision than traditional techniques, allowing to accurately estimate a proportion of atoms that belong to a given crystalline structure. We are optimistic that this capability will be advantageous for raising the accuracy of coarse-grained models of such processes. In perspective, given its transferability and capability to analyze unknown features, the present method holds potential for further expansion in identifying more complex crystalline structures and directly detecting specific defect types, such as dislocation cores. Additionally, its application can extend to novelty detection in the field of materials science, such as recent nano-phases inclusions <cit.>. § DATA AVAILABILITY Data will be made available on request. 10 nguyen2021billion Kien Nguyen-Cong, Jonathan T Willman, Stan G Moore, Anatoly B Belonoshko, Rahulkumar Gayatri, Evan Weinberg, Mitchell A Wood, Aidan P Thompson, and Ivan I Oleynik. Billion atom molecular dynamics simulations of carbon at extreme conditions and experimental time and length scales. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1–12, 2021. guo2022extending Zhuoqiang Guo, Denghui Lu, Yujin Yan, Siyu Hu, Rongrong Liu, Guangming Tan, Ninghui Sun, Wanrun Jiang, Lijun Liu, Yixiao Chen, et al. Extending the limit of molecular dynamics with ab initio accuracy to 10 billion atoms. In Proceedings of the 27th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pages 205–218, 2022. 
johansson2022micron Anders Johansson, Yu Xie, Cameron J Owen, Jin Soo, Lixin Sun, Jonathan Vandermause, Boris Kozinsky, et al. Micron-scale heterogeneous catalysis with bayesian force fields from first principles and active learning. arXiv preprint arXiv:2204.12573, 2022. Zepeda2017 L. A. Zepeda-Ruiz & A. Stukowski & T. Oppelstrup & V. V. Bulatov. Probing the limits of metal plasticity with molecular dynamics simulations. Nature, 550:492–495, 2017. Kelchner1998 Cynthia L. Kelchner, S. J. Plimpton, and J. C. Hamilton. Dislocation nucleation and defect structure during surface indentation. Phys. Rev. B, 58:11085–11088, Nov 1998. Steinhardt1983 Paul J. Steinhardt, David R. Nelson, and Marco Ronchetti. Bond-orientational order in liquids and glasses. Phys. Rev. B, 28:784–805, Jul 1983. Honeycutt1987 J. Dana. Honeycutt and Hans C. Andersen. Molecular dynamics study of melting and freezing of small lennard-jones clusters. The Journal of Physical Chemistry, 91(19):4950–4963, 1987. Faken1994 D Faken and H Jónsson. Systematic analysis of local atomic structure combined with 3d computer graphics. Comput. Mater. Sci., 2:279–286, 1994. Stukowski2012 Alexander Stukowski. Structure identification methods for atomistic simulations of crystalline materials. Modelling and Simulation in Materials Science and Engineering, 20(4):045021, may 2012. Ackland2006 G. J. Ackland and A. P. Jones. Applications of local crystal structure measures in experiment and simulation. Phys. Rev. B, 73:054104, Feb 2006. Lazar2015 Emanuel A. Lazar, Jian Han, and David J. Srolovitz. Topological framework for local structure analysis in condensed matter. Proceedings of the National Academy of Sciences, 112(43):E5769–E5776, 2015. Larsen2016 Peter Mahler Larsen, Søren Schmidt, and Jakob Schiøtz. Robust structural identification via polyhedral template matching. Modelling and Simulation in Materials Science and Engineering, 24(5):055007, may 2016. Tucker2009 G J Tucker, J A Zimmerman, and D L McDowell. Shear deformation kinematics of bicrystalline grain boundaries in atomistic simulations. Modelling and Simulation in Materials Science and Engineering, 18(1):015002, dec 2009. Tucker2011 Garritt J. Tucker, Jonathan A. Zimmerman, and David L. McDowell. Continuum metrics for deformation and microrotation from atomistic simulations: Application to grain boundaries. International Journal of Engineering Science, 49(12):1424–1434, 2011. Advances in generalized continuum mechanics. Zimmerman2009 Jonathan A. Zimmerman, Douglas J. Bammann, and Huajian Gao. Deformation gradients for continuum mechanical analysis of atomistic simulations. International Journal of Solids and Structures, 46(2):238–253, 2009. Hartley2005 Craig S. Hartley and Y. Mishin. Representation of dislocation cores using nye tensor distributions. Materials science & engineering. A, Structural materials : properties, microstructure and processing, 400:18–21, 2005. Cermelli2001 Paolo Cermelli and Morton E. Gurtin. On the characterization of geometrically necessary dislocations in finite plasticity. Journal of the mechanics and physics of solids, 49(7):1539–1568, 2001. Stukowski2010dxa A. Stukowski & K. Albe. Extracting dislocations and non-dislocation crystal defects from atomistic simulation data. Modelling Simul. Mater. Sci. Eng., 18(085001), 2010. Stukowski2012dxa A. Stukowski et al. Automated identification and indexing of dislocations in crystal interfaces. Modelling Simul. Mater. Sci. Eng., 20(085007), 2012. Bulatov2006 V. V. Bulatov & W. Cai. 
Computer Simulations Of Dislocations. Oxford University Press, 2006. Zepeda2020 L. A. Zepeda-Ruiz & A. Stukowski & T. Oppelstrup & N. Bertin & N. R. Barton & R. Freitas & V. V. Bulatov. Atomistic insights into metal hardening. Nature Materials, pages 1–6, 2020. Bertin2020 Nicolas Bertin, Ryan B. Sills, and Wei Cai. Frontiers in the simulation of dislocations. Annual Review of Materials Research, 50(1):437–464, 2020. ovito_ref Alexander Stukowski. Visualization and analysis of atomistic simulation data with ovito-the open visualization tool. Modelling and Simulation in Materials Science and Engineering, 18(1), JAN 2010. hsu2023 Tim Hsu, Babak Sadigh, Nicolas Bertin, Cheol Woo Park, James Chapman, Vasily Bulatov, and Fei Zhou. Score-based denoising for atomic structure identification, 2023. Plimpton1995 Steve Plimpton. Fast parallel algorithms for short-range molecular dynamics. Journal of Computational Physics, 117(1):1–19, 1995. Thompson2022 Aidan P. Thompson, H. Metin Aktulga, Richard Berger, Dan S. Bolintineanu, W. Michael Brown, Paul S. Crozier, Pieter J. in 't Veld, Axel Kohlmeyer, Stan G. Moore, Trung Dac Nguyen, Ray Shan, Mark J. Stevens, Julien Tranchida, Christian Trott, and Steven J. Plimpton. Lammps - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales. Computer Physics Communications, 271:108171, 2022. Cieren2014 Emmanuel Cieren, Laurent Colombet, Samuel Pitoiset, and Raymond Namyst. Exastamp: A parallel framework for molecular dynamics on heterogeneous clusters. In Luís Lopes, Julius Žilinskas, Alexandru Costan, Roberto G. Cascella, Gabor Kecskemeti, Emmanuel Jeannot, Mario Cannataro, Laura Ricci, Siegfried Benkner, Salvador Petit, Vittorio Scarano, José Gracia, Sascha Hunold, Stephen L. Scott, Stefan Lankes, Christian Lengauer, Jesús Carretero, Jens Breitbart, and Michael Alexander, editors, Euro-Par 2014: Parallel Processing Workshops, pages 121–132, Cham, 2014. Springer International Publishing. Cieren2015Thesis Emmanuel Cieren. Molecular Dynamics for Exascale Supercomputers. Theses, Université de Bordeaux, October 2015. Prat2019Thesis Raphael Prat. Équilibrage dynamique de charge sur supercalculateur exaflopique appliqué à la dynamique moléculaire. Theses, Université de Bordeaux, October 2019. Prat2020 Raphael Prat, Thierry Carrard, Laurent Soulard, Olivier Durand, Raymond Namyst, and Laurent Colombet. Amr-based molecular dynamics for non-uniform, highly dynamic particle simulations. Computer Physics Communications, 253:107177, 2020. Musil2021 Felix Musil, Andrea Grisafi, Albert P. Bartók, Christoph Ortner, Gábor Csányi, and Michele Ceriotti. Physics-inspired structural representations for molecules and materials. Chemical Reviews, 121(16):9759–9815, 2021. PMID: 34310133. Chung2022 Heejung W. Chung, Rodrigo Freitas, Gowoon Cheon, and Evan J. Reed. Data-centric framework for crystal structure identification in atomistic simulations using machine learning. Phys. Rev. Mater., 6:043801, Apr 2022. Artrith2017 Nongnuch Artrith, Alexander Urban, and Gerbrand Ceder. Efficient and accurate machine-learning interpolation of atomic energies in compositions with many species. Physical Review B, 96(1):014112, 2017. Behler2011 Jörg Behler. Atom-centered symmetry functions for constructing high-dimensional neural network potentials. The Journal of chemical physics, 134(7):074106, 2011. De2016 Sandip De, Albert P Bartók, Gábor Csányi, and Michele Ceriotti. Comparing molecules and solids across structural and alchemical space. 
Physical Chemistry Chemical Physics, 18(20):13754–13769, 2016. Zeni2021 Claudio Zeni, Kevin Rossi, Aldo Glielmo, and Stefano De Gironcoli. Compact atomic descriptors enable accurate predictions via linear models. The Journal of Chemical Physics, 154(22):224112, 2021. Batra2019 Rohit Batra, Huan Doan Tran, Chiho Kim, James Chapman, Lihua Chen, Anand Chandrasekaran, and Rampi Ramprasad. General atomic neighborhood fingerprint for machine learning-based methods. The Journal of Physical Chemistry C, 123(25):15859–15866, 2019. Chandrasekaran2019 Anand Chandrasekaran, Deepak Kamal, Rohit Batra, Chiho Kim, Lihua Chen, and Rampi Ramprasad. Solving the electronic structure problem with machine learning. npj Computational Materials, 5(1):1–7, 2019. Leitherer2021 Andreas Leitherer, Angelo Ziletti, and Luca M Ghiringhelli. Robust recognition and exploratory analysis of crystal structures via bayesian deep learning. Nature communications, 12(1):1–13, 2021. DeFever2019 Ryan S DeFever, Colin Targonski, Steven W Hall, Melissa C Smith, and Sapna Sarupria. A generalized deep learning approach for local structure identification in molecular simulations. Chemical science, 10(32):7503–7515, 2019. Goryaeva2020 Alexandra M Goryaeva, Clovis Lapointe, Chendi Dai, Julien Dérès, Jean-Bernard Maillet, and Mihai-Cosmin Marinica. Reinforcing materials modelling by encoding the structures of defects in crystalline solids into distortion scores. Nature communications, 11(1):1–14, 2020. goryaeva_compact_2023 Alexandra M. Goryaeva, Christophe Domain, Alain Chartier, Alexandre Dézaphie, Thomas D. Swinburne, Kan Ma, Marie Loyer-Prost, Jérôme Creuze, and Mihai-Cosmin Marinica. Compact A15 Frank-Kasper nano-phases at the origin of dislocation loops in face-centred cubic metals. Nature Communications, 14(1):3003, May 2023. bispectre Albert Bartók, Risi Kondor, and Gábor Csányi. On representing chemical environments. Physical Review B, 87:184115, 2013. mendelev_pm_2008 M.I. Mendelev, M.J. Kramer, C.A. Becker, and M. Asta. Analysis of semi-empirical interatomic potentials appropriate for simulation of crystalline and liquid al and cu. Philosophical Magazine, 88(12):1723–1750, 2008. mendelev_pm_2003 M. I. Mendelev, S. Han, D. J. Srolovitz, G. J. Ackland, D. Y. Sun, and M. Asta. Development of new interatomic potentials appropriate for crystalline and liquid iron. Philosophical Magazine, 83(35):3977–3994, 2003. mendelev_pml_2007 M. I. Mendelev and G. J. Ackland. Development of an interatomic potential for the simulation of phase transformations in zirconium. Philosophical Magazine Letters, 87(5):349–359, 2007. stillinger_prb_1985 Frank H. Stillinger and Thomas A. Weber. Computer simulation of local order in condensed phases of silicon. Phys. Rev. B, 31:5262–5271, Apr 1985. snap Aidan Thompson, L.P. Swiler, Christian Trott, S.M. Foiles, and Garritt Tucker. Spectral neighbor analysis method for automated generation of quantum-accurate interatomic potentials. Journal of Computational Physics, 285, 2015. snap2 Mitchell A. Wood and Aidan P. Thompson. Extending the accuracy of the snap interatomic potential form. J. Chem. Phys., 148(24), 2018. goryaeva2021 Alexandra M Goryaeva, Julien Dérès, Clovis Lapointe, Petr Grigorev, Thomas D Swinburne, James R Kermode, Lisa Ventelon, Jacopo Baima, and Mihai-Cosmin Marinica. Efficient and transferable machine learning potentials for the simulation of crystal defects in bcc Fe and W. Phys. Rev. Mater., 5(10):103803, 2021. Anruo2023 Anruo Zhong, Clovis Lapointe, Alexandra M. 
Goryaeva, Jacopo Baima, Manuel Athènes, and Mihai-Cosmin Marinica. Anharmonic thermo-elasticity of tungsten from accelerated bayesian adaptive biasing force calculations with data-driven force fields. Phys. Rev. Mater., 7:023802, Feb 2023. goryaeva2019 Alexandra M Goryaeva, Jean-Bernard Maillet, and Mihai-Cosmin Marinica. Towards better efficiency of interatomic linear machine learning potentials. Comput. Mater. Sci., 166:200–209, 2019. lhsmdu Jared Deutsch and Clayton Deutsch. Latin hypercube sampling with multidimensional uniformity. Journal of Statistical Planning and Inference, 142:763–772, 2012. fisher_1936 R.A. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7:179, 1936. Willaime1989 François Willaime and Carlo Massobrio. Temperature-induced hcp-bcc phase transformation in zirconium: A lattice and molecular-dynamics study based on an n-body potential. Phys. Rev. Lett., 63:2244–2247, Nov 1989. Ahuja1993 Rajeev Ahuja, John M. Wills, Börje Johansson, and Olle Eriksson. Crystal structures of ti, zr, and hf under compression: Theory. Phys. Rev. B, 48:16269–16279, Dec 1993. Greeff2005 C W Greeff. Phase changes and the equation of state of zr. Modelling and Simulation in Materials Science and Engineering, 13(7):1015, aug 2005. Zong2019 Hongxiang Zong, Yufei Luo, Xiangdong Ding, Turab Lookman, and Graeme J. Ackland. hcp → ω phase transition mechanisms in shocked zirconium: A machine learning based atomic simulation study. Acta Materialia, 162:126–135, 2019. Xia1991 Hui Xia, Arthur L. Ruoff, and Yogesh K. Vohra. Temperature dependence of the ω-bcc phase transition in zirconium metal. Phys. Rev. B, 44:10374–10376, Nov 1991. Stukowski_2010 Alexander Stukowski and Karsten Albe. Dislocation detection algorithm for atomistic simulations. Modelling and Simulation in Materials Science and Engineering, 18(2):025016, mar 2010. Koning_2003 Maurice de Koning, Wei Cai, and Vasily V. Bulatov. Anomalous dislocation multiplication in fcc metals. Phys. Rev. Lett., 91:025503, Jul 2003.
http://arxiv.org/abs/2307.00895v1
20230703094612
Synthesis of Contrast-Enhanced Breast MRI Using Multi-b-Value DWI-based Hierarchical Fusion Network with Attention Mechanism
[ "Tianyu Zhang", "Luyi Han", "Anna D'Angelo", "Xin Wang", "Yuan Gao", "Chunyao Lu", "Jonas Teuwen", "Regina Beets-Tan", "Tao Tan", "Ritse Mann" ]
eess.IV
[ "eess.IV", "cs.CV" ]
Synthesis of contrast-enhanced breast MRI T. Zhang et al. Department of Radiology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands GROW School for Oncology and Development Biology, Maastricht University, P. O. Box 616, 6200 MD, Maastricht, The Netherlands Department of Diagnostic Imaging, Radboud University Medical Center, Geert Grooteplein 10, 6525 GA, Nijmegen, The Netherlands Dipartimento di diagnostica per immagini, Radioterapia, Oncologia ed ematologia, Fondazione Universitaria A. Gemelli, IRCCS Roma, Roma, Italy Department of Radiation Oncology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands Faculty of Applied Science, Macao Polytechnic University, 999078, Macao, China † T. Z. and L. H. contributed equally to this work. Correspondence: taotanjs@gmail.com Synthesis of Contrast-Enhanced Breast MRI Using Multi-b-Value DWI-based Hierarchical Fusion Network with Attention Mechanism Tianyu Zhang1,2,3, † Luyi Han1,3,† Anna D'Angelo4 Xin Wang1,2,3 Yuan Gao1,2,3 Chunyao Lu1,3 Jonas Teuwen5 Regina Beets-Tan1,2 Tao Tan1,6,* Ritse Mann1,3 ============================================================================================================================================================ Magnetic resonance imaging (MRI) is the most sensitive technique for breast cancer detection among current clinical imaging modalities. Contrast-enhanced MRI (CE-MRI) provides superior differentiation between tumors and invaded healthy tissue, and has become an indispensable technique in the detection and evaluation of cancer. However, the use of gadolinium-based contrast agents (GBCA) to obtain CE-MRI may be associated with nephrogenic systemic fibrosis and may lead to bioaccumulation in the brain, posing a potential risk to human health. Moreover, and likely more importantly, the use of gadolinium-based contrast agents requires venous cannulation and injection of the contrast medium, which is cumbersome and places a burden on the patient. To reduce the use of contrast agents, diffusion-weighted imaging (DWI) is emerging as a key imaging technique, although it currently usually complements breast CE-MRI. In this study, we develop a multi-sequence fusion network to synthesize CE-MRI based on T1-weighted MRI and DWIs. DWIs with different b-values are fused to efficiently utilize the difference features of DWIs. Rather than proposing a purely data-driven approach, we introduce a multi-sequence attention module to obtain refined feature maps, leverage hierarchical representation information fused at different scales, and utilize the contributions of the different sequences in a model-driven manner by introducing the weighted difference module. The results show that the multi-b-value DWI-based fusion model can potentially be used to synthesize CE-MRI, which could reduce or avoid the use of GBCA and thereby minimize the burden to patients. Our code is available at <https://github.com/Netherlands-Cancer-Institute/CE-MRI>. § INTRODUCTION Breast cancer is the most common cancer and the leading cause of cancer death in women <cit.>. Early detection of breast cancer allows patients to receive timely treatment, with a lower burden and a higher probability of survival <cit.>. Among current clinical imaging modalities, magnetic resonance imaging (MRI) has the highest sensitivity for breast cancer detection <cit.>.
In particular, contrast-enhanced MRI (CE-MRI) can identify tumors well and has become an indispensable technique for detecting and defining cancer <cit.>. However, the use of gadolinium-based contrast agents (GBCA) requires intravenous cannulation, which is burdensome for patients, time-consuming, and cumbersome in a screening setting. Moreover, contrast administration can lead to allergic reactions, and, finally, CE-MRI may be associated with nephrogenic systemic fibrosis and lead to bioaccumulation in the brain, posing a potential risk to human health <cit.>. In 2017, the European Medicines Agency concluded its review of GBCA, confirming recommendations to restrict the use of certain linear GBCA used in MRI body scans and to suspend the authorization of other contrast agents, although macrocyclic agents can still be used freely <cit.>. With the development of computer technology, artificial intelligence-based methods have shown potential in image generation and have received extensive attention. Studies have shown that generative models can effectively perform cross-modality synthesis among MR, CT, and PET <cit.>. Among these tasks, the synthesis of CE-MRI is particularly important, as discussed above, but few studies have addressed it owing to its challenging nature. Li et al. analyzed the feasibility of using T1-weighted and T2-weighted MRI to synthesize CE-MRI with a deep learning model <cit.>. Their results showed that the model they developed could potentially synthesize CE-MRI and outperform other comparison models. However, source data from too few MRI sequences (only T1 and T2) may not provide enough information to synthesize CE-MRI effectively. In another study, Chung et al. investigated the feasibility of using deep learning (a simple U-Net structure) to simulate contrast-enhanced breast MRI of invasive breast cancer, using source data including T1-weighted non-fat-suppressed MRI, T1-weighted fat-suppressed MRI, T2-weighted fat-suppressed MRI, DWI, and apparent diffusion coefficient maps <cit.>. However, obtaining a complete set of MRI sequences makes the examination costly and time-consuming. On the other hand, the information provided by multiple sequences may be redundant and may not contain information relevant to CE-MRI. Therefore, it is necessary to focus on the most promising sequences for synthesizing CE-MRI. Diffusion-weighted imaging (DWI) is emerging as a key imaging technique to complement breast CE-MRI <cit.>. DWI can provide information on cell density and tissue microstructure based on the diffusion of tissue water. Studies have shown that DWI can be used to detect lesions, distinguish malignant from benign breast lesions, and predict patient prognosis <cit.>. In particular, DWI can capture the dynamic diffusion state of water molecules to estimate the vascular distribution in tissues, which is closely related to the contrast-enhanced regions in CE-MRI. DWI may be a valuable alternative for breast cancer detection in patients with contraindications to GBCA <cit.>. Inspired by this, we develop a multi-sequence fusion network based on T1-weighted MRI and multi-b-value DWI to synthesize CE-MRI. Our contributions are as follows: (i) From a methodological perspective, we propose a multi-sequence fusion model that, for the first time, combines T1-weighted imaging and multi-b-value DWI to synthesize CE-MRI.
(ii) We introduce a hierarchical fusion module, a weighted difference module, and a multi-sequence attention module to enhance fusion at different scales, control the contribution of each sequence, and maximize the use of information within and across sequences. (iii) From the perspective of clinical application, the proposed model can be used to synthesize CE-MRI, which is expected to reduce the use of GBCA. § METHODS §.§ Patient collection and pre-processing This study was approved by the Institutional Review Board of our cancer institute with a waiver of informed consent. We retrospectively collected 765 patients with breast cancer presenting at our cancer institute from January 2015 to November 2020; all patients had biopsy-proven, invasive breast cancer (ductal carcinoma in situ was excluded). The MRIs were acquired with Philips Ingenia 3.0-T scanners, and three sequences were available in the in-house dataset: T1-weighted fat-suppressed MRI, contrast-enhanced T1-weighted MRI, and DWI. The DWI series comprises four b-values (b=0, 150, 800, and 1500 s/mm^2). All MRIs were resampled to 1 mm isotropic voxels and uniformly sized, resulting in volumes of 352×352 pixel images with 176 slices per MRI; registration was subsequently performed with Advanced Normalization Tools (ANTs) <cit.>. §.§ Model Figure <ref> illustrates the structure of the proposed model. First, the reconstruction module encodes and decodes each input MRI sequence to obtain latent representations of the different MRI sequences at multiple scales. Then, the hierarchical fusion module extracts this hierarchical representation information and fuses it at different scales. In each convolutional layer group of the reconstruction module, we use two 3 × 3 filters (same padding) with strides 1 and 2, respectively. The filters are followed by batch normalization, after which the activation functions LeakyReLU (with a slope of 0.2) and ReLU are used in the encoder and decoder, respectively. The l_1-norm is used as a reconstruction loss to measure the difference between the reconstructed image and the ground truth. Figure <ref> shows the detailed structure of the hierarchical fusion module, which includes two sub-modules: a weighted difference module and a multi-sequence attention module. The calculation of the apparent diffusion coefficient (ADC) map is shown in Eq. <ref>; the ADC provides a quantitative measure of the observed diffusion restriction in DWIs. Inspired by the ADC, a weighted difference module is designed in which a neural network is used to emulate the role of the ln transform, element-wise subtraction extracts the difference features between DWIs with different b-values, and the resulting features are weighted to obtain the weighted feature maps (F_DWI, Eq. <ref>). ADC=-ln(S_h/S_l)/(b_h-b_l)=[ln(S_l)-ln(S_h)]/(b_h-b_l) F_DWI=[f_θ^l(S_l)-f_θ^h(S_h)]/(b_h-b_l) where S_l and S_h are the image signals acquired at the lower b-value b_l and the higher b-value b_h, and f_θ^l and f_θ^h are the corresponding neural networks for the lower- and higher-b-value DWI. In the multi-sequence attention module, a channel-based attention mechanism is designed to automatically apply weights (A_s) to the feature maps (F_concat) from the different sequences to obtain a refined feature map (F_concat^'), as shown in Eq. <ref>.
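To make the weighted difference module concrete, the following is a minimal PyTorch sketch of the F_DWI equation above for a single pair of b-values. The branch architecture (channel widths, kernel sizes, depth) is an illustrative assumption and not the exact configuration of the released model.

```python
# Minimal PyTorch sketch of the ADC-inspired weighted difference module:
# F_DWI = (f_l(S_l) - f_h(S_h)) / (b_h - b_l) for one low/high b-value pair.
# Channel widths and kernel sizes are illustrative assumptions.
import torch
import torch.nn as nn

class WeightedDifference(nn.Module):
    def __init__(self, in_ch=1, feat_ch=32):
        super().__init__()
        def branch():   # stands in for f_theta^l / f_theta^h
            return nn.Sequential(
                nn.Conv2d(in_ch, feat_ch, kernel_size=3, padding=1),
                nn.BatchNorm2d(feat_ch),
                nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(feat_ch, feat_ch, kernel_size=3, padding=1),
            )
        self.f_low, self.f_high = branch(), branch()

    def forward(self, s_low, s_high, b_low, b_high):
        # Element-wise subtraction of the learned transforms, scaled by the
        # b-value difference, mirroring the ADC formula.
        return (self.f_low(s_low) - self.f_high(s_high)) / (b_high - b_low)

# Example: difference features for the b=150 / b=800 DWI pair (batch of 2 slices).
wd = WeightedDifference()
s_low = torch.randn(2, 1, 352, 352)
s_high = torch.randn(2, 1, 352, 352)
f_dwi = wd(s_low, s_high, b_low=150.0, b_high=800.0)   # shape (2, 32, 352, 352)
```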
The input feature maps (F_concat) are passed through a max-pooling layer and an average-pooling layer, the two pooled descriptors are fed through a shared fully connected neural network and added element-wise, and the weight map A_s is finally obtained after the activation function, as shown in Eq. <ref>. F_concat^'∈ℝ^C×H×W=F_concat⊗A_s, with A_s=σ(f_θ^fc(AvgPool(F_concat))⊕f_θ^fc(MaxPool(F_concat))), where ⊗ represents element-wise multiplication, ⊕ represents element-wise summation, σ represents the sigmoid function, θ^fc represents the parameters of the shared fully-connected neural network, and AvgPool and MaxPool represent the average pooling and maximum pooling operations, respectively. In the synthesis process, the generator 𝒢 generates an image from the input multi-sequence MRI (d_1, d_2, d_3, d_4, t_1), while the discriminator 𝒟 tries to distinguish the generated image G(d_1, d_2, d_3, d_4, t_1) from the real image y; at the same time, the generator tries to produce a realistic image that misleads the discriminator. The generator's objective function is as follows: ℒ_G(G,D)= E_d_1,d_2,d_3,d_4,t_1∼pro_data(d_1,d_2,d_3,d_4,t_1) [log(1-D(d_1,d_2,d_3,d_4,t_1,G(d_1,d_2,d_3,d_4,t_1)))] +λ_1E_d_1,d_2,d_3,d_4,t_1,y[‖y-G(d_1,d_2,d_3,d_4,t_1)‖_1] and the discriminator's objective function is as follows: ℒ_D(G,D)= E_y∼pro_data(y)[logD(y)] +E_d_1,d_2,d_3,d_4,t_1∼pro_data(d_1,d_2,d_3,d_4,t_1) [log(1-D(G(d_1,d_2,d_3,d_4,t_1)))] where pro_data(d_1,d_2,d_3, d_4, t_1) represents the empirical joint distribution of the inputs d_1 (DWI_b0), d_2 (DWI_b150), d_3 (DWI_b800), d_4 (DWI_b1500) and t_1 (T1-weighted MRI), λ_1 is a non-negative trade-off parameter, and the l_1-norm is used to measure the difference between the generated image and the corresponding ground truth. The discriminator architecture comprises five convolutional layers, each using 3 × 3 filters with stride 2. Each convolutional layer is followed by batch normalization and the activation function LeakyReLU (with a slope of 0.2). The numbers of filters are 32, 64, 128, 256 and 512, respectively. §.§ Visualization The T1-weighted and contrast-enhanced images were subtracted to obtain a difference MRI that clearly reveals the enhancing regions in the CE-MRI. §.§ Experiment settings Based on an 8:2 ratio, the training set and the independent test set of the in-house dataset contain 612 and 153 cases, respectively. The trade-off parameter λ_1 was set to 100 during training, and the trade-off parameter of the reconstruction loss in the reconstruction module was set to 5. Breast masks were used for all cases (weighted by a factor of 100 when calculating the loss between generated and real CE-MRI) to reduce the influence of signals in the thoracic area. The batch size was set to 8 for 100 epochs, and the initial learning rate was 1e-3 with a decay factor of 0.8 every 5 epochs (total run time was about 60 hours). The Adam optimizer was used to update the model parameters. MMgSN-Net <cit.> and the method of Chung et al. <cit.> were used as comparison models, and all models were trained on an NVIDIA RTX A6000 48 GB GPU. §.§ Evaluation metrics Results were analyzed with Python 3.7.
The Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Normalized Mean Squared Error (NMSE) were used as metrics, with the following formulas: SSIM=(2μ_y(x)μ_G(x)+c_1)(2σ_y(x)G(x)+c_2)/(μ_y(x)^2+μ_G(x)^2+c_1)(σ_y(x)^2+σ_G(x)^2+c_2) PSNR=10log_10[max^2(y(x), G(x))/(1/N‖y(x)-G(x)‖_2^2)] NMSE=‖y(x)-G(x)‖_2^2/‖y(x)‖_2^2 where G(x) represents a generated image, y(x) represents a ground-truth image, μ_y(x) and μ_G(x) are the means of y(x) and G(x), σ_y(x)^2 and σ_G(x)^2 are the variances of y(x) and G(x), σ_y(x)G(x) is the covariance of y(x) and G(x), and c_1 and c_2 are positive constants used to avoid null denominators. § RESULTS First, we compare the performance of existing methods on synthesizing CE-MRI using our source data; the quantitative metrics are PSNR, SSIM, and NMSE. As shown in Table <ref>, the SSIM of MMgSN-Net <cit.> and of the method of Chung et al. <cit.> in synthesizing ceT1 MRI is 86.61 ± 2.52 and 87.58 ± 2.68, respectively; the PSNR is 26.39 ± 1.38 and 27.80 ± 1.56, respectively; and the NMSE is 0.0982 ± 0.038 and 0.0692 ± 0.035, respectively. In contrast, our proposed multi-sequence fusion model achieves a better SSIM of 89.93 ± 2.91, a better PSNR of 28.92 ± 1.63, and a better NMSE of 0.0585 ± 0.026 in synthesizing ceT1 MRI, outperforming the existing comparison models. MMgSN-Net <cit.> combined T1-weighted and T2-weighted MRI to synthesize CE-MRI. Here we combined T1-weighted MRI and DWI with a b-value of 0 according to their method, but the model did not perform well. This may be because their model can only combine two modalities and cannot integrate the features of all sequences, so it cannot exploit the difference features between multiple b-values, which limits its performance. In addition, although the method of Chung et al. <cit.> used full-sequence MRI to synthesize CE-MRI, it would be advantageous to obtain synthetic CE-MRI images using as little data as possible, taking advantage of the most informative sequences. They did not exploit multi-b-value DWI, nor did they use a hierarchical fusion module to fully fuse the hierarchical features of multi-sequence MRI. As described in the Methods, the proposed model consists of several key components, including a hierarchical fusion generation module, a weighted difference module, and a multi-sequence attention module. Therefore, ablation studies were performed to demonstrate the importance and effectiveness of these three key components. Several network structures were selected for comparison, as follows: (1) an input-level fusion network without the other modules (IF-Net), (2) a hierarchical fusion generation network combined with the reconstruction module, without the weighted difference module and the multi-sequence attention module (HF-Net), (3) a hierarchical fusion generation network with the weighted difference module (HFWD-Net), and (4) a hierarchical fusion generation network with both the weighted difference module and the multi-sequence attention module (the proposed model). As shown in Table <ref>, IF-Net achieves an SSIM of 87.25 ± 2.62 and a PSNR of 26.51 ± 1.52 in synthesizing ceT1 MRI, while HF-Net achieves an SSIM of 88.32 ± 2.70 and a PSNR of 27.95 ± 1.59. After adding the weighted difference module, the SSIM and PSNR improved to 89.18 ± 2.73 and 28.45 ± 1.61, respectively.
Finally, the addition of the multi-sequence attention module further improved the performance of the model, with an SSIM of 89.93 ± 2.91, a PSNR of 28.92 ± 1.63, and an NMSE of 0.0585 ± 0.026. Visualization results for random samples are shown in Figure <ref>. The visualizations show that, after taking the difference between the generated CE-MRI and the original T1-weighted MRI, the breast lesion is highlighted at the same position as the real enhancement. See the Supplementary Material for more visualization results, including breast CE-MRI synthesized in the axial, coronal, and sagittal planes. § CONCLUSION We have developed a multi-sequence fusion network based on multi-b-value DWI to synthesize CE-MRI, using source data comprising DWIs and T1-weighted fat-suppressed MRI. Compared with existing methods, we avoid the cost of acquiring full-sequence MRI and instead focus on the most valuable source data, DWI. The hierarchical fusion generation module, the weighted difference module, and the multi-sequence attention module were all shown to improve the performance of synthesizing the target images by addressing synthesis at different scales and leveraging difference information within and across sequences. Given that current research on synthetic CE-MRI is relatively sparse and challenging, our study provides a novel approach that may be instructive for future research based on DWIs. Our future work will include reader studies to verify the clinical value of this research in downstream applications, such as helping radiologists detect tumors. The proposed model can potentially be used to synthesize CE-MRI, which is expected to reduce or avoid the use of GBCA, thereby optimizing logistics and minimizing potential risks to patients.
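For reference, the multi-sequence attention module described in the Methods can be written compactly in PyTorch. The sketch below follows the equations for A_s and F'_concat (a shared fully connected network over average- and max-pooled channel descriptors, sigmoid gating, and channel-wise reweighting); the reduction ratio and example channel count are illustrative assumptions.

```python
# Minimal PyTorch sketch of the multi-sequence (channel) attention module:
# A_s = sigmoid(FC(AvgPool(F)) + FC(MaxPool(F))),  F' = F * A_s.
# The reduction ratio and example channel count are assumptions.
import torch
import torch.nn as nn

class MultiSequenceAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        # Shared fully connected network f_theta^fc applied to both descriptors.
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, f_concat):                        # (B, C, H, W)
        b, c, _, _ = f_concat.shape
        a_avg = self.fc(self.avg_pool(f_concat).view(b, c))
        a_max = self.fc(self.max_pool(f_concat).view(b, c))
        a_s = self.sigmoid(a_avg + a_max).view(b, c, 1, 1)   # channel weights A_s
        return f_concat * a_s                           # refined feature map F'

# Example: reweight features concatenated from the five input sequences
# (four DWIs and the T1-weighted image), 32 channels each.
attn = MultiSequenceAttention(channels=5 * 32)
refined = attn(torch.randn(2, 5 * 32, 88, 88))
```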
http://arxiv.org/abs/2307.03324v1
20230706223228
The one-message-per-cell-cycle rule: A conserved minimum transcription level for essential genes
[ "Teresa W. Lo", "Han Kyou James Choi", "Dean Huang", "Paul A. Wiggins" ]
physics.bio-ph
[ "physics.bio-ph" ]
Department of Physics, University of Washington, Seattle, Washington 98195, USA pwiggins@uw.edu Department of Physics, University of Washington, Seattle, Washington 98195, USA Department of Bioengineering, University of Washington, Seattle, Washington 98195, USA Department of Microbiology, University of Washington, Seattle, Washington 98195, USA The inherent stochasticity of cellular processes leads to significant cell-to-cell variation in protein abundance. Although this noise has already been characterized and modeled, its broader implications and significance remain unclear. In this paper, we revisit the noise model and identify the number of messages transcribed per cell cycle as the critical determinant of noise. In yeast, we demonstrate that this quantity predicts the non-canonical scaling of noise with protein abundance, as well as quantitatively predicting its magnitude. We then hypothesize that growth robustness requires an upper ceiling on noise for the expression of essential genes, corresponding to a lower floor on the transcription level. We show that just such a floor exists: a minimum transcription level of one message per cell cycle is conserved between three model organisms: Escherichia coli, yeast, and human. Furthermore, all three organisms transcribe the same number of messages per gene, per cell cycle. This common transcriptional program reveals that robustness to noise plays a central role in determining the expression level of a large fraction of essential genes, and that this fundamental optimal strategy is conserved from E. coli to human cells. The one-message-per-cell-cycle rule: A conserved minimum transcription level for essential genes Paul A. Wiggins ================================================================================================== § INTRODUCTION All molecular processes are inherently stochastic on a cellular scale, including the processes of the central dogma, responsible for gene expression <cit.>. As a result, the expression of every protein is subject to cell-to-cell variation in abundance <cit.>. Many interesting proposals have been made to describe the potential biological significance of this noise, including bet-hedging strategies, the necessity of feedback in gene regulatory networks, etc <cit.>. However, it is less clear to what extent noise plays a central role in determining the function of the gene expression process more generally. For instance, Hausser et al. have described how the tradeoff between economy (e.g. minimizing the number of transcripts) and precision (minimizing the noise) explains why genes with high transcription rates and low translation rates are not observed <cit.>. Although these results suggest that noise may provide some coarse limits on the function of gene expression, this previous work does not directly address a central challenge posed by noise: How does the cell ensure that the lowest expression essential genes, which are subject to the greatest noise, have sufficient abundance in all cells for robust growth? To investigate this question, we first focus on noise in Saccharomyces cerevisiae (yeast), and find that the noise scaling with protein abundance is not canonical. We re-analyze the canonical stochastic kinetic model for gene expression, the telegraph model <cit.>, to understand the relationship between the underlying kinetic parameters and the distribution of protein abundance in the cell. 
As previously reported, we find that the protein abundance for a gene is described by a gamma distribution with two parameters: the message number, defined as the total gene message number transcribed per cell cycle, and the translation efficiency, which is the mean protein number translated per message. Protein expression noise is completely determined by the message number <cit.>. Although these results have been previously reported, the distinction between message number per cell versus per cell cycle and even between mean protein number and mean message number is often neglected (e.g. <cit.>). To explore the distinction between these parameters and provide clear evidence of the importance of the message number, we return to the analysis of noise in yeast. In yeast, the translation efficiency increases with message number <cit.>. By fitting an empirical model for the translation efficiency, we demonstrate that the noise should scale with a half-power of protein abundance. We demonstrate that this non-canonical scaling is observed and that our translation model makes a parameter-free prediction for the noise. The prediction is in close quantitative agreement with observation <cit.>, confirming that the message number is the key determinant of noise strength. Finally, we use this result to explore the hypothesis that there is a minimum expression level for essential genes, dictated by noise. The same mean expression level can be achieved by a wide range of different translation and transcription rates with different noise levels. We hypothesize that growth robustness requires that essential genes (but not non-essential genes) are subject to a floor expression level, below which there is too much cell-to-cell variation to ensure growth. To test this prediction, we analyze transcription in three model organisms, Escherichia coli, yeast, and Homo sapiens (human), with respect to three related gene characteristics: transcription rate, cellular message number, and message number per cell cycle. As predicted by the noise-based mechanism, we observe an organism-independent floor for the number of messages transcribed per cell cycle for essential genes, but not non-essential genes. We conclude that virtually all essential genes are transcribed at a rate of at least once per cell cycle. This analysis strongly supports the hypothesis that the same biological optimization imperatives, which determine the transcription rates of many low-expression genes, are conserved from E. coli to human. § RESULTS Implications of noise on growth robustness. With the realization of the stochasticity of central dogma processes, a key question is how cells can grow robustly in spite of cell-to-cell variations in protein expression. The noise in protein abundance is defined as the coefficient of variation squared <cit.>: CV^2_p ≡σ^2_p/μ^2_p, where σ^2_p is the variance of protein number and μ_p ≡N_p its mean. It is important to emphasize that protein abundance must double between birth and cell division in symmetrically dividing cells during steady state growth. The protein abundance should therefore be interpreted either as expression per unit volume <cit.> or the abundance associated with cells of a defined volume <cit.>. The coefficient of variation is inversely related to protein abundance and therefore low-copy proteins have the highest noise <cit.>. The challenge faced by the cell is that many essential proteins, strictly required for cell growth, are relatively low abundance. 
How does the cell ensure sufficient protein abundance in spite of cell-to-cell variation in protein number? It would seem that growth robustness demands that, for essential proteins, the mean should be greater than the standard deviation: CV^2_p < 1, in order to ensure that protein abundance is sufficiently high to avoid growth arrest. To what extent do essential proteins obey this noise threshold? What determines the strength of the noise? Usually, noise is argued to be proportional to inverse protein abundance (e.g. <cit.>): CV^2_p ∝μ_p^-1, for low-abundance proteins, motivated both by theoretical and experimental results <cit.> and in some cases obeying a low-translation-efficiency limit <cit.>: CV^2_p ≈μ_p^-1. Can this model be used to make quantitative predictions of the noise? E.g., is the scaling of Eq. <ref> correct? Can the coefficient of proportionality be predicted? Although Eq. <ref> appears to describe E. coli quite well <cit.>, the situation in yeast is more complicated [Although there have been claims that Eq. <ref> is consistent with the data <cit.>, these authors did not fit competing models, nor did they perform a proteome-wide analysis of protein abundance and noise.]. To analyze the statistical significance of the deviation from the canonical noise model in yeast, we can fit an empirical model to the noise <cit.>: CV_p^2 = b/μ_p^a+c. In the null hypothesis, a=1 (canonical scaling), while b and c are unknown parameters. c corresponds to the noise floor. In the alternative hypothesis, all three coefficients are unknown. (A detailed description of the statistical model is given in the Supplemental Material Sec. <ref>.) The canonical model fails to fit the noise data for yeast as reported by Newman et al. <cit.>: The null hypothesis is rejected with p-value p = 6× 10^-36. The model fit to the data is shown in Fig. <ref>. The estimated scaling exponent for protein abundance in the alternative hypothesis is a = 0.57± 0.02, and a detailed description of the statistical model and parameter fits is provided in Supplementary Material Sec. <ref>. As shown in Fig. <ref>, even from a qualitative perspective, the scaling of the yeast noise at low copy number is much closer to μ_p^-1/2 than to the canonical assumption μ_p^-1 (Eq. <ref>). In particular, above the detection threshold, the noise is always larger than the low-translation-efficiency limit (Eq. <ref>). Stochastic kinetic model for central dogma. To understand the failure of the canonical assumptions, we revisit the underlying model. The telegraph model for the central dogma describes multiple steps in the gene expression process: Transcription generates mRNA messages <cit.>. These messages are then translated to synthesize the protein gene products <cit.>. Both mRNA and protein are subject to degradation and dilution <cit.>. (See Fig. <ref>A.) At the single-cell level, each of these processes is stochastic. We will model these processes with the stochastic kinetic scheme <cit.>: DNA →(β_m) mRNA →(β_p) Protein, with mRNA →(γ_m) ∅ and Protein →(γ_p) ∅, where β_m is the transcription rate (s^-1), β_p is the translation rate (s^-1), γ_m is the message degradation rate (s^-1), and γ_p is the protein effective degradation rate (s^-1). The message lifetime is τ_m≡γ_m^-1. For most proteins in the context of rapid growth, dilution is the dominant mechanism of protein depletion and therefore γ_p is approximately the growth rate <cit.>: γ_p = T^-1ln 2, where T is the doubling time. We will discuss a more general scenario below.
Statistical model for protein abundance. To study the stochastic dynamics of gene expression, we used a stochastic Gillespie simulation <cit.>. (See Supplemental Material Sec. <ref>.) In particular, we were interested in the explicit relation between the kinetic parameters (β_m, γ_m, β_p, γ_p) and experimental observables. Consistent with previous reports <cit.>, we find that the distribution of protein number per cell (at cell birth) is described by a gamma distribution: N_p ∼Γ(θ_Γ,k_Γ), where N_p is the protein number at cell birth and Γ is the gamma distribution, which is parameterized by a scale parameter θ_Γ and a shape parameter k_Γ. (See Supplementary Material Sec. <ref>.) The relations between the four kinetic parameters and these two statistical parameters have already been reported and have clear biological interpretations <cit.>: The scale parameter, θ_Γ = εln 2, is proportional to the translation efficiency: ε≡β_p/γ_m, where β_p is the translation rate and γ_m is the message degradation rate. ε is understood as the mean number of proteins translated from each message transcribed. The shape parameter k_Γ can also be expressed in terms of the kinetic parameters <cit.>: k_Γ = β_m/γ_p; however, we will find it more convenient to express the shape parameter in terms of the cell-cycle message number: μ_m≡β_m T = k_Γln 2, which can be interpreted as the mean number of messages transcribed per cell cycle. Forthwith, we will abbreviate this quantity as the message number in the interest of brevity. In terms of the two gamma parameters, the mean and the squared coefficient of variation are: μ_p = k_Γθ_Γ = μ_mε and CV^2_p = 1/k_Γ = ln 2/μ_m, where the noise depends on the message number (μ_m), not the mean protein number (μ_p). (Eq. <ref> only applies when ε≫ 0 <cit.>.) Are these theoretical results consistent with the canonical model (Eq. <ref>)? We can rewrite the noise in terms of the protein abundance and translation efficiency: CV_p^2 = εln 2/μ_p, which implies that the canonical model only applies when the translation efficiency (ε) is independent of expression (μ_p). Measuring the message number. The prediction for the noise (Eq. <ref>) depends on the message number (μ_m). However, mRNA abundance is typically characterized by a closely related, but distinct, quantity: quantitative RNA-Seq and methods that visualize fluorescently-labeled mRNA molecules typically measure the number of messages per cell <cit.>. We will call the mean of this number the cellular message number μ_m/c. In the kinetic model, these different message abundances are related by μ_m = (T/τ_m) μ_m/c, where the message recycling ratio, T/τ_m, can be interpreted as the average number of times messages are recycled during the cell cycle. To estimate the message number, we will scale the observed cellular message number μ_m/c by the message recycling ratio, using the mean message lifetime. Fig. <ref>C illustrates the difference between the message number and the cellular message number. The mean lifetimes, message recycling ratios, and total message numbers for the three model organisms are shown in Tab. <ref>. Construction of an empirical model for protein number. To model the noise as a function of protein abundance (μ_p), we will determine the empirical relation between mean protein levels and message abundance by fitting to Eq. <ref>. Note that the objective here is only to estimate μ_m from μ_p, not to model the process mechanistically (e.g. <cit.>).
The message numbers are estimated from RNA-Seq measurements, scaled as described above (Eq. <ref>). The protein abundance numbers come from fluorescence- and mass-spectrometry-based assays <cit.>, with the overall normalization chosen to match the reported total cellular protein content. (See Supplemental Material Sec. <ref>.) The resulting fit generates our empirical translation model for yeast: μ_p = 8.0 μ_m^2.1, where both means are in units of molecules. (An error analysis for both model parameters is described in Supplementary Material Sec. <ref>.) The data and model are shown in Fig. <ref>A. Prediction of the noise scaling with abundance. Now that we have fit an empirical model that relates μ_p and μ_m, we return to the problem of predicting the yeast noise. We apply the relation (Eq. <ref>) to Eq. <ref> to make a parameter-free prediction of the noise as a function of protein abundance: CV_p^2 = 1.9 μ_p^-0.48. An error analysis for both model parameters is described in Supplementary Material Sec. <ref>. Our noise model (Eq. <ref>) makes both a qualitative and a quantitative prediction: (i) From a qualitative perspective, the model suggests that the μ_p exponent should be roughly 1/2 for yeast, rather than the canonically assumed scaling exponent of 1. (ii) From a quantitative perspective, the model also predicts the coefficient of proportionality if the empirical relation between protein and message abundances is known (Eq. <ref>). Observed noise in yeast matches the predictions of the empirical model. Newman et al. have characterized protein noise by flow cytometry of strains expressing fluorescent fusions expressed from their endogenous promoters <cit.>. The comparison of these data to the prediction of the statistical expression model (Eq. <ref>) is shown in Fig. <ref>. From a qualitative perspective, the predicted scaling exponent of -0.48 comes very close to capturing the scaling of the noise, as determined by direct fitting of the empirical noise model (Eq. <ref> and Fig. <ref>). From a quantitative perspective, the predicted coefficient of Eq. <ref> also fits the observed noise. From both the statistical analysis (Eq. <ref>) and visual inspection (Fig. <ref>C), it is clear that the noise in yeast does not obey the canonical model (Eq. <ref>). However, the noise in E. coli does obey the canonical model for low-copy messages <cit.>. (See Fig. <ref>C.) Why does the noise scale differently in the two organisms? The key difference is that the empirical relation between the protein and message numbers is different in the two organisms. In E. coli, μ_p ∝μ_m^1 <cit.>. Our analysis therefore predicts that the canonical model (Eq. <ref>) should hold for E. coli, but not for yeast, as illustrated schematically in Fig. <ref>. (Additional discussion can be found in the Supplementary Material Sec. <ref>.) Implications of growth robustness for translation. Before continuing with the noise analysis, we pause to focus on the significance of the empirical relationship between the protein and message numbers (Eq. <ref>). How can the cell counteract noise-induced reductions in robustness? Eq. <ref> implies that gene expression can be thought of as a two-stage amplifier <cit.>: The first stage corresponds to transcription, with a gain given by the message number μ_m, and the second stage corresponds to translation, with a gain given by the translation efficiency ε. (See Fig. <ref>AB.) The noise is completely determined by the first stage of amplification, provided that ε≫ 0 <cit.>. Genes with low transcription levels are the noisiest.
For these genes, the cell can achieve the same mean gene expression (μ_p) with lower noise by increasing the gain of the first stage (increasing message number) and decreasing the gain of the second stage (the translation efficiency) by the same factor. This is most clearly understood by reducing ε at fixed μ_p in Eq. <ref>. Highly transcribed genes have low noise and can therefore tolerate higher translation efficiency in the interest of economy (decreasing the total number of messages) <cit.>. Growth robustness therefore predicts that the translation efficiency should grow with transcription level. Translation efficiency increases with expression level in yeast. The translation efficiency (Eq. <ref>) can be determined from the empirical translation model (Eq. <ref>): ε = 8.0 μ_m^1.1, as a function of message number. (An error analysis for both model parameters is described in Supplementary Material Sec. <ref>.) In yeast, the translation efficiency clearly has a strong dependence on message number μ_m, and grows with the expression level, exactly as predicted by robustness arguments. We note the contrast to the translation efficiency in E. coli, which is roughly constant <cit.>. (See Supplementary Material Sec. <ref>.) We will speculate about the rationale for these differences in the discussion below. Implications of growth robustness for transcription. In addition to the prediction of translation efficiency depending on transcription, a second qualitative prediction of growth robustness is that essential gene expression should have a noise ceiling, or maximum noise level (Eq. <ref>), where noise above this level would be too great for robust growth. The fit between the statistical model and the observed noise has an important implication beyond confirming the predictions of the telegraph and statistical models for noise: The identification of the message number, μ_m, as the key determinant of noise allows us to use this quantity as a proxy for noise in quantitative transcriptome analysis. To identify a putative transcriptional floor, we now broaden our consideration beyond yeast to characterize the central dogma in two other model organisms: the bacterium Escherichia coli and Homo sapiens (human). We will also analyze three different transcriptional statistics for each gene: transcription rate (β_m), cellular message number (μ_m/c), and message number (μ_m). Analysis of these organisms explores orders-of-magnitude differences in characteristics of the central dogma, including total message number, protein number, doubling time, message lifetime, and number of essential genes. (See Tab. <ref>.) In particular, as a consequence of these differences, the three statistics describing transcription: transcription rate, cellular message number and message number are all distinct. Genes with matching message numbers in two different organisms will not have matching transcription rates or cellular message numbers. We hypothesize that cells must express essential genes above some threshold message number for robust growth; however, we expect to see that non-essential genes can be expressed at much lower levels since growth is not strictly dependent on their expression. The signature of a noise-robustness mechanism would be the absence of essential genes for low message numbers. No organism-independent threshold is observed for transcription rate or cellular message number. Histograms of the per-gene transcription rate and cellular message number are shown in Fig. <ref> for E. 
coli, yeast, and human. Consistent with existing reports, essential genes have higher expression than non-essential genes on average; however, there does not appear to be any consistent threshold in E. coli (even between growth conditions), yeast, or human transcription, either as characterized by the transcription rate (β_m) or by the cellular message number (μ_m/c). For instance, the per-gene rate of transcription is much lower in human cells than in E. coli under rapid growth conditions, with yeast falling in between. An organism-independent threshold is observed for message number for essential genes. In contrast to the other two transcriptional statistics, there is a consistent lower limit, or floor, on the message number (μ_m) of somewhere between 1 and 10 messages per cell cycle for essential genes. (See Fig. <ref>.) Non-essential genes can be expressed at a much lower level. This floor is consistent not only between E. coli growing under two different conditions, but also between the three highly divergent organisms: E. coli, yeast and human. We will conservatively define the minimum message number as μ_m^ min≡ 1, and summarize this observation as the one-message-per-cell-cycle rule for essential gene expression. In addition to the common floor for essential genes, there is a common shape of the gene-expression distribution over message number shared between organisms, especially for low-expression essential genes. This is observed in spite of the significantly larger number of essential genes in human relative to E. coli. (See Fig. <ref>.) Interestingly, there is also a similarity between the non-essential gene distributions for E. coli and human, but not for yeast, which appears to have a much lower fraction of genes expressed at the lowest message numbers. What genes fall below threshold? We have hypothesized that essential genes should be expressed above a threshold value for robustness. It is therefore interesting to consider the function of genes that fall below this proposed threshold. Do the functions of these genes give us any insight into essential processes that do not require robust gene expression? Since our own preferred model system is E. coli, we focus on it here. Our essential-gene classification was based on the construction of the Keio knockout library <cit.>. By this classification, 10 essential genes were below threshold. (See Supplementary Material Tab. <ref>.) Our first step was to determine what fraction of these genes were also classified as essential using transposon-based mutagenesis <cit.>. Of the 10 initial candidates, only one gene, ymfK, was consistently classified as an essential gene in all three studies, and we estimate that its message number is just below the threshold (μ_m = 0.4). ymfK is located in the lambdoid prophage element e14 and is annotated as a CI-like repressor which regulates the lysis-lysogeny decision <cit.>. In λ phage, the CI repressor represses lytic genes to maintain the lysogenic state. A conserved function for ymfK is consistent with it being classified as essential, since its regulation would prevent cell lysis. However, since ymfK is a prophage gene, not a host gene, it is not clear that its expression should optimize host fitness, potentially at the expense of phage fitness. In summary, closer inspection of below-threshold essential genes supports the threshold hypothesis. Maximum noise for essential genes.
The motivation for hypothesizing a minimum threshold for message number was noise-robustness, or the existence of a hypothesized noise ceiling above which essential gene expression is too noisy to allow robust cellular proliferation. With the one-message-per-cell-cycle rule, μ_m^min≡ 1, we can estimate the essential gene noise ceiling using Eq. <ref>: CV_p^2 ≤ 0.7, for essential genes. Since noise depends only on the message number, we expect to observe the same limit in all organisms if the message number floor is conserved. Estimating the floor on central-dogma parameters. If message number floor is conserved, a limit can be estimated for the floor value on other transcriptional parameters. Using Eq. <ref>, we can estimate the floor on the cellular message number (as measured in RNA-Seq measurements): μ_m/c^ min = τ_m/T, for essential genes. Similarly, we can use Eq. <ref> to estimate the minimum transcription rate: β_m^ min = 1/T, for essential genes. Again, this result has an intuitive interpretation as the one-message-per-cell-cycle rule. Finally, we can estimate a floor on essential protein abundance, assuming a constant translation efficiency using Eq. <ref>: μ_p^ min = ε, for essential genes, where ε is the translation efficiency (which we will assume is well approximated by the mean in the context of the estimate). All four floor estimates for each model organism are shown in Tab. <ref>. § DISCUSSION Noise by the numbers. Although there has already been significant discussion of the scaling of biological noise with protein abundance <cit.>, our study is arguably the first to test the predictions of the telegraph and statistical noise models against absolute measurements of protein and message abundances. This approach is particularly important for the message number (μ_m), which determines the magnitude of the noise in protein expression, and facilitates direct comparisons of noise between organisms as well as identifying the common distributions of message number for genes, that are conserved from bacteria to human. Noise scaling in E. coli versus yeast. A key piece of evidence for the significance of the message number was the observation of the non-canonical scaling of the yeast noise with protein abundance (Fig. <ref>); however, the canonical model (Eq. <ref>) does accurately describe the noise in E. coli (see Fig. <ref>). Why does the noise scale differently? In E. coli, the translation efficiency is only weakly correlated with the gene expression <cit.>, and therefore the canonical model is a reasonable approximation (Supplementary Material Sec. <ref>). However, we also argued that translation efficiency should grow with expression level. Why is this not observed in E. coli? Due to the high noise floor in E. coli, nearly all essential genes are expressed at a sufficiently high expression level such that the noise is dominated by the noise floor <cit.>. As a consequence, increasing the message number, while decreasing translation efficiency, does not decrease the noise even as it increases the metabolic load as a result of increased transcription. (A closely related point has recently been made in Bacillus subtilis <cit.>, where Deloupy et al. report that the noise cannot be tuned by adjusting the message number due to the noise floor.) Our expectation is therefore that other bacterial cells will look similar to E. coli: They will have a higher noise floor and a similar scaling of noise with protein abundance. 
In contrast, due to the lower noise floor, we expect eukaryotic cells to optimize the central dogma processes like yeast and as a result will have a similar non-canonical scaling of noise with protein abundance. Although this non-canonical scaling is clear from the abundance data (Fig. <ref>B), there is an important qualification to emphasize: the mechanism that gives rise to the non-canonical scaling is due to the correlation between translation efficiency and transcription. Regulatory changes that effect only transcription (i.e. increase μ_m) and not translation (ε) should obey the canonical noise model (Eq. <ref>). This scenario may help explain why Bar-Even et al. claim to observe canonical noise scaling in yeast <cit.>, studying a subset of genes under a range of conditions resulting in differential expression levels. The failure of the canonical noise model (Eq. <ref>) at the proteome level in yeast (Eq. <ref>) is a consequence of genome-wide optimization of the relative transcription and translation rates. Essential versus non-essential genes. What genes are defined as essential is highly context specific <cit.>. It is therefore important to consider whether the comparison between these two classes of genes is informative in the context of our analysis. We believe the example of lac operon in E. coli is particularly informative in this respect. The genes lacZYA are conditionally essential: they are required when lactose is the carbon source; however, these genes are repressed when glucose is the carbon source. Our expectation is that these conditionally essential genes will obey the one-message-per-cell-cycle rule when these genes are required; however, they need not obey this rule when the genes are repressed. By analyzing essential genes, we are limiting the analysis to transcriptionally-active genes, whereas the non-essential category contains both transcriptionally-active and silenced genes. Protein degradation and transcriptional bursting. Two important mechanisms can act to significantly increase the noise above the levels we predict: protein degradation and transcriptional bursting. Although the dominant mechanism of protein depletion is dilution in E. coli, protein degradation plays an important role in many organisms, especially in eukaryotic cells <cit.>. If protein degradation depletes proteins faster than dilution, the shape parameter decreases below our estimate (Eq. <ref>), increasing the noise. Likewise, the existence of transcriptional bursting, in which the chromatin switches between transcriptionally active and quiescent periods, can also act to increase the noise <cit.>. Since the presence of both these mechanisms increases the noise beyond what is predicted by the message number, they do not affect our estimate of the minimum threshold for μ_m. The biological implications of noise. What are the biological implications of gene expression noise? Many important proposals have been made, including bet-hedging strategies, the necessity of feedback in gene regulatory networks, etc <cit.>. Our analysis suggests that noise influences the optimal function of the central dogma process generically. Hausser et al. have already discussed some aspects of this problem and use this approach to place coarse limits on transcription versus translation rates <cit.>. The transcriptional floor for essential genes that we have proposed places much stronger limits on the function of the central dogma. 
Although we describe our observations as a floor, a more nuanced description of the phenomenon is a common distribution of gene message numbers, peaked at roughly 15 messages per cell cycle and cutting off close to one message per cell cycle. Does this correspond to a hard limit? We expect that this does not since there are a small fraction of genes, classified as essential, just below this limit; however, it does appear that virtually all essential genes have optimal expression levels above this threshold. The common distribution of message number clearly suggests that noise considerations shape the function of the central dogma for virtually all genes. Exploring this hypothesis will require quantitative models that explicitly realize the high cost of noise-induced low essential-protein abundance. We will present such an analysis elsewhere. Adapting the central dogma to increased cell size and complexity. Although core components of the central dogma machinery are highly-conserved, there has been significant complexification of both the transcriptional and translational processes in eukaryotic cells <cit.>. Given this increased regulatory complexity, it is unclear how the central dogma processes should be adapted in larger and more complex cells. An important clue to this adaptation comes from E. coli proliferating with different growth rates. Although there are very significant differences between the cellular message number as well as the overall transcription rate under the two growth conditions, there is very little difference in message number. In short, roughly the same number of messages are made during the cell cycle, but they are made more slowly under slow growth conditions. How does this picture generalize in eukaryotic cells? Although both the total number of messages and the number of essential and non-essential genes are larger in both yeast and human cells, the distribution of the message number per gene is essentially the same as E. coli (Fig. <ref>). The conservation of the message number between organisms is consistent with all of these organisms being optimized with respect to the same trade-off between economy and robustness to noise. Data availability. We include a source data file which includes the estimated message numbers as well as essential/nonessential classifications for each organism. Acknowledgments. The authors would like to thank B. Traxler, A. Nourmohammad, J. Mougous, K. Cutler, M. Cosentino-Lagomarsino, S. van Teeffelen, and S. Murray. This work was supported by NIH grant R01-GM128191. Author contributions: T.W.L., H.K.J.C., D.H. and P.A.W. conceived the research. T.W.L. and P.A.W. performed the analysis. H.K.J.C. and D.H. performed experiments and analysis. T.W.L., H.K.J.C., D.H. and P.A.W. wrote the paper. Competing interests: The authors declare no competing interests. naturemag § SUPPLEMENTAL ANALYSIS §.§ Gillespie Simulation of the telegraph model Protein distributions of the telegraph model for E. coli were simulated with a Gillespie algorithm. Assuming the lifetime of the cell cycle (T_cc = 30 min) <cit.>, mRNA lifetime (τ_m = 2.5 min) <cit.>, and translation rate (β_p ≈ 500 hr^-1), the protein distributions for several mean expression levels were numerically generated for exponential growth with 100,000 stochastic cell divisions, with protein partitioned at division following the binomial distribution. 
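To make the simulation procedure above concrete, the following is a minimal Python sketch of a Gillespie-type simulation of the constitutive (always-on) limit of the telegraph model with dilution at division, using the stated parameters (T_cc = 30 min, τ_m = 2.5 min, β_p ≈ 500 hr^-1 ≈ 8.3 min^-1). The choice of transcription rate (set by a target message number), the binomial partitioning of messages as well as proteins, and the number of simulated divisions are illustrative assumptions, not the exact settings used for the published figures.

```python
import numpy as np

def simulate_cell_line(mu_m=10, T_cc=30.0, tau_m=2.5, beta_p=500 / 60.0,
                       n_divisions=5_000, seed=0):
    """Gillespie simulation of constitutive mRNA/protein expression.

    mu_m   : target message number (messages transcribed per cell cycle)
    T_cc   : cell-cycle duration in minutes
    tau_m  : mRNA lifetime in minutes
    beta_p : translation rate per message (proteins/message/min)
    Returns protein counts sampled at each division.
    """
    rng = np.random.default_rng(seed)
    beta_m = mu_m / T_cc          # transcription rate giving mu_m messages per cycle
    m, p, t = 0, 0, 0.0           # mRNA count, protein count, time
    t_next_div = T_cc
    protein_at_division = []

    while len(protein_at_division) < n_divisions:
        # Propensities: transcription, mRNA decay, translation
        rates = np.array([beta_m, m / tau_m, beta_p * m])
        total = rates.sum()
        dt = rng.exponential(1.0 / total)

        if t + dt >= t_next_div:
            # Division: binomial partitioning of messages and proteins (assumption)
            t = t_next_div
            t_next_div += T_cc
            protein_at_division.append(p)
            m = rng.binomial(m, 0.5)
            p = rng.binomial(p, 0.5)
            continue

        t += dt
        event = rng.choice(3, p=rates / total)
        if event == 0:
            m += 1                # transcription
        elif event == 1:
            m -= 1                # mRNA decay
        else:
            p += 1                # translation (no active protein degradation)

    return np.array(protein_at_division)

proteins = simulate_cell_line()
print(proteins.mean(), proteins.std() / proteins.mean())  # mean and CV
```

Histogramming the resulting protein counts and overlaying the gamma distribution described next provides a direct consistency check of the statistical model.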
The gamma distributions for each mean message number with scale and shape parameters determined by the corresponding translation efficiency and message number (θ = εln 2, k = μ_m/ln 2) as used for the Gillespie simulation were also plotted with the protein distributions. p(n | θ, k) = 1/Γ(k) θ^k n^k-1 e^-n/θ §.§ Selection of central dogma parameter estimates The estimates for central dogma model parameters come from two types of data: (i) quantitative measurement of cellular-scale parameters for each organism (total number of messages in the cell, cell cycle duration, etc) and (ii) genome-wide studies quantitative of mRNA and protein abundance. For the cellular-scale central dogma parameters, we relied heavily on an online compilation of biological numbers: BioNumbers <cit.>. This resource provides a collection of curated quantitative estimates for biological numbers, as well as their original source. In the interest of conciseness, we have cited only the original source in the Tab. <ref>, although we are extremely grateful and supportive of the creators of the BioNumbers website for helping us very efficiently identify consensus estimates for the parameters of the central dogma parameters. For the selection of genome-wide studies on abundance, we used many of the same resources cited in BioNumbers as well as studies selected by a previous study of a quantitative analysis of the central dogma: Hausser et al. <cit.>. §.§.§ E. coli data Message lifetimes: The message lifetimes (and median lifetime) were taken from a recent transcriptome-wide study by Chen et al. <cit.>. These investigators measured the lifetime in both rapid (LB) and slow growth (M9). Noise: Taniguchi et al. have performed a beautiful simultaneous study of the proteome and transcriptome with single-molecule sensitivity <cit.>. Although we use the noise analysis data from this study for our supplemental analysis of E. coli noise, it is not the source for our E. coli transcriptome data due to the extremely slow growth of the cells in this study (150 minute doubling time), which is not consistent with the growth conditions for the other sources of data. mRNA abundance: Instead, we used data from the more recent Bartholomaus et al. study <cit.>, which characterizes the transcriptome in both rapid (LB) and slow growth (M9). Total cellular message number. This study was chosen since it was the source of the BioNumbers estimates of cellular message number in E. coli (BNID 112795 <cit.>). Doubling time: The source of the doubling times for rapid (LB) and slow (M9) growth of E. coli comes from Bernstein <cit.>. Essential gene classification. The classification of essential genes in yeast comes from the construction of the Keio knockout collection from Baba et al. <cit.>. Protein number. The total protein number in E. coli came from Milo's recent review of this subject <cit.>. §.§.§ Yeast data Message lifetimes: The message lifetimes (and median lifetime) were taken from Chia et al. <cit.>. Noise: The noise data was taken from the Newman et al. study, which used flow cytometry of a library of fluorescent fusions to characterize protein abundance with single-cell resolution <cit.>. mRNA abundance: The transcriptome data comes from the very recent Blevins et al. study <cit.>. Total cellular message number. 
There are a wide-range of estimates for the total cellular message number in yeast: 1.5× 10^4 <cit.> (BNID 104312 <cit.>), 1.2×10^4 <cit.> (BNID 102988 <cit.>), 6.0× 10^4 <cit.> (BNID 103023 <cit.>), 2.6× 10^4 <cit.> (BNID 106763 <cit.>) and 3.0× 10^4 <cit.>. We used the compromise value of 2.9× 10^4. Doubling time: The doubling time was taken from <cit.>. Protein number. The total protein number in yeast comes from Futcher et al. <cit.>. Essential gene classification. The classification of essential genes in yeast comes from van Leeuwen et al. <cit.>. Proteome abundance data: The proteome abundance data came from two sources: flow cytometry of fluorescent fusions from Newman et al. <cit.> as well as mass-spec data from de Godoy et al. <cit.>. §.§.§ Human data Message lifetimes: The message lifetimes (and median lifetime) were taken from Yang et al. <cit.> who reported a median half life of 10 h which corresponds to a lifetime of 14 h. mRNA abundance: The transcriptome data comes from the data compiled by the Human Protein Atlas <cit.>, which we averaged over tissue types. Total cellular message number. The total cellular message number in human comes from Velculescu et al. <cit.> (BNID 104330 <cit.>). Doubling time: The doubling time was taken from <cit.>. Protein number. The total protein number in human came from Milo's recent review of this subject <cit.>. Essential gene classification. The classification of essential genes in human comes from Wang et al. <cit.>. §.§ Quantitative estimates of central dogma parameters §.§.§ Estimating the cellular message number: μ_m/c For each model organism (and condition), we found a consensus estimate from the literature for the total number of mRNA messages per cell N_m/c^ tot. This number and its source are provided in Tab. <ref>. To estimate the number of messages corresponding to gene i, we re-scaled the un-normalized abundance level r_i: N_m/c, i = N_m/c^ totr_i/∑_j r_j, where the sum over gene index j runs over all genes. §.§.§ Estimating the transcription rate: β_m To estimate the transcription rate for gene i, we start from the estimated cellular message number N_m/c,i and use the telegraph model prediction for the cellular message number: N_m/c,i = β_m,i/γ_m,i, where γ_m,i is the message decay rate. Since gene-to-gene variation in message number is dominated by the transcription rate (e.g <cit.>), we estimate the decay rate as the inverse gene-median message life time: γ_m,i = τ_m^-1, for which a consensus value was found from the literature. This number and its source are provided in Tab. <ref>. We then estimate the gene-specific transcription rate: β_m,i=N_m/c,i/τ_m. §.§.§ Estimating the message number: μ_m To estimate the message number of gene i, we use the predicted value from the telegraph model: N_m,i = T β_m,i = T/τ_m N_m/c,i, where T is the doubling time and N_m/c,i is the cellular message number (Eq. <ref>). §.§.§ Estimating the protein number: μ_p The protein abundance data for yeast grown in YEPD media and measured with flow cytometry fluorescence <cit.> were given in arbitrary units (AU). In order to convert from AU to protein number, the fluorescence values were rescaled by comparing with mass-spectrometry protein abundance data for yeast grown in YNB media <cit.>. Since the protein abundance from mass-spectrometry was given in terms of Intensity, the Intensity values were first rescaled by the total number of proteins in yeast, 5 × 10^7. 
The mass-spectrometry protein data was thresholded at 10 proteins, based on the assumption that the noise of the data for 10 and fewer proteins makes the data unreliable. Next, the log of the fluorescence protein abundance in AU as a function of the log of thresholded mass-spectrometry protein abundance was fit as a linear function with an assumed slope of 1 to find the offset, 3.9, (Fig. <ref>) which corresponds to a multiplicative scaling factor (Eqn. <ref>). We then used that offset value to rescale the fluorescence data from AU to protein number. We also compared to yeast grown in SD media <cit.> and found a similar offset result. logμ_P^F = m logμ_P^MS + b μ_P^F = b(μ_P^MS)^m §.§ Empirical models for yeast gene expression To generate the empirical model for protein number as a function of message number, we used protein abundance data from Newman et al. <cit.>, re-scaled to estimate protein number (Sec. <ref>) and transcriptome data from Lahtvee et al. <cit.>, re-scaled to estimate message number (Sec. <ref>). §.§.§ The meaning of the error estimates Before providing a detailed error analysis, it is important to place our error estimates in perspective. The error that we will be estimating is the statistical error associated with the finite sample size; however, this is not the dominant source of error. A far more important consideration are systematic problems with our analysis and the underlying experiments. For instance, since we do not have a detailed model for the error of the experiments analyzed, there are multiple distinct analyses (i.e. assumptions about the error model) that could be implemented for the data fitting, each giving slightly different model parameters. These model to model differences still give rise to predictions consistent with our qualitative conclusions; however, they are likely larger than the statistical uncertainty we compute (while assuming a particular model). §.§.§ Empirical model for protein number We initially fit the empirical model for protein number, μ_p = C_0 μ_m^α_0, to the data using a standard least-squares approach; however, the algorithm led to a very poor fit since it does not account for uncertainty in both independent and dependent variables. We therefore used an alternative approach <cit.>, which assumes comparable error in both variables. The model parameters are: α_0 = 2.1± 0.04, C_0 = 8.0 ± 1.0, where the uncertainties are the estimated standard errors. §.§.§ Empirical model for message number For the prediction of the coefficient of variation, it is useful to invert Eq. <ref> to generate a model for message number as a function of protein number: μ_m = C_0^-1/α_0μ_p^1/α0, = C_1 μ_p^α_1, where the last line defines two new parameters: a coefficient C_1 and an exponent α_1. The resulting parameters and uncertainties are: α_1 ≡ 1/α_0, = 0.48± 0.01, C_1 ≡ C_0^-1/α_0, = 0.37± 0.02, where the uncertainties are the estimated standard errors. §.§.§ Empirical model for translation efficiency To generate an empirical model for translation efficiency, we started from the empirical model for protein number (Eq. <ref>), and then use Eq. <ref> to relate protein number, message number, and translation efficiency: ε = μ_p/μ_m, = C_0 μ_m^α_0-1, = C_2 μ_m^α_2, where the last line defines two new parameters: a coefficient C_2 and an exponent α_2. The resulting parameters and uncertainties are: α_2 = α_0-1, = 1.07 ± 0.04, C_2 = C_0, = 8.0± 1.0, where the uncertainties are the estimated standard errors. 
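As a quick numerical check, the derived parameters above follow directly from the fitted values α_0 ≈ 2.1 and C_0 ≈ 8.0; small discrepancies (e.g., 1.1 versus the quoted 1.07 for α_2) reflect rounding of α_0. The short Python sketch below also evaluates the empirical model at the ~10^3-molecule fluorescence detection limit quoted later in this supplement; that example point is an illustration, not a measured gene.

```python
alpha_0, C_0 = 2.1, 8.0          # fitted exponent and coefficient: mu_p = C_0 * mu_m**alpha_0

alpha_1 = 1 / alpha_0            # ~0.48
C_1 = C_0 ** (-1 / alpha_0)      # ~0.37
alpha_2 = alpha_0 - 1            # ~1.1 (quoted as 1.07 before rounding of alpha_0)
C_2 = C_0                        # 8.0

print(f"mu_m = {C_1:.2f} * mu_p^{alpha_1:.2f}")
print(f"eps  = {C_2:.1f} * mu_m^{alpha_2:.2f}")

# Example: a yeast gene at the fluorescence detection limit of ~1e3 proteins
mu_p = 1e3
mu_m = C_1 * mu_p ** alpha_1     # ~10 messages per cell cycle
eps = mu_p / mu_m                # ~100 proteins per message
print(mu_m, eps)
```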
§.§.§ Empirical model for the coefficient of variation To generate an empirical model for the coefficient of variation, we started from the empirical model for message number (Eq. <ref>), and then substitute this into the statistical model prediction for CV_p^2 (Eq. <ref>): CV_p^2 = log 2/μ_m, = C_0^1/α_0log 2 ·μ_p^-1/α_0, = C_3 μ_p^α_3, where the last line defines two new parameters: a coefficient C_3 and an exponent α_3. The resulting parameters and uncertainties are: α_3 ≡ -1/α_0, = -0.48±0.01, C_3 ≡ C^1/α_0_0log 2, = 1.9± 0.1, where the uncertainties are the estimated standard errors. §.§ Supplemental analysis of gene expression noise The quantitative model for gene expression noise includes multiple contributions: CV^2_p ≈1/μ_p + log 2/μ_m + c_0, where the first term can be understood to represent the Poisson noise from translation, the second term the Poisson noise from transcription, and the last term, c_0, is called the noise floor and is believed to be caused by the cell-to-cell variation in metabolites, ribosomes, and polymerases etc <cit.>. §.§.§ Inclusion of noise floor in the yeast analysis In the main text of the paper, we have ignored the role of the noise floor in the analysis of noise in yeast. Unlike E. coli, where the noise floor is high (CV^2_p=0.1) and is determinative of the noise associated with almost all essential genes <cit.>, in yeast the noise floor is much lower (CV^2_p=0.01) and therefore affects only genes with the highest expression. In this section, we will consider models that include the noise floor, since its presence can make the noise scaling more difficult to interpret. To determine if the scaling of the noise is consistent with the canonical assumption that the noise is proportional to μ_p^-1 for low expression, we will consider two competing empirical models for the noise (Fig. <ref>). In the null hypothesis, we will consider a model: η_0(μ_p; b,c ) = b/μ_p+c, and an alternative hypothesis with an extra exponent parameter a: η_1(μ_p; a,b,c ) = b/μ_p^a+c. We will assume that CV^2_p is normally distributed about η with unknown variance σ^2_η. In this context, a maximum likelihood analysis is equivalent to least-squares analysis. Let the sum of the squares be defined: S_I(θ) ≡∑_i [ CV_p,i^2- η_I(μ_p,i; θ )]^2 for model I where θ represents the parameter vector. The maximum likelihood parameters are θ̂ = max_θ S_I(θ), with residual norm: Ŝ_I = S_I(θ̂). To test the null hypothesis, we will use the canonical likelihood ratio test with the test statistic: Λ≡ 2logq_1/q_0, where q_0 and q_1 are the likelihoods of the null and alternative hypotheses, respectively. Wilks' theorem states that Λ has a chi-squared distribution of dimension equal to the difference of the dimension of the alternative and null hypotheses (3-2=1). §.§.§ Hypothesis test I In our first analysis, we will estimate the variance directly. We computed the mean-squared difference for successive CV^2_p values, sorted by mean protein number μ_p. The variance estimator is σ̂^2_η = 1/2<( CV^2_p,i- CV^2_p,i+1)^2 >_i=6.3× 10^-4, where the brackets represent a standard empirical average over gene i for the μ_p-ordered gene CV_p^2 values. The test statistic can now be expressed in terms of the residual norms: Λ = (Ŝ_1-Ŝ_2)/σ̂^2_η, = 3.3× 10^4, which corresponds to a p-value far below machine precision. We can therefore reject the null hypothesis. 
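A sketch of this model comparison in Python is given below. The arrays mu_p and cv2 stand in for the per-gene protein means and squared coefficients of variation from the flow-cytometry data (synthetic placeholders here, not the actual dataset); the two empirical models, the successive-difference variance estimator, and both test statistics (the second is derived in the next subsection) follow the equations above.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

def eta0(mu_p, b, c):             # null model: canonical 1/mu_p scaling plus noise floor
    return b / mu_p + c

def eta1(mu_p, a, b, c):          # alternative model with a free exponent
    return b / mu_p ** a + c

# Placeholder data, sorted by protein abundance (replace with the real measurements)
rng = np.random.default_rng(0)
mu_p = np.sort(10 ** rng.uniform(2, 5, 2000))
cv2 = eta1(mu_p, 0.57, 3.0, 0.013) + 0.02 * rng.standard_normal(mu_p.size)

popt0, _ = curve_fit(eta0, mu_p, cv2, p0=[1.0, 0.01])
popt1, _ = curve_fit(eta1, mu_p, cv2, p0=[1.0, 1.0, 0.01])

S0 = np.sum((cv2 - eta0(mu_p, *popt0)) ** 2)   # residual norm, null model
S1 = np.sum((cv2 - eta1(mu_p, *popt1)) ** 2)   # residual norm, alternative model

# Hypothesis test I: variance estimated from successive (abundance-ordered) differences
var_eta = 0.5 * np.mean(np.diff(cv2) ** 2)
Lambda_I = (S0 - S1) / var_eta

# Hypothesis test II: variance treated as a fitted parameter for each model
N = mu_p.size
Lambda_II = N * np.log(S0 / S1)
p_value = chi2.sf(Lambda_II, df=1)             # Wilks' theorem, one extra parameter

print(Lambda_I, Lambda_II, p_value)
```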
§.§.§ Hypothesis test II In a more conservative approach, we can use maximum likelihood estimation to estimate the variance of each model independently as a model parameter. In this case, the test statistic can again be expressed in terms of the residual norms: Λ = N logŜ_1/Ŝ_2 = 1.6× 10^2, where N is the number of data points. In this case, the p-value can be computed assuming Wilks' theorem (i.e. the chi-squared test): p = 6×10^-36, again strongly rejecting the null hypothesis. §.§.§ Maximum likelihood estimates of the parameters In the alternative hypothesis, the maximum likelihood estimates (MLE) of the empirical noise model (Eq. <ref>) parameters are (Fig. <ref>): a = 0.57 ± 0.02, b = 3.0 ± 0.5, c = 0.013 ± 0.001, where the parameter uncertainty has been estimated using the Fisher information in the usual way, using the MLE estimate of the variance. The noise model parameters were also determined for E. coli: a = 1.22 ± 0.01, b = 1.27 ± 0.02, c = 0.154 ± 0.002, with the corresponding fit shown in Fig. <ref>. Since a is close to 1, the canonical model with a=1 (Eq. <ref>) is a somewhat reasonable approximation for the noise in E. coli. §.§.§ Statistical details MLE estimate of the variance The minus-log-likelihood for the normal model I is: h_I(θ̂,σ^2) = N/2log 2πσ^2 + 1/2σ^2Ŝ_I, where Ŝ_I is the least-squares residual. We then minimize h_I with respect to the variance σ^2: ∂_σ^2 h_I|_σ̂^2 = 0, to solve for the MLE σ̂^2: σ̂^2 = 1/NŜ_I. Next we evaluate h_I at the variance estimator: h_I(θ̂,σ̂^2) = N/2[log 2πŜ_I/N + 1]. The test statistic can be written in terms of the h's: Λ = 2h_1(θ̂,σ̂^2) - 2h_2(θ̂,σ̂^2) = N logŜ_1/Ŝ_2, which can be evaluated directly in terms of the residual norms for the null and alternative hypotheses. §.§.§ Detailed discussion of noise in E. coli In general, the telegraph model predicts that the noise will have a coefficient of variation <cit.>: CV_p^2 ≈1/μ_p + εln 2/μ_p, where the first term is significant unless the translation efficiency satisfies ε≫ 1. In both E. coli (ε≈ 30) and yeast (ε≈ 420), ε≫ 1 would naively seem to be the case. However, since translation efficiency in yeast is not uniform, we must consider its variation for low-expression proteins. We estimate that the detection limit in yeast is roughly 10^3 molecules. Using Eq. <ref>, we estimate that ε≈ 100 and our approximation holds at the low-expression detection limit. In E. coli, the situation is somewhat more complicated. Unlike yeast, the translation efficiency is roughly constant (at high to intermediate expression levels) with respect to expression level <cit.>, and therefore both terms in Eq. <ref> are expected to scale like the canonical model (∝μ_p^-1). However, it is clear that the translation efficiency must significantly decrease for the lowest-abundance proteins. This is visible even in Fig. 1B of Ref. <cit.>, where the data fall below the predicted protein abundance at low message number. Note that these mass-spec measurements are not as sensitive as fluorescence-based measurements (e.g., only 64% of the proteome could be detected <cit.>). Furthermore, fits to the E. coli noise (Eq. <ref>) are consistent only with low values of ε. At sufficiently high expression levels such that we are confident about the translation efficiency, the noise is already very close to the noise floor.
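As a numeric illustration of the coefficient of variation above at the yeast detection limit, the short calculation below uses the approximate round numbers quoted in this section (μ_p ≈ 10^3, ε ≈ 100) rather than gene-specific measurements.

```python
import numpy as np

mu_p, eps = 1e3, 100                           # detection-limit abundance, translation efficiency
mu_m = mu_p / eps                              # ~10 messages per cell cycle

translation_term = 1 / mu_p                    # ~0.001
transcription_term = eps * np.log(2) / mu_p    # ~0.069, identical to ln(2)/mu_m

print(translation_term, transcription_term, np.log(2) / mu_m)
```

The translation term is nearly two orders of magnitude smaller than the transcription term, so the statistical-model approximation CV_p^2 ≈ ln 2/μ_m holds even at the lowest detectable expression levels.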
http://arxiv.org/abs/2307.02714v1
20230706013844
Nature of the Antiferromagnetic Order in GdCu$_2$
[ "Koji Kaneko", "Chihiro Tabata", "Masato Hagihala", "Hiroki Yamauchi", "Masato Kubota", "ToyotakaOsakabe", "Yoshichika Ōnuki" ]
cond-mat.str-el
[ "cond-mat.str-el" ]
Trivalent Gd and divalent Eu ions show unique magnetic properties among rare-earth ions owing to the absence of an orbital moment. Intermetallic compounds with Gd^3+ and Eu^2+ can be understood in terms of spin magnetism mediated by the RKKY interaction. Recently, topological magnetic orders, represented by the magnetic skyrmion lattice, were discovered in Eu- and Gd-based compounds<cit.>. Magnetic skyrmions in f-electron systems have characteristic properties, such as short periodicity and anisotropy, compared with d-electron systems, which can result from a different formation mechanism. The orthorhombic compound GdCu_2 attracts renewed interest in this respect. GdCu_2 undergoes an antiferromagnetic transition around T_N = 40 K<cit.>. The magnetic structure in the ground state was reported to be a helical one characterized by the ordering vector q = (2/3 1 0)<cit.>. Because of the strong neutron absorption of Gd, hot neutrons with a wavelength of ∼0.5 Å were used to minimize attenuation. On the other hand, the short wavelength made it difficult to achieve high resolution in momentum (Q) space. In order to get detailed insights into the magnetic order in GdCu_2, a single-crystal neutron diffraction experiment was performed using thermal neutrons. A single crystal of GdCu_2 was grown by the Czochralski method in the same way as in Ref. Koyanagi1998. A plate-like sample of ∼4×4×1.8 mm^3 with the plane normal to the b-axis was used in the present study. The single-crystal neutron diffraction experiment was carried out on the thermal-neutron triple-axis spectrometer TAS-2, installed at the T2-4 beam port in the guide hall of the research reactor JRR-3 in Tokai. Neutrons with a wavelength of 2.359 Å were obtained and analyzed by a pyrolytic graphite (PG) monochromator and analyzer. A collimation set of open-40'-S-40'-80' was employed with a PG filter before the sample. The sample was attached to the cold finger of a closed-cycle refrigerator so as to have (h k 0) in the horizontal scattering plane. The linear absorption coefficient of GdCu_2 with natural Gd is enormously large; a thickness of roughly 9 μm reduces the neutron flux down to 1/e. In order to avoid strong neutron attenuation, reflections close to the reciprocal (0 k 0) axis were chosen so as to have a reflection geometry for the present sample. Figure <ref> shows a scan along h through (1 3 0) measured at 4.6 K, below T_N. Three peaks were observed, consisting of a nuclear peak centered at (1 3 0) accompanied by satellite peaks on both sides. Note that the nuclear Bragg peak at (1 3 0) is forbidden for the Imma symmetry of GdCu_2. As the central peak serves as a good reference to determine the magnetic satellite peak positions precisely, the PG filter was partly removed and λ/2 neutrons were used to obtain the peak at the (1 3 0) position. In the smaller-h region, a sharp magnetic peak was observed at 0.678(1), whose width corresponds to the expected resolution of ∼0.013 r.l.u. In contrast, a subtle but clear magnetic peak was observed at larger h at 1.323(1), plotted against the right axis, which required a 10 times longer counting time to obtain similar statistics. By using the single parameter δ to fit both peak positions with respect to the nuclear peak, δ was determined to be 0.678(1). This is evidently larger than 2/3, which is indicated by broken lines in the figure. On the other hand, a scan along k confirms that the magnetic peak is centered at k=3, namely the k component of the ordering vector is 1.
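The value of δ quoted above comes from fitting both satellite positions with a single incommensurate component. A minimal Python sketch of such a fit is shown below; the intensity array is synthetic placeholder data (the measured scan is not reproduced here), and the peak widths, amplitudes, and background are free parameters chosen only for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(h, center, amp, width):
    return amp * np.exp(-0.5 * ((h - center) / width) ** 2)

def scan_model(h, delta, a_nuc, a_lo, a_hi, width, bg):
    """Nuclear peak at (1 3 0) plus satellites at h = delta and h = 2 - delta."""
    return (gaussian(h, 1.0, a_nuc, width)
            + gaussian(h, delta, a_lo, width)
            + gaussian(h, 2.0 - delta, a_hi, width)
            + bg)

# Placeholder scan along h through (1 3 0); replace with the measured counts
h = np.linspace(0.6, 1.4, 161)
rng = np.random.default_rng(1)
counts = scan_model(h, 0.678, 900.0, 600.0, 60.0, 0.006, 20.0) + rng.normal(0, 5, h.size)

p0 = [0.67, 800.0, 500.0, 50.0, 0.01, 10.0]
popt, pcov = curve_fit(scan_model, h, counts, p0=p0)
delta, delta_err = popt[0], np.sqrt(pcov[0, 0])
print(f"delta = {delta:.4f} +/- {delta_err:.4f}")
```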
The results indicate that the magnetic order in GdCu_2 is not commensurate with q = (2/3 1 0), but has an incommensurate nature with q_1 = (δ 1 0), δ = 0.678(1). This finding is also supported by its temperature dependence. Figure <ref> shows the scan along h through (δ 3 0) collected at several temperatures below T_N. With increasing temperature, the magnetic peak at the incommensurate position h=0.678 becomes weaker and vanishes at 40.4 K, just above T_N. Concomitantly, the magnetic peak exhibits a gradual shift to smaller h as the temperature increases. This is in contrast to the adjacent nuclear peak (1 3 0), which stays at the same position, as shown in the inset of Fig. <ref>. Figure <ref> summarizes the temperature variation of the peak position and intensity obtained by Gaussian fitting. Concerning the peak position, the peak at 0.678 at 4.6 K decreases monotonically with temperature and reaches 0.671 at 38.2 K. The peak positions in the entire temperature range are larger than 2/3, as indicated by the broken line in the figure. No visible difference exists between cooling and heating. The change of the peak position with temperature also supports the incommensurate nature of the transition in GdCu_2. The magnetic peak intensity shown in the inset develops gradually with decreasing temperature. No visible hysteresis was observed, as for the peak position, consistent with the second-order nature of the magnetic transition in GdCu_2. The present results reveal the incommensurate nature of the magnetic order in GdCu_2. The deviation from the commensurate order is small, 0.01 r.l.u. along a. Within the previous magnetic structure model, a small change in periodicity results in a slight reduction of the pitch angle; the angle between neighboring magnetic moments along a is reduced from 120^∘ to ∼116^∘. A lock-in transition to the commensurate structure could not be seen in the present temperature range down to 4.6 K. In general, neutrons with a short wavelength, 1 Å or less, are used to study highly neutron-absorbing materials, as the absorption cross section follows the so-called 1/v law, where v corresponds to the speed of the neutron. Indeed, the high neutron absorption cross section of Gd, 58500 barns at a typical thermal-neutron wavelength of 2.36 Å, can be suppressed to roughly 1/100 in the hot-neutron region at 0.5 Å. This suppression of neutron absorption enables quantitative structure analysis as in the previous report <cit.>. On the other hand, a short wavelength tends to result in poor resolution in Q space, in particular in the low-Q region. In other words, this setup is not ideal for determining the precise periodicity, as mentioned in Ref. Rotter2000. Whereas an accurate absorption correction and structural analysis could be problematic, a peak position and its variation with an external parameter can be tracked accurately with thermal neutrons. A complementary use of both short- and long-wavelength neutrons thus leads to a detailed picture of the magnetic order. In summary, the present study revealed that the antiferromagnetic order in orthorhombic GdCu_2 can be described with the incommensurate ordering vector q_1 = (δ 1 0) with δ=0.678 at 4.6 K. The gradual change of δ with temperature also supports the incommensurate nature of the transition in GdCu_2. We thank Y. Shimojo and M. Sasaki for their support on the neutron scattering experiments. This work was supported by JSPS KAKENHI Grants Nos. JP20H01864, JP21H01027, JP21H04987, and JP19H04408.
http://arxiv.org/abs/2307.00827v1
20230703081348
Toward a Mapping of Capability and Skill Models using Asset Administration Shells and Ontologies
[ "Luis Miguel Vieira da Silva", "Aljosha Köcher", "Milapji Singh Gill", "Marco Weiss", "Alexander Fay" ]
cs.SE
[ "cs.SE", "cs.CE" ]
Toward a Mapping of Capability and Skill Models using Asset Administration Shells and Ontologies. Luis Miguel Vieira da Silva1, Aljosha Köcher1, Milapji Singh Gill1, Marco Weiss2 and Alexander Fay1. 1Institute of Automation Technology, Helmut Schmidt University, Hamburg, Germany ({miguel.vieira, aljosha.koecher, milapji.gill, alexander.fay}@hsu-hh.de). 2Institute of Maintenance, Repair and Overhaul, German Aerospace Center (DLR), Hamburg, Germany. In order to react efficiently to changes in production, resources and their functions must be integrated into plants in accordance with the plug and produce principle. In this context, research on so-called capabilities and skills has shown promise. However, there are currently two incompatible approaches to modeling capabilities and skills. On the one hand, formal descriptions using ontologies have been developed. On the other hand, there are efforts to standardize submodels of the Asset Administration Shell (AAS) for this purpose. In this paper, we present ongoing research to connect these two incompatible modeling approaches. Both models are analyzed to identify comparable as well as dissimilar model elements. Subsequently, we present a concept for a bidirectional mapping between AAS submodels and a capability and skill ontology. For this purpose, two unidirectional, declarative mappings are applied that implement transformations from one modeling approach to the other and vice versa. Keywords: Capabilities, Skills, Ontologies, AAS, Mapping, RML, RDFex, Transformation, Semantic Web, OWL, RDF. § INTRODUCTION Paradigms such as plug and produce are a promising way to achieve customer-specific production with a multitude of product variants. A central aspect of this approach is to reduce the effort required to integrate new resources and their functions. For plug and produce, a machine-interpretable information model of resources including individual mechatronic modules and their functions is indispensable <cit.>. Current research in this area aims to create models for so-called capabilities and skills. In this context, a capability is defined as an implementation-independent specification of a function, whereas a skill is defined to be the executable implementation of said function <cit.>. To date, different approaches to modeling capabilities and skills have been developed, of which the two most recent are discussed in this paper. On the one hand, there are formal models using ontologies <cit.>. On the other hand, there are approaches using the Asset Administration Shell (AAS) <cit.>. For the latter, the Industrial Digital Twin Association (IDTA) coordinates efforts to standardize AAS submodels. Both of these modeling approaches offer certain advantages and disadvantages, depending on the perspective and requirements derived from a given use case. We aim to develop an automated and bidirectional transformation between capability and skill models implemented using AASs and ontologies.
This transformation will allow equipment described with different model types to be used in an interoperable way. Furthermore, it supports switching between model types in order to make use of the advantages and tools developed for each model type. The contribution of this paper is twofold: First, a comparison of capability and skill models with AASs as well as ontologies is carried out in order to identify similarities and differences. Based on this analysis, an approach for a bidirectional mapping between these capability and skill modeling approaches is introduced. Accordingly, the structure of the paper is as follows: In Section <ref>, related work is presented followed by the aforementioned contribution in Section <ref>. A summary and outlook in Section <ref> concludes this paper. § RELATED WORK A working group of Plattform Industrie 4.0 develops shared definitions and an information model of capabilities and skills. As an output of this working group, an abstract reference model is presented in <cit.>. This model is currently implemented with two different technologies. On the one hand, there are modeling approaches using ontologies <cit.>. The capability and skill ontology presented in <cit.> has been continuously improved and is built on a three-layer ontology architecture. It extends the reference model of <cit.> with manufacturing-specific details through reusable ontology design patterns, which are based on standards. There is also an extension of the model presented in <cit.> for capabilities and skills of autonomous robots <cit.>. Ontology available at <https://github.com/aljoshakoecher/caskman> On the other hand, there are modeling approaches for capabilities and skills using the AAS. In <cit.>, Sidorenko et al. present an OPC UA skill model that is intended to be used by proactive AAS in a semantic interaction protocol. Hereby, decentralized approaches to production control should be enabled by allowing Industry 4.0 components to communicate directly with each other as well as to mutually use skills. In <cit.>, a model-driven engineering approach is presented that uses an AAS model of capabilities in order to achieve interoperable components and a flexible production line. To standardize developments of the AAS in this context, working groups of the IDTA create submodel templates and specifications for capabilities <cit.> and skills <cit.>. Beden et al. conducted a literature survey of different research approaches trying to add explicit semantics to multiple types of data models used in the context of Industrie 4.0. They argue that the AAS lacks a formal specification and that many papers propose fragmented models or focus on specific use cases without reusability in mind. Furthermore Beden et al. state that research approaches trying to add formal semantics (e.g., through mappings) are still in early stages but consider such approaches to be an important challenge. <cit.> In <cit.>, Bader and Maleshkova present an RDF model for semantic AASs and a unidirectional mapping to convert an AAS into RDF representation. In order to infer implicit information and validate the automatically generated RDF models, reasoning axioms and constraints are presented, too. Similar to <cit.>, our approach also makes use of mapping rules to convert the JSON representation of an AAS into an ontology. However, while Bader and Maleshkova generate an ontology strictly following the AAS specifications, we transform AAS information into the existing capability and skill ontology of <cit.>. 
This ontology adheres to the reference model of <cit.>, but additionally benefits from a tool ecosystem with tools for model generation and production execution. As a summary of related publications, it can be said that there are currently two incompatible modeling approaches for capabilities and skills. While there is a unidirectional transformation from AAS to RDF in general, there is no detailed transformation for capabilities and skills until now. Furthermore, a bidirectional transformation does not yet exist. § CONTRIBUTION This section contains a comparison of the AAS and ontology modeling approaches of capabilities and skills. The comparison is based on the reference model published in <cit.>. We contrast the AAS submodels Capability <cit.> and Control Component (type and instance) <cit.> with the CaSkMan ontology initially presented in <cit.>. Based on this comparison, a bidirectional mapping approach is presented. §.§ Comparison and Analysis The model element considered first is capability. In the ontology, capabilities are represented as individuals of the class Capability, which has subclasses for required and provided capabilities. Provided capabilities are related to the providing resource via an object property provides. To an AAS of a resource, offered capabilities can be assigned with a specific submodel of type Capability. Individual capabilities can be added via a SubmodelElement (SME) Cap. Each Cap instance must reside in a SubmodelCollection (SMC) of type CapabilityContainer that contains other capability-related information, such as properties or relationships to other elements. All capability containers of an asset are stored inside an SMC CapabilitySet. The capability submodel does not explicitly distinguish between required and provided capabilities. In the ontology, properties of a capability are modeled as individuals of the class Property and linked to a capability with the object property isSpecifiedBy. To formally express statements about properties (e.g., requirements), a data element according to IEC 61360 is used. In an AAS, a SMC PropertySet is located inside a capability container. Similar to the way capabilities are modeled, there is an SMC PropertyContainer with a nested SME Prop for each property. A capability can be restricted by constraints. In the ontology, a constraint is defined as an individual of the class CapabilityConstraint and additional mathematical expressions according to OpenMath. Every constraint is linked to its capability and the properties to which it refers via isRestrictedBy and references, respectively. In the AAS submodel Capability, there is an SMC ConditionContainer, in which capability constraints can be specified via relations. However, it is yet to be defined how constraints are formulated in detail. According to <cit.>, capabilities are linked to processes. To model the process linked to a capability in detail, the ontology contains process types according to DIN 8580 and VDI 2860. Furthermore, inputs and outputs of a process can be modeled according to VDI 3682. The AAS submodel Capability does currently not allow to model the process linked to a capability in more detail. However, process types can be represented with a semanticId of a capability using a respective ECLASS classification. Nevertheless, it is possible to compose or decompose capabilities in both approaches. In order to model skills, the ontology defines a class Skill. Individuals of this class are linked to a capability via isRealizedBySkill. 
In the AAS, the submodels Control Component Type and Control Component Instance are used to specify skills. Skills are collected in the SMC Skills and a single skill is modeled via the SMC Skill. In the ontology, parameters of a skill are modeled as individuals of the class SkillParameter and assigned to a skill via hasSkillVariable. In a Control Component, parameters are represented via an SMC Parameter, which resides in an SMC Parameters, which in turn resides in an SMC Skill. Both approaches allow type and values of parameters to be modeled. A skill parameter may realize a capability property. Therefore, the corresponding elements are linked using isRealizedBySkillParameter in the ontology. In the AAS submodel Capability, a property is linked to a related skill parameter via a Relationship element realizedBy that is located in the SMC PropertyRelationShips of the SMC PropertySet. A skill is implemented with a state machine that controls the interaction with a skill (e.g., start or stop). The ontology models a state machine as an individual of the class StateMachine and an ontology according to ISA 88, which is used to explicitly depict all states and transitions. A skill is then linked to its state machine via behaviorConformsTo. In the AAS, a state machine is not explicitly modeled. Instead, different execution modes of a skill are modeled with an SMC Modes. In addition, there is a Property element Disabled, which indicates whether a skill is available. Each skill must have a skill interface that allows to interact with the skill by triggering transitions of the state machine or setting parameters <cit.>. In the ontology, an interface is represented as an individual of the class SkillInterface, which is attached to a skill with the object property accessibleThrough. Specific subclasses for different types of interfaces exist. A skill's interface exposes its state machine, transitions and parameters. When using an AAS, possible interfaces are specified in the SMC Interfaces by individual ReferenceElements Interface of the Control Component Type submodel. In the submodel Control Component Instance, individual endpoints are defined in the SMC Endpoints with a ReferenceElement Endpoint. The SMC Interfaces in its current form lacks features that are needed to model skill interfaces with technologies such as OPC UA or HTTP. For example, it is unclear how endpoints are related to the individual skills and how operations and parameters are modeled. In conclusion, both model approaches enable modeling of capabilities and skills, albeit on different levels of detail. The ontological approach is currently more expressive and models all elements of the CSS model on a detail that allows production planning and execution. On the other hand, the submodels of the AAS are not yet specified on this level of detail. Table <ref> provides an overview of the comparison of the two model approaches. §.§ Mapping Concept After this comparison, we now describe a bidirectional mapping approach that allows to transform between a model represented with the aforementioned AAS submodels M_AAS and a model represented with the CaSkMan ontology M_Onto. For this purpose, the transformation from one AAS model element to the capability and skill ontology is first defined as f_A->O: m_AAS↦ m_Onto ∀ m_AAS∈ M_AAS . This transformation is implemented in a declarative way using the RDF Mapping Language (RML). 
RML, first introduced in <cit.>, is a generic mapping language that can be utilized to express user-defined rules to transform information from heterogeneous data sources, e.g., XML or JSON into RDF. [ caption=RML mapping rule to transform a capability from an AAS submodel to a capability individual in the ontology, label=lst:rml-mapping] listings/rml-example.txt Listing <ref> contains an example for an individual mapping rule that is used to map a capability modeled in an AAS into the ontology. With , the source file is specified and the elements to iterate over are expressed using JSONPath syntax. The example of Listing <ref> iterates over all capability submodel elements in the given submodel. Please note that the actual iterator expression is too long to fit into this example, but all mappings are available on GitHub[https://github.com/hsu-aut/CSS-AAS-OWL]. The is applied for every iterator element, effectively creating an individual of the class with an IRI that makes use of the current capability's idShort. Besides one , every mapping rule may have multiple elements to create relations of an individual. For the example of Listing <ref>, a could be used to transform a capability's AAS comment into an of the corresponding individual in the ontology. The opposite transformation, i.e., from an ontological capability and skill representation into an AAS is defined as f_O->A = f_A->O^-1 : m_Onto↦ m_AAS ∀ m_Onto∈ M_Onto . Since RML is a unidirectional mapping language, it cannot be used to implement f_O->A. Instead, we make use of RDFex, which was created as a counterpart to RML <cit.>. RDFex has a syntax inspired by RML but allows writing mapping rules that target elements of an ontology and transform them into JSON or XML according to a user-specified schema. [ caption=RDFex mapping rule to transform a capability individual from an ontology to a capability element in an AAS, label=lst:rdfex-mapping] listings/rdfex-example.txt The example of Listing <ref> is the inverse transformation of Listing <ref>. In this example, an ontology contained in a file represents the mapping's source. A SPARQL query is used to select elements in the source ontology that are transformed with the current mapping rule. In this example, all capability individuals are selected and their local names are stored in the variable . As our implementation of f_O->A makes use of an AAS's JSON serialization, the is set to JSON. In RDFex, a defines the parent element into which a mapping's output will be inserted. For this capability mapping, a search path in JSONPath syntax is defined that looks for the CapabilitySet element in a given AAS. And finally, the defines the shape of the actual output, taking into account variables of the SPARQL query. In this example, a complete SMC and capability SME is defined with the variable as the of the capability SME. A snippet is inserted into the container for each variable binding used. § SUMMARY AND OUTLOOK In this paper, a concept for a bidirectional mapping between capability and skill models in AASs and an ontology has been outlined. For this purpose, an ontology modeling approach was compared to AAS submodels before rule-based mappings were introduced using RML and RDFex. With respect to future work, the following points are essential: As the analysis has shown, a complete mapping is not yet possible due to missing model elements in the existing AAS submodels. 
Therefore, the existing submodels need to be supplemented with more detailed concepts and relations in order to be fully usable for the capability and skill application. We plan on extending our mapping rules and integrating the two types of rules into one application. This application will be embedded into our skill-based control platform so that both AAS and ontological capability and skill models can be used in manufacturing processes. Additionally, the presented approach will also be evaluated in the experimental maintenance, repair and overhaul setting for aircrafts presented in <cit.>. § ACKNOWLEDGMENT This research in the projects ProMoDi and RIVA is funded by dtec.bw – Digitalization and Technology Research Center of the Bundeswehr. dtec.bw is funded by the European Union – NextGenerationEU ./bibliography/IEEEtran
http://arxiv.org/abs/2307.00684v1
20230702233412
A Proximal Algorithm for Network Slimming
[ "Kevin Bui", "Fanghui Xue", "Fredrick Park", "Yingyong Qi", "Jack Xin" ]
cs.CV
[ "cs.CV" ]
A Proximal Algorithm for Network Slimming. Kevin Bui1, Fanghui Xue1, Fredrick Park2, Yingyong Qi1, Jack Xin1. 1University of California, Irvine, CA 92697, USA ({kevinb3, fanghuix, yqi, jack.xin}@uci.edu). 2Whittier College, Whittier, CA 90602, USA (fpark1@whittier.edu). August 1, 2023. The work was partially supported by NSF grants DMS-1854434, DMS-1952644, DMS-2151235, and a Qualcomm Faculty Award. As a popular channel pruning method for convolutional neural networks (CNNs), network slimming (NS) has a three-step process: (1) it trains a CNN with ℓ_1 regularization applied to the scaling factors of the batch normalization layers; (2) it removes channels whose scaling factors are below a chosen threshold; and (3) it retrains the pruned model to recover the original accuracy. This time-consuming, three-step process is a result of using subgradient descent to train CNNs. Because subgradient descent does not exactly train CNNs towards sparse, accurate structures, the latter two steps are necessary. Moreover, subgradient descent does not have any convergence guarantee. Therefore, we develop an alternative algorithm called proximal NS. Our proposed algorithm trains CNNs towards sparse, accurate structures, so identifying a scaling factor threshold is unnecessary and fine tuning the pruned CNNs is optional. Using Kurdyka-Łojasiewicz assumptions, we establish global convergence of proximal NS. Lastly, we validate the efficacy of the proposed algorithm on VGGNet, DenseNet and ResNet on CIFAR 10/100. Our experiments demonstrate that after one round of training, proximal NS yields a CNN with competitive accuracy and compression. § INTRODUCTION In the past decade, convolutional neural networks (CNNs) have revolutionized computer vision in various applications, such as image classification <cit.> and object detection <cit.>. CNNs are able to internally generate diverse features through their multiple hidden layers, totaling millions of weight parameters to train and billions of floating point operations (FLOPs) to execute. Consequently, highly accurate CNNs are impractical to store and implement on resource-constrained devices, such as mobile smartphones. To compress CNNs into lightweight models, several directions, including weight pruning <cit.>, have been investigated. Channel pruning <cit.> is currently a popular direction because it can significantly reduce the number of weights needed in a CNN by removing any redundant channels.
First, a threshold value needs to be determined in order to remove channels whose scaling factors are below it. Second, pruning channels with nonzero scaling factors can deteriorate the CNNs' accuracy since these channels are still relevant to the CNN computation. As a result, the pruned CNN needs to be retrained to recover its original accuracy. Therefore, as a suboptimal algorithm, subgradient descent leads to a time-consuming, three-step process. In this paper, we design an alternative optimization algorithm based on proximal alternating linearized minimization (PALM) <cit.> for NS. The algorithm has more theoretical and practical advantages than subgradient descent. Under certain conditions, the proposed algorithm does converge to a critical point. When used in practice, the proposed algorithm enforces the scaling factors of insignificant channels to be exactly zero by the end of training. Hence, there is no need to set a scaling factor threshold to identify which channels to remove. Because the proposed algorithm trains a model towards a truly sparse structure, the model accuracy is preserved after the insignificant channels are pruned, so fine tuning is unnecessary. The only trade-off of the proposed algorithm is a slight decrease in accuracy compared to the original baseline model. Overall, the new algorithm reduces the original three-step process of NS to only one round of training with fine tuning as an optional step, thereby saving the time and hassle of obtaining a compressed, accurate CNN. § RELATED WORKS Early pruning methods focus on removing redundant weight parameters in CNNs. Han <cit.> proposed to remove weights if their magnitudes are below a certain threshold. Aghasi <cit.> formulated a convex optimization problem to determine which weight parameters to retain while preserving model accuracy. Creating irregular sparsity patterns, weight pruning is not implementation friendly since it requires special software and hardware to accelerate inference <cit.>. An alternative to weight pruning is pruning group-wise structures in CNNs. Many works <cit.> have imposed group regularization onto various CNN structures, such as filters and channels. Li <cit.> incorporated a sparsity-inducing matrix corresponding to each feature map and imposed row-wise and column-wise group regularization onto this matrix to determine which filters to remove. Lin <cit.> pruned filters that generate low-rank feature maps. Hu <cit.> devised network trimming that iteratively removes zero-activation neurons from the CNN and retrains the compressed CNN. Rather than regularizing the weight parameters, Liu <cit.> developed NS, where they applied ℓ_1 regularization on the scaling factors in the batch normalization layers in a CNN to determine which of their corresponding channels are redundant to remove and then they retrained the pruned CNN to restore its accuracy. Bui <cit.> investigated nonconvex regularizers as alternatives to the ℓ_1 regularizer for NS. On the other hand, Zhao <cit.> applied probabilistic learning onto the scaling factors to identify which redundant channels to prune with minimal accuracy loss, making retraining unnecessary. Lin <cit.> introduced an external soft mask as a set of parameters corresponding to the CNN structures (e.g., filters and channels) and regularized the mask by adversarial learning. 
§ PROPOSED ALGORITHM In this section, we develop a novel PALM algorithm <cit.> for NS that consists of two straightforward, general steps per epoch: stochastic gradient descent on the weight parameters, including the scaling factors of the batch normalization layers, and soft thresholding on the scaling factors. §.§ Batch Normalization Layer Most modern CNNs have batch normalization (BN) layers <cit.> because these layers speed up their convergence and improve their generalization <cit.>. These benefits are due to normalizing the output feature maps of the preceding convolutional layers using mini-batch statistics. Let z ∈ℝ^B × C × H × W denote an output feature map, where B is the mini-batch size, C is the number of channels, and H and W are the height and width of the feature map, respectively. For each channel i=1, …, C, the output of a BN layer on each channel z_i is given by z_i' = γ_i z_i-μ_B/√(σ^2_B + ϵ) + β_i, where μ_B and σ_B are the mean and standard deviation of the inputs across the mini-batch B, ϵ is a small constant for numerical stability, and γ_i and β_i are trainable weight parameters that help restore the representative power of the input z_i. The weight parameter γ_i is defined to be the scaling factor of channel i. The scaling factor γ_i determines how important channel i is to the CNN computation as it is multiplied to all pixels of the same channel i within the feature map z. §.§ Numerical Optimization Let {(x_i,y_i)}_i=1^N be a given dataset, where each x_i is a training input and y_i is its corresponding label or value. Using the dataset {(x_i,y_i)}_i=1^N, we train a CNN with c total channels, where each of their convolutional layers is followed by a BN layer. Let γ∈ℝ^c be the vector of trainable scaling factors of the CNN, where for i=1,…, c, each entry γ_i is a scaling factor of channel i. Moreover, let W ∈ℝ^n be a vector of all n trainable weight parameters, excluding the scaling factors, in the CNN. NS <cit.> minimizes the following objective function: min_W, γ1/N∑_i=1^N ℒ(h(x_i,W, γ), y_i) + λγ_1, where h(x_i, W, γ) is the output of the CNN predicted on the data point x_i; ℒ(h(x_i,W, γ), y_i) is the loss function between the prediction h(x_i, W, γ) and ground truth y_i, such as the cross-entropy loss function; and λ > 0 is the regularization parameter for the ℓ_1 penalty on the scaling factor vector γ. In <cit.>, (<ref>) is solved by a gradient descent scheme with step size δ^t for each epoch t: W^t+1 = W^t - δ^t ∇_W ℒ̃(W^t, γ^t), γ^t+1 = γ^t - δ^t ( ∇_γℒ̃(W^t, γ^t) + λ∂γ^t_1 ), where ℒ̃(W, γ) 1/N∑_i=1^N ℒ(h(x_i,W, γ), y_i) and ∂·_1 is the subgradient of the ℓ_1 norm. By (<ref>), we observe that γ is optimized by subgradient descent, which can lead to practical issues. When γ_i = 0 for some channel i, the subgradient needs to be chosen precisely. Not all subgradient vectors at a non-differentiable point decrease the value of (<ref>) in each epoch <cit.>, so we need to find one that does among the infinite number of choices. In the numerical implementation of NS [<https://github.com/Eric-mingjie/network-slimming>], the subgradient ζ^t is selected such that ζ_i^t = 0 by default when γ_i^t = 0, but such selection is not verified to decrease the value of (<ref>) in each epoch t. Lastly, subgradient descent only pushes the scaling factors of irrelevant channels to be near zero in value but not exactly zero. 
Because subgradient descent leaves the scaling factors of irrelevant channels only near zero, when pruning a CNN the user needs to choose an appropriate scaling factor threshold: it should be large enough to remove channels, but not so large that some layer is left with no channels at all. The pruned CNN is then fine-tuned to restore its original accuracy. However, if so many channels are pruned that the fine-tuned accuracy is significantly less than the original, the user may waste time and resources iterating between decreasing the threshold and fine-tuning until the CNN attains acceptable accuracy and compression. To develop an alternative algorithm that does not possess the practical issues of subgradient descent, we reformulate (<ref>) as a constrained optimization problem by introducing an auxiliary variable ξ, giving us min_W, γ, ξ ℒ̃(W, γ) + λξ_1 s.t. ξ = γ. We then relax the constraint by a quadratic penalty with parameter β > 0, leading to a new unconstrained optimization problem: min_W, γ, ξ ℒ̃(W, γ) + λξ_1 + β/2γ - ξ_2^2. In (<ref>), the scaling factor vector γ is optimized for both model accuracy and sparsity, which can be difficult to balance when training a CNN. In (<ref>), by contrast, γ is optimized only for model accuracy because it is a variable of the overall loss function ℒ̃(W, γ), while ξ is optimized only for sparsity because it is penalized by the ℓ_1 norm. The quadratic penalty forces γ and ξ to be close in value, thereby ensuring that γ is sparse as well. Let (W, γ) be the concatenated vector of W and γ. We minimize (<ref>) via alternating minimization, so for each epoch t, we solve the following subproblems: (W^t+1, γ^t+1) ∈_W, γℒ̃(W, γ) + β/2γ - ξ^t_2^2 ξ^t+1 ∈_ξλξ_1 + β/2γ^t+1 - ξ_2^2. Below, we describe how to solve each subproblem in detail. §.§.§ (W,γ)-subproblem The (W, γ)-subproblem given in (<ref>) cannot be solved in closed form because the loss function ℒ̃(W, γ) is a composition of several nonlinear functions. Typically, when training a CNN, this subproblem would be solved by (stochastic) gradient descent. To formulate (<ref>) as a gradient descent step, we follow a prox-linear strategy as follows: (W^t+1, γ^t+1) ∈_W, γℒ̃(W^t, γ^t)+ ⟨∇_W ℒ̃(W^t, γ^t), W - W^t⟩ + ⟨∇_γℒ̃(W^t, γ^t), γ - γ^t⟩ +α/2W - W^t_2^2+α/2γ - γ^t_2^2+ β/2γ - ξ^t_2^2, where α > 0. By differentiating with respect to each variable, setting the partial derivative equal to zero, and solving for the variable, we obtain W^t+1 = W^t - 1/α∇_W ℒ̃(W^t, γ^t) γ^t+1 = αγ^t +βξ^t/α+β - 1/α+β∇_γℒ̃(W^t, γ^t). We see that (<ref>) is gradient descent on W^t with step size 1/α, while (<ref>) is gradient descent on a weighted average of γ^t and ξ^t with step size 1/(α+β). These steps are straightforward to implement in practice when training a CNN because the gradient (∇_W ℒ̃(W^t, γ^t), ∇_γℒ̃(W^t, γ^t)) can be approximated by backpropagation. §.§.§ ξ-subproblem To solve (<ref>), we perform a proximal update by minimizing the following subproblem: ξ^t+1 ∈_ξλξ_1 + α/2ξ - ξ^t_2^2 + β/2γ^t+1 - ξ_2^2. Completing the square gives ξ^t+1 =_ξξ_1 + α+β/2λξ - αξ^t + βγ^t+1/α + β_2^2 =𝒮(αξ^t + βγ^t+1/α + β, λ/(α+β)), where 𝒮(x, λ) is the soft-thresholding operator defined by (𝒮(x, λ))_i = sign(x_i) max{0, |x_i| - λ} for each entry i. Therefore, ξ is updated by soft thresholding the weighted average of ξ^t and γ^t+1 with threshold λ/(α+β). We summarize the new algorithm for NS in Algorithm <ref> as proximal NS. § CONVERGENCE ANALYSIS To establish global convergence of proximal NS, we present relevant definitions and assumptions.
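Before turning to the analysis, the two steps per epoch summarized above (SGD on (W, γ), then soft thresholding on ξ) can be sketched in PyTorch-style pseudocode. This is only an illustrative rendering of the W-, γ-, and ξ-updates derived in this section, not the reference implementation: applying the γ-update per mini-batch, omitting momentum and weight decay, and the helper names are our own simplifications.

```python
import torch

def soft_threshold(x, thresh):
    # Entrywise soft-thresholding: S(x, thresh)_i = sign(x_i) * max(|x_i| - thresh, 0).
    return torch.sign(x) * torch.clamp(x.abs() - thresh, min=0.0)

def proximal_ns_epoch(model, gammas, xis, loader, loss_fn, alpha, beta, lam):
    """One epoch of proximal NS (sketch). gammas: list of BN scaling-factor tensors;
    xis: matching auxiliary tensors of the same shapes."""
    gamma_ids = {id(g) for g in gammas}
    for inputs, targets in loader:
        loss = loss_fn(model(inputs), targets)
        model.zero_grad()
        loss.backward()
        with torch.no_grad():
            # W-update: plain gradient step with step size 1/alpha.
            for p in model.parameters():
                if p.grad is not None and id(p) not in gamma_ids:
                    p -= p.grad / alpha
            # gamma-update: gradient step taken from the weighted average of
            # gamma and xi, with step size 1/(alpha + beta).
            for g, xi in zip(gammas, xis):
                g.copy_((alpha * g + beta * xi) / (alpha + beta) - g.grad / (alpha + beta))
    # xi-update (once per epoch): soft-threshold the weighted average of xi and
    # the updated gamma, with threshold lam / (alpha + beta).
    with torch.no_grad():
        for g, xi in zip(gammas, xis):
            xi.copy_(soft_threshold((alpha * xi + beta * g) / (alpha + beta),
                                    lam / (alpha + beta)))
```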
A proper, lower-semicontinuous function f: ℝ^m → (-∞, ∞] satisfies the Kurdyka-Łojasiewicz (KL) property at a point x̅∈dom(∂ f) ≔{x ∈ℝ^m: ∂ f(x) ≠∅} if there exist η∈ (0, +∞], a neighborhood U of x̅, and a continuous concave function ϕ:[0, η) → [0, ∞) with the following properties: (i) ϕ(0) = 0; (ii) ϕ is continuously differentiable on (0, η); (iii) ϕ'(x) > 0 for all x ∈ (0, η); and (iv) for any x ∈ U with f(x̅) < f(x) < f(x̅)+η, it holds that ϕ'(f(x) - f(x̅))dist(0, ∂ f (x)) ≥ 1. If f satisfies the KL property at every point x ∈dom(∂ f), then f is called a KL function. Suppose that * (a) ℒ̃(W, γ) is a proper, differentiable, and nonnegative function. * (b) ∇ℒ̃(W, γ) is Lipschitz continuous with constant L. * (c) ℒ̃(W, γ) is a KL function. Assumptions <ref>(a)-(b) are common in nonconvex analysis (e.g., <cit.>). For Assumption <ref>(c), most commonly used loss functions for CNNs are verified to be KL functions <cit.>. Some CNN architectures do not satisfy Assumption <ref>(a) when they contain nonsmooth functions and operations, such as the ReLU activation function and max pooling. However, these functions and operations can be replaced with their smooth approximations. For example, the smooth approximation of ReLU is the softplus function 1/clog(1+exp(c x)) for some parameter c>0, while the smooth approximation of the max function used in max pooling is the softmax function ∑_i=1^n x_i e^c x_i/∑_i=1^n e^cx_i for some parameter c >0. In addition, Fu <cit.> made a similar assumption to establish convergence for their algorithm designed for weight and filter pruning. Regardless, our numerical experiments demonstrate that our proposed algorithm still converges for CNNs containing ReLU activation functions and max pooling. For brevity, we denote F(W, γ, ξ) ≔ℒ̃(W, γ) + λξ_1 + β/2γ -ξ_2^2. Now, we are ready to present the main theorem: Under Assumption <ref>, if {(W^t, γ^t, ξ^t)}_t=1^∞ generated by Algorithm <ref> is bounded and α > L, then {(W^t, γ^t, ξ^t)}_t=1^∞ converges to a critical point (W^*, γ^*, ξ^*) of F. The proof is deferred to the appendix. It requires establishing the sufficient decrease property of F and the relative error property of ∂ F <cit.>. § NUMERICAL EXPERIMENTS We evaluate proximal NS on VGG-19 <cit.>, DenseNet-40 <cit.>, and ResNet-110/164 <cit.> trained on CIFAR 10/100 <cit.>. The CIFAR 10/100 dataset <cit.> consists of 60,000 natural images of resolution 32 × 32 with 10/100 categories. The dataset is split into two sets: 50,000 training images and 10,000 test images. As done in recent works <cit.>, standard augmentation techniques (e.g., shifting, mirroring, and normalization) are applied to the images before training and testing. The code for proximal NS is available at <https://github.com/kbui1993/Official-Proximal-Network-Slimming>. §.§ Implementation Details For CIFAR 10/100, the implementation is mostly the same as in <cit.>. Specifically, we train the networks from scratch for 160 epochs using stochastic gradient descent with initial learning rate 0.1, reduced by a factor of 10 at the 80th and 120th epochs. Moreover, the models are trained with weight decay 10^-4 and Nesterov momentum of 0.9 without damping. The training batch size is 64. However, the parameter λ is set differently. In our numerical experiments, using Algorithm <ref>, we set ξ∼Unif[0.47,0.50] for all networks, while λ = 0.0045 and β = 100 for VGG-19, λ = 0.004 and β = 100 for DenseNet-40, and λ = 0.002 and β = 1.0, 0.25 for ResNet-110 and ResNet-164, respectively.
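For reference, these per-network settings can be collected in one place as follows (an illustrative snippet; the dictionary keys and helper name are ours, and we read the Unif[0.47, 0.50] specification as the initialization of ξ):

```python
import random

# Regularization settings reported above for CIFAR 10/100.
PROXIMAL_NS_SETTINGS = {
    "vgg19":      {"lam": 0.0045, "beta": 100.0},
    "densenet40": {"lam": 0.004,  "beta": 100.0},
    "resnet110":  {"lam": 0.002,  "beta": 1.0},
    "resnet164":  {"lam": 0.002,  "beta": 0.25},
}

def init_xi(num_channels):
    # xi ~ Unif[0.47, 0.50], one entry per scaling factor.
    return [random.uniform(0.47, 0.50) for _ in range(num_channels)]
```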
We initially set α = 10, the reciprocal of the learning rate, and it changes according to the learning rate schedule. A model is trained five times on an NVIDIA GeForce RTX 2080 for each network and dataset to obtain the average statistics. §.§ Results We apply proximal NS to train VGG-19, DenseNet-40, and ResNet-164 on CIFAR 10/100. According to Table <ref>, proximal NS drives a significant number of scaling factors to be exactly zero for each trained CNN. In particular, for VGG-19 and DenseNet-40, at least 55% of the scaling factors are zero, while for ResNet-164, at least 58% are zero. We can safely remove the channels with zero scaling factors because they are unnecessary for inference. Unlike the original NS <cit.>, proximal NS does not require us to select a scaling factor threshold based on how many channels to remove and how much accuracy to sacrifice. We compare proximal NS with the original NS <cit.> and variational CNN pruning (VCP) <cit.>, a Bayesian version of NS. To evaluate the effect of regularization and pruning on accuracy, we include the baseline accuracy, where the architecture is trained without any regularization on the scaling factors. For completeness, the models trained with original NS and proximal NS are fine-tuned with the same settings as the first round of training but without ℓ_1 regularization on the scaling factors. The results are reported in Tables <ref>-<ref>. After the first round of training, proximal NS outperforms both the original NS and VCP in test accuracy while reducing a significant number of parameters and FLOPs. Because proximal NS trains a model towards a sparse structure, the model accuracy is at most 1.56% below the baseline accuracy, and it remains the same before and after pruning, a property that the original NS does not have. Although VCP is designed to preserve test accuracy after pruning, it does not compress as well as proximal NS for any of the architectures. With about the same proportion of channels pruned as the original NS, proximal NS saves more FLOPs for both VGG-19 and ResNet-164 and generally more parameters for all networks. To potentially improve test accuracy, the pruned models from the original and proximal NS are fine-tuned. For proximal NS, the test accuracy of the pruned models improves slightly, by at most 0.42%, for DenseNet-40 and ResNet-164, while it worsens for VGG-19. Moreover, proximal NS is outperformed by the original NS in fine-tuned test accuracy for all models trained on CIFAR 100. A more accurate model from the original NS might therefore be preferable. However, the additional fine-tuning step requires a few more training hours to obtain an accuracy that is up to 1.5% higher than the accuracy of a pruned model trained once by proximal NS. For example, for ResNet-164 trained on CIFAR 100, proximal NS takes about 7 hours to attain an average accuracy of 75.26%, while the original NS requires about 12 hours to achieve 1.42% higher accuracy. Therefore, the amount of time and resources spent training for an incremental improvement may not be worthwhile. Finally, we compare proximal NS with other pruning methods applied to DenseNet-40 and ResNet-110 trained on CIFAR 10. The other pruning methods, which may require fine tuning, are L1 <cit.>, GAL <cit.>, and Hrank <cit.>. For DenseNet-40, proximal NS prunes the most parameters and the second most FLOPs while having accuracy comparable to the fine-tuned Hrank and the post-pruned GAL-0.05.
For ResNet-110, proximal NS has better compression than L1, GAL-0.5, and Hrank with its post-pruned accuracy better than GAL-0.5's fine-tuned accuracy and similar to L1's fine-tuned accuracy. Although GAL or Hrank might be advantageous to use to obtain a sparse, accurate CNN, they have additional requirements besides fine tuning. GAL <cit.> requires an accurate baseline model available for knowledge distillation. For Hrank <cit.>, the compression ratio needs to be specified for each convolutional layer, thereby making hyperparameter tuning more complicated. Overall, proximal NS is a straightforward algorithm that yields a generally more compressed and accurate model than the other methods in one training round. Although its test accuracy after one round is slightly lower than the baseline accuracy, it is expected because of the sparsity–accuracy trade-off and being a prune-while-training algorithm (which automatically identifies the insignificant channels during training) as discussed in <cit.>. Lastly, the experiments show that fine tuning the compressed models trained by proximal NS marginally improves the test accuracy, which makes fine tuning wasteful. § CONCLUSION We develop a channel pruning algorithm called proximal NS with global convergence guarantee. It trains a CNN towards a sparse, accurate structure, making fine tuning optional. In our experiments, proximal NS can effectively compress CNNs with accuracy slightly less than the baseline. Because fine tuning CNNs trained by proximal NS marginally improves test accuracy, we will investigate modifying the algorithm to attain significantly better fine-tuned accuracy. For future direction, we shall study proximal cooperative neural architecture search <cit.> and include nonconvex, sparse regularizers, such as ℓ_1 - ℓ_2 <cit.> and transformed ℓ_1 <cit.>. § APPENDIX First, we introduce important definitions and lemmas from variational analysis. Let f:ℝ^n→ (-∞, +∞] be a proper and lower semicontinuouous function. * The Fréchet subdifferential of f at the point x ∈dom f {x ∈ℝ^n: f(x) < ∞} is the set ∂̂f(x) = { v ∈ℝ^n^2: lim inf_y ≠ x, y → xf(y)-f(x) - ⟨ v, y-x ⟩/y-x≥ 0 }. * The limiting subdifferential of f at the point x ∈dom f is the set ∂ f(x) = { v ∈ℝ^n^2: ∃{(x^t,y^t)}_t=1^∞ s.t. x^t → x, f(x^t) → f(x), ∂̂f(x^t) ∋ y^t → y }. A function f(x) is called strongly convex with parameter μ if and only if one of the following conditions holds: * g(x) = f(x) - μ/2x_2^2 is convex. * f(y) ≥ f(x) + ⟨∇ f(x), y-x ⟩ + μ/2y-x_2^2, ∀ x,y. If ∇ f(x) is Lipschitz continuous with parameter L >0, then f(y) ≤ f(x) + ⟨∇ f(x), y- x ⟩ + L/2x-y_2^2, ∀ x,y. For brevity, denote W̃ (W, γ), the overall set of weights in a CNN, and Z (W̃, ξ) = (W, γ, ξ). Before proving Theorem <ref>, we prove some necessary lemmas. Let {Z^t}_t=1^∞ be a sequence generated by Algorithm <ref>. Under Assumption <ref>, we have F(Z^t+1) - F(Z^t) ≤L- α/2Z^t+1 - Z^t_2^2. for all t ∈ℕ. In addition, when α >L, we have ∑_t=1^∞Z^t+1 - Z^t_2^2 < ∞. First we define the function L_t(W̃) = ℒ̃(W̃^t) + ⟨∇ℒ̃(W̃^t), W̃ - W̃^t ⟩+ α/2W̃- W̃^t _2^2 + β/2γ - ξ^t _2^2. We observe that L_t is strongly convex with respect to W̃ with parameter α. Because ∇ L_t(W̃^t+1) = 0 by (<ref>), we use Lemma <ref> to obtain L_t(W̃^t) ≥ L_t(W̃^t+1) + ⟨∇ L_t(W̃^t+1), W̃^t - W̃^t+1⟩ + α/2W̃^t+1 - W̃^t _2^2 ≥ L_t(W̃^t+1) + α/2W̃^t+1 -W̃^t_2^2, which simplifies to ℒ̃(W̃^t) + β/2γ^t - ξ^t_2^2 - αW̃^t+1 - W̃^t_2^2 ≥ ℒ̃(W̃^t) + ⟨∇ℒ̃(W̃^t), W̃^t+1 - W̃^t ⟩ + β/2γ^t+1 - ξ^t_2^2. 
Since ∇ℒ̃(W̃) is Lipschitz continuous with constant L, we have ℒ̃(W̃^t+1) ≤ℒ̃(W̃^t) + ⟨∇ℒ(W̃^t+1), W̃^t+1 - W̃^t⟩ + L/2W̃^t+1 - W̃^t_2^2 by Lemma <ref>. Combining the previous two inequalities gives us ℒ̃(W̃^t) + β/2γ^t - ξ^t_2^2 + L - 2α/2W̃^t+1 - W̃^t_2^2 ≥ℒ̃(W̃^t+1) + β/2γ^t+1 - ξ^t_2^2. Adding the term λξ_1 on both sides and rearranging the inequality give us F(W̃^t+1, ξ^t) - F(Z^t) ≤L-2α/2W̃^t+1 - W̃^t_2^2 By (<ref>), we have λξ^t+1_1 + β/2γ^t+1 - ξ^t+1_2^2 + α/2ξ^t+1 - ξ^t_2^2 ≤λξ^t_1 + β/2γ^t+1 - ξ^t_2^2. Adding ℒ̃(W̃^t+1) on both sides and rearranging the inequality give F(Z^t+1) - F(W̃^t+1, ξ^t) ≤ - α/2ξ^t+1 - ξ^t_2^2 Summing up (<ref>) and (<ref>) and rearranging them, we have F(Z^t+1) - F(Z^t) ≤L-2α/2W̃^t+1- W̃^t_2^2 - α/2ξ^t+1 - ξ^t_2^2 ≤L-α/2Z^t+1 - Z^t_2^2. Summing up the inequality for t = 1, …, N-1, we have ∑_t=1^N-1α-L/2Z^t+1 - Z^t_2^2 ≤ F(Z^1) - F(Z^N) ≤ F(Z^1). Because α >L, the left-hand side is nonnegative, so as N →∞, we have (<ref>). Let {Z^t}_t=1^∞ be a sequence generated by Algorithm <ref>. Under Assumption <ref>, for any t ∈ℕ, there exists some w^t+1∈∂ F(Z^t+1) such that w^t+1_2 ≤ (3α + 2L +β) Z^t+1 - Z^t_2. We note that ∇_W ℒ̃(W̃^t+1) ∈∂_W F(Z^t+1), ∇_γℒ̃(W̃^t+1) + β(γ^t+1 - ξ^t+1) ∈∂_γ F(Z^t+1), λ∂_ξξ^t+1_1 -β(γ^t+1 - ξ^t+1) ∈∂_ξF(Z^t+1). By the first-order optimality conditions of (<ref>) and (<ref>), we obtain ∇_W ℒ̃(W̃^t) + α(W^t+1-W^t) = 0, ∇_γℒ̃(W̃^t) + α(γ^t+1 - γ^t) + β(γ^t+1 - ξ^t) = 0, λ∂_ξξ^t+1_1 +α(ξ^t+1 - ξ^t) - β(γ^t+1 - ξ^t+1) ∋ 0. Combining (<ref>) and (<ref>), (<ref>) and (<ref>), and (<ref>) and (<ref>), we obtain ∇_Wℒ̃(W̃^t+1) - ∇_W ℒ̃(W̃^t) - α (W^t+1 - W^t) =w_1^t+1∈∂_W F(Z^t+1), ∇_γℒ̃(W̃^t+1) - ∇_γℒ̃(W̃^t) - α (γ^t+1 - γ^t) - β(ξ^t+1 - ξ^t) =w_2^t+1∈∂_γ F(Z^t+1), -α(ξ^t+1 - ξ^t) =w_3^t+1∈∂_ξF(Z^t+1), where w^t+1=(w_1^t+1, w_2^t+1, w_3^t+1) ∈∂ F(Z^t+1). As a result, by triangle inequality and Lipschitz continuity of ∇ℒ̃, we have w_1^t+1_2 ≤αW^t+1 - W^t_2 + ∇_Wℒ̃(W̃^t+1) - ∇_W ℒ̃(W̃^t)_2 ≤αW^t+1 - W^t + L W̃^t+1 - W̃^t_2 ≤ (α + L) Z^t+1 - Z^t_2, w_2^t+1_2 ≤αγ^t+1 - γ^t_2 + βξ^t+1 - ξ^t_2 + ∇_γℒ̃(W̃^t+1) - ∇_γℒ̃(W̃^t)_2 ≤ (α +L) W̃^t+1 - W̃^t_2 + βξ^t+1 - ξ^t_2≤ (α + β+ L) Z^t+1 - Z^t_2 , and w_3^t+1_2 ≤αξ^t+1 - ξ^t_2 ≤αZ^t+1- Z^t_2 . Therefore, for all t ∈ℕ, we have w^t+1_2 ≤w_1^t+1_2+ w_2^t+1_2+ w_3^t+1_2 ≤ (3α + 2L +β) Z^t+1 - Z^t_2. The result follows from Lemmas <ref>-<ref> combined with <cit.> splncs04
Faster Detours in Undirected Graphs
Shyan Akmal, Virginia Vassilevska Williams, Ryan Williams, Zixuan Xu
arXiv:2307.01781 (cs.DS, cs.DM), 4 July 2023
The problem is a basic path-finding problem: given a graph G on n vertices, with specified nodes s and t, and a positive integer k, the goal is to determine if G has an st-path of length exactly (s,t) + k, where (s,t) is the length of a shortest path from s to t. The problem is -hard when k is part of the input, so researchers have sought efficient parameterized algorithms for this task, running in f(k)(n) time, for f(·) as slow-growing as possible. We present faster algorithms for in undirected graphs, running in 1.853^k(n) randomized and 4.082^k(n) deterministic time. The previous fastest algorithms for this problem took 2.746^k(n) randomized and 6.523^k(n) deterministic time [Bezáková-Curticapean-Dell-Fomin, ICALP 2017]. Our algorithms use the fact that detecting a path of a given length in an undirected graph is easier if we are promised that the path belongs to what we call a “bipartitioned” subgraph, where the nodes are split into two parts and the path must satisfy constraints on those parts. Previously, this idea was used to obtain the fastest known algorithm for finding paths of length k in undirected graphs [Björklund-Husfeldt-Kaski-Koivisto, JCSS 2017], intuitively by looking for paths of length k in randomly bipartitioned subgraphs. Our algorithms for stem from a new application of this idea, which does not involve choosing the bipartitioned subgraphs randomly. Our work has direct implications for the problem, another related path-finding problem. In this problem, we are given the same input as in , but are now tasked with determining if G has an st-path of length at least (s,t)+k. Our results for imply that we can solve in 3.432^k(n) randomized and 16.661^k(n) deterministic time. The previous fastest algorithms for this problem took 7.539^k(n) randomized and 42.549^k(n) deterministic time [Fomin et al., STACS 2022]. § INTRODUCTION The problem is a well-studied task in computer science: Given: k ∈ℕ^+, a graph G, nodes s and t. Determine: Does G contain a simple path of length k from s to t? For graphs G with n nodes, this problem can be easily solved in O(kn^k) time by enumerating all sequences of k vertices. In the 1980s, Monien <cit.> showed that the problem is actually fixed-parameter tractable () in k, presenting a k!(n) time algorithm solving . Since then, significant research has gone into obtaining faster algorithms for , with better dependence on k (see <cit.> for an overview of the many results in this line of work).
This research culminated in the work of Koutis and Williams <cit.>, who showed that can be solved in 2^k(n) (randomized) time, and Björklund, Husfeldt, Kaski, and Koivisto <cit.>, who proved that in undirected graphs, can be solved even faster in 1.657^k(n) (randomized) time. Throughout this paper, we assume that algorithms are randomized (and return correct answers with high probability in the stated time bounds), unless otherwise specified. The problem is a parameterized version of the 𝖭𝖯-complete problem, but it is not the only natural parameterization. Various other parameterizations of have been proposed and studied, which we consider in the present paper. * Finding a path of length at least k. Instead of looking for a path of length exactly k from s to t, one can try to determine the existence of an st-path of length at least k: Given: k ∈ℕ^+, a graph G, nodes s and t. Determine: Does G contain a simple path of length at least k from s to t? Observe that in the problem, the length of a solution path is not necessarily bounded as a function of k. However, it is known that is also : work of Zehavi <cit.> and Fomin, Lokshtanov, Panolan, and Saurabh <cit.> implies that can be solved in 4^k(n) time. More recently, Eiben, Koana, and Wahlström <cit.> proved that over undirected graphs, can be solved in 1.657^k(n) time, matching the fastest known runtime for . * Finding an st-path longer than a polynomial-time guarantee. Another parameterization for is motivated by the fact that one can find a shortest path from s to t in polynomial time. If the shortest path distance (s,t) happens to already be long, then it is actually “easy” to find a long path from s to t. Therefore, it is natural to consider the parameterized complexity of searching for an st-path that is k edges longer than the shortest path length from s to t. Our work focuses on these so-called “detour” variants of the path detection problems discussed above. (a.k.a. ) Given: k ∈ℕ^+, a graph G, nodes s and t. Determine: Does G contain a simple path of length (s,t) + k from s to t? Since efficiently reduces to solving a single instance of ,[Given an instance of , add an edge from s to t. Then a solution to in this new graph corresponds to a solution to in the original graph.] the problem is at least as hard as the classical problem. The problem was introduced by Bezáková, Curticapean, Dell, and Fomin <cit.>, who showed that it can be solved by calling polynomially many instances of , for path lengths ℓ≤ 2k+1. Employing the fastest known algorithms, this implies that can be solved in 2^2k(n) = 4^k(n) time in general, and even faster over undirected graphs in 1.657^2k(n) ≤ 2.746^k(n) time. The two parameterizations above can be combined to produce the following problem: Given: k ∈ℕ^+, a graph G, nodes s and t. Determine: Does G contain a simple path of length at least (s,t) + k from s to t? Observe that is at least as hard as . Unlike the problems discussed above, over directed graphs is not known to be : in fact, it remains open whether is in 𝖯 even for the special case of k=1! However, in undirected graphs, Fomin, Golovach, Lochet, Sagunov, Simonov, and Saurabh <cit.> showed that can be reduced to solving for p≤ 2k, and then solving polynomially many instances of , for ℓ≤ 3k/2. Employing the fastest known algorithms for and as subroutines, this implies that can be solved over undirected graphs in 7.539^k(n) time. 
The algorithms for and discussed above are significantly slower than the fastest known algorithms for the analogous and problems. This motivates the questions: can be solved as quickly as , and can be solved as quickly as ? Given the extensive and influential line of work that has gone into finding faster algorithms for and , obtaining faster algorithms for these detour problems as well is an interesting open problem in parameterized complexity and exact algorithms. §.§ Our Results The main result of our work is a faster algorithm for on undirected graphs. theoremdetourthm In undirected graphs, can be solved in 1.853^k(n) time. This marks a significant improvement over the previous fastest 2.746^k(n) time algorithm for (and shows, for example, that this problem can be solved in faster than 2^k(n) time, which is often a barrier for parameterized problems). Since the fastest known algorithms for over undirected graphs have a bottleneck of solving , <Ref> implies the following result. theoremlongestdetour In undirected graphs, can be solved in 3.432^k(n) time. Again, this is a significant improvement over the previous fastest algorithm for on undirected graphs, which ran in 7.539^k(n) time. Our algorithm for Theorem <ref> applies the fact that is easier to solve on undirected graphs which have a prescribed vertex partition into two sets, where we constrain the path to contain a particular number of nodes from one set, and a particular number of edges whose vertices are in the other set. Formally, we consider the problem: given a graph G on n nodes, whose vertices are partitioned into parts V_1 and V_2, with distinguished vertices s and t, the goal is to determine if G contains a simple path from s to t of length ℓ, which uses exactly k_1 vertices from V_1, and exactly ℓ_2 edges whose endpoints are both in V_2. A careful application of the following result from <cit.> is the main source of the speed-up in our algorithm for . [<cit.>]lemmabipartite Let ℓ, k_1, ℓ_2 be nonnegative integers satisfying the inequality ℓ+1 ≥ k_1 + 2 ℓ_2. Then over undirected graphs, the problem can be solved in 2^k_1+ℓ_2(n) time. Although this “Bipartitioned Path” problem may appear esoteric at first, Lemma <ref> plays a crucial role in obtaining the fastest known algorithm for in undirected graphs <cit.>, and an analogue of <Ref> for paths of length at least k is the basis for the fastest known algorithm for in undirected graphs <cit.>. For completeness, we include a proof of <Ref> in <Ref>. In <Ref>, we provide an intuitive overview of how <Ref> helps us obtain our algorithm for . The fastest known algorithms for the path and detour problems discussed above all use randomness. Researchers are also interested in obtaining fast deterministic algorithms for these problems. We note that a simplified version of our algorithm for implies faster deterministic algorithms for these detour problems over undirected graphs. theoremdetdetour The problem can be solved over undirected graphs by a deterministic algorithm in 4.082^k(n) time. Prior to this work, the fastest known deterministic algorithm for on undirected graphs ran in 6.523^k(n) time <cit.>. theoremdetlongestdetour The problem can be solved over undirected graphs by a deterministic algorithm in 16.661^k(n) time. Prior to this work, the fastest known deterministic algorithm for on undirected graphs ran in 42.549^k(n) time <cit.>. 
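The constants in the theorems above can be double-checked with a few lines of arithmetic; the exponent combinations 2k and 3k/2 used below come from the reductions described in the Applications section, so this is only a numerical sanity check, not a derivation.

```python
# Numerical sanity check of the stated runtime bases.
print(1.8526 ** 2)     # ~3.432,  matching the 3.432^k randomized bound
print(1.657 ** 1.5)    # ~2.133,  the smaller term in that bound
print(2.554 ** 1.5)    # ~4.082,  matching the 4.082^k deterministic bound
print(4.0817 ** 2)     # ~16.66,  the dominant term in the 16.661^k deterministic bound
print(4.884 ** 1.5)    # ~10.79,  the smaller term in that bound
```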
In summary, we obtain new randomized and deterministic algorithms for and over undirected graphs, whose runtimes present significant advances over what was previously known for these problems. §.§ Organization The remainder of the main body of this paper presents our new algorithms . In <Ref> we include a thorough discussion of additional related work, and in <Ref> we include a proof of <Ref>. In <Ref>, we introduce relevant notation, assumptions, and definitions concerning graphs. In <Ref>, we provide an overview of our algorithm for . In <Ref>, we present the details of our algorithm, and prove its correctness. The runtime analysis for our algorithm (and thus the formal proofs of <Ref>, given correctness of our algorithm) are presented in <Ref>. In <Ref>, we highlight some open problems. § PRELIMINARIES Given positive integers a and b, we let [a] = 1, 2, …, a, and [a,b] = a, a+1, …, b. Given an integer a and a set of integers S, we define a + S = a + s| s∈ S. Throughout, we let G denote the input graph. We assume that G is undirected, has vertex set V with |V|=n, and, without loss of generality, is connected.[If G were not connected, we could replace it with the connected component containing s, and solve the detour problems on this smaller graph instead.] Throughout, we let s and t denote the two distinguished vertices that come as part of the input to the problem. Given a vertex u, we let d(u) = (s,u) denote its distance from s. This distance is well-defined, since G is connected. Given a path P containing vertices u and v, we let P[u,v] denote the subpath from u to v on P. Given an edge e = (u,v) from u to v, we say e is forward if d(v) = d(u) + 1, backwards if d(v) = d(u)-1, and stable if d(v) = d(u). In an undirected graph, by triangle inequality and symmetry of distance, adjacent vertices u and v have |d(u)-d(v)|≤ 1, so every edge in a path falls into one of these three categories. Given two vertices u,v∈ V, let uv denote the induced subgraph of G on the set u∪w∈ V| d(u) < d(w)≤ d(v). Let u denote the induced subgraph of G on the set u∪w∈ V| d(u) < d(w). Note that for every u and v, the subgraphs uv and v overlap at vertex v, but are disjoint otherwise. § TECHNICAL OVERVIEW In this section, we provide an overview of how our algorithm works. Our starting point is the algorithm for this problem presented in <cit.>, which we review in <Ref>. Then in <Ref> we review how the algorithm from <Ref> for has previously been used to obtain the fastest known algorithm for in undirected graphs. With this context established, in <Ref> we outline how we combine the techniques from <Ref> with new ideas to prove <Ref>. §.§ Previous Detour Algorithm The previous algorithm for from <cit.> performs dynamic programming over nodes in the graph, starting from t and moving to vertices closer to s. In the dynamic program, for each vertex x with d(x)≤ d(t), we compute all offsets r≤ k such that there is a path of length d(t)-d(x) + r from x to t in the subgraph x. Determining this set of offsets for x=s solves the problem, since s = G. If d(t)-d(x)≤ k, we can find all such offsets just by solving for ℓ≤ 2k. So, suppose we are given a vertex x with d(t)-d(x) ≥ k+1 and an offset r≤ k, and wish to determine if there is a path of length d(t)-d(x) + r from x to t in x. 
If there is such a path P, then <cit.> argues that P can always be split in as depicted in <Ref>: for some vertex y with d(y) > d(x), we can decompose P into two subpaths: * a subpath A from x to y of length at most 2k+1, such that all internal vertices v in A satisfy d(x) < d(v) < d(y), and * a subpath B from y to t in y of length at most d(t)-d(y)+k. We can always split a path P in this manner because P has length at most d(t)-d(x)+k, so at most k edges in P are not forward edges. Intuitively, as we follow the vertices along the path P, the distance of the current vertex from s can decrease or stay the same at most k times, and so P cannot contain too many vertices which are the same distance from s. This allows one to argue that there is a vertex y with d(y)≤ d(x)+k+1 such that all internal vertices v of the subpath P[x,y] have d(x) < d(v) < d(y). Since d(y)≤ d(x)+k+1 and P has length at most d(t)-d(x)+k, it turns out that P[x,y] has length at most 2k+1. Note that since y only contains vertices v with d(v) ≥ y, the paths A and B must be disjoint. We can find the length of A using an algorithm for , and the length of B will have already been computed in our dynamic program (since y is further from s than x). So, by trying out all possible y, finding the possible lengths for subpaths A and B, and then adding up these lengths, we can get all possible lengths for P in the dynamic program, and solve . §.§ Previous Path Algorithm The fastest known algorithm for in undirected graphs goes through the problem. Recall that in this problem, we are given a bipartition V_1⊔ V_2 of the vertices in the graph, and want to find a path of length k from s to t, which uses k_1 vertices in V_1 and ℓ_2 edges with both endpoints in V_2. The authors of <cit.> showed that can be solved in 2^k_1 + ℓ_2(n) time over undirected graphs. Why does this imply a faster algorithm for in undirected graphs? Well, suppose the input graph contains a path P of length k from s to t. Consider a uniform random bipartition of the vertices of the graph into parts V_1 and V_2. We expect (k+1)/2 vertices of P to be in V_1, and k/4 edges of P to have both endpoints in V_2. In fact, this holds with constant probability, so we can solve by solving in the randomly partitioned graph. By <Ref> this yields a 2^3k/4(n) ≈ 1.682^k(n) time algorithm for . We can obtain a faster algorithm using the following modification: take several uniform random bipartitions of the graph, and solve separately for each bipartition, for k_1 + ℓ_2 ≤ 3(1-ε)k/4, where ε > 0 is some constant. The number of bipartitions used is some function of k and ε, set so that with high probability at least one of the partitions V_1⊔ V_2 has the property that the total number of vertices of P in V_1 and number of edges of P with both endpoints in V_2 is at most 3(1-ε)k/4. Setting the parameter ε optimally yields a 1.657^k(n) time algorithm for <cit.>. §.§ Our Improvement As in the previous approach outlined in <Ref>, our algorithm for performs dynamic programming over vertices in the graph, starting at t, and then moving to vertices closer to s. For each vertex x with d(x)≤ d(t), we compute all offsets r≤ k such that there is a path of length d(t)-d(x) + r from x to t in the subgraph x. Obtaining this information for x=s and r=k solves the problem. Given a vertex x and offset r≤ k, we wish to determine if G contains a path of length d(t)-d(x) + r from x to t in x. Suppose there is such a path P. 
If d(t)-d(x) is small enough, it turns out we can find P by solving for small values of p. So, for the purpose of this overview, suppose that d(t)-d(x) is sufficiently large. In this case, as outlined in <Ref>, previous work showed that P can be split into two subpaths A and B contained in disjoint subgraphs, such that A has length at most 2k+1. This splitting argument holds even for directed graphs. Our first improvement comes from the observation that in undirected graphs, we can decompose the path P with a smaller prefix: as depicted in <Ref>, there must exist a vertex y with d(y) > d(x), such that P splits into a subpath A from x to y in xy of length at most 3k/2+1, and a path B from y to t in y of length at most d(t)-d(y)+k. We can find the length of A by solving , and the length of B will already have been computed by dynamic programming, since d(y) > d(x). This split is possible because any consecutive vertices u and v in P have |d(u)-d(v)|≤ 1 (this is true for undirected graphs, but is not true in general for directed graphs). Since P has length at most d(t)-d(x)+k, it turns out that P has at most k/2 backwards edges. This lets us argue that there exists a vertex y with d(y)≤ d(x)+k/2+1 such that P[x,y] is contained in xy and P[y,t] is contained in y. Finally, A = P[x,y] should have length at most k more than d(y)-d(x), which means it has length at most 3k/2+1. This simple modification already yields a faster algorithm[In fact, this observation already yields the fast deterministic algorithms for <Ref>.] for . We get further improvements by performing casework on the number of stable edges in P (recall that an edge (u,v) is stable if both its endpoints have the same distance d(u)=d(v) from s). First, suppose P has at least m stable edges, for some parameter m. Since P has length at most d(t)-d(x)+k, we can argue that P has at most (k-m)/2 backwards edges. With this better upper bound on the number of backwards edges, we can improve the splitting argument and show that P decomposes into subpaths A and B, such that the length of A is at most (3k-m)/2, and the length of B was already computed by our dynamic program. It then suffices to solve , which yields a speed-up whenever m≥Ω(k). Otherwise, P has at most m stable edges. In this case, we consider the bipartition V_1⊔ V_2 of the vertex set, where V_1 has all vertices at an odd distance from s, and V_2 has all vertices with even distance from s. Since G is undirected, consecutive vertices on the path P differ in their distance from s by at most one. In particular, all forward and backward edges in P cross between the parts V_1 and V_2. Only the stable edges can contribute to edges with both endpoints in V_2. Since we assumed that the number of stable edges is small, it turns out we can find the length of the subpath A of P by solving with respect to the given bipartition, for some ℓ_2 which is very small. In particular, this approach computes the length of A faster than naively solving . By setting an appropriate threshold for m, we can minimize the runtimes of the algorithm in both of the above cases, and establish <Ref>. So in summary, our faster algorithms come from two main sources of improvement: using the structure of shortest paths in undirected graphs to get a better “path-splitting” argument in the dynamic program from , and cleverly applying the fast algorithm from <Ref> for with carefully chosen bipartitions. We note that our application of is qualitatively different from its uses in previous work. 
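To make the distance-based quantities used throughout this overview concrete, the following sketch computes the distances d(u) by breadth-first search, separates stable edges from forward/backward ones, and builds the parity bipartition V_1 (odd distance) and V_2 (even distance). The function names and the adjacency-list input format are our own choices, and the graph is assumed connected, as in the preliminaries.

```python
from collections import deque

def bfs_distances(adj, s):
    # Unweighted single-source distances d(u) from s; adj maps a vertex to its neighbors.
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def parity_bipartition(adj, s):
    d = bfs_distances(adj, s)
    stable, crossing = [], []   # crossing = forward or backward, depending on traversal direction
    for u in adj:
        for v in adj[u]:
            if u < v:           # count each undirected edge once (assumes orderable labels)
                (stable if d[u] == d[v] else crossing).append((u, v))
    V1 = {u for u in d if d[u] % 2 == 1}    # odd distance from s
    V2 = {u for u in d if d[u] % 2 == 0}    # even distance from s
    # In an undirected graph |d(u) - d(v)| <= 1 on every edge, so every non-stable
    # edge joins V1 and V2; only stable edges can have both endpoints inside V2.
    assert all((u in V1) != (v in V1) for (u, v) in crossing)
    return d, stable, crossing, V1, V2
```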
As discussed in <Ref>, previous algorithms for work by solving instances of , and as described in <Ref>, the fastest algorithms for on undirected graphs work by reduction to various instances of . Thus, previous algorithms for on undirected graphs implicitly rely on the fast algorithm for , applied to random bipartitions of the input graph. We obtain a faster algorithm for arguing that in certain cases, we can “beat randomness,” by constructing bipartitions which leverage structural information about the graph (namely, whether the shortest path distance from s to a given vertex is even or odd). § DETOUR ALGORITHM In this section, we present <Ref>, our new algorithm for the problem. As mentioned in the previous section, our algorithm behaves differently depending on the number of stable edges that a potential solution path contains. In particular, the algorithm depends on a parameter α∈ (0,1), which determines the threshold for what counts as “many” stable edges. Later, we will set α to optimize the runtime of <Ref>. Certain lines of <Ref> have comments indicating case numbers, which are explained in <ref>. Our algorithm computes a set L(x) for each vertex x in the graph, corresponding to the possible lengths of potential subpaths from x to t of a solution path from s to t. In step <ref> of <Ref>, we compute L(x) for all x that are “far” from s, by solving instances of for ℓ≤ (3-α)k/2. Starting in step <ref>, we compute L(x) for vertices x closer to s, in terms of the previously computed sets L(y) for vertices y further from s. In steps <ref> through <ref>, we compute some lengths in L(x) by solving instances of for appropriate a, k_1, ℓ_2 values, and in <ref> and <ref> we compute the remaining lengths in L(x) by solving for a≤ (3-α)k/2 + 1. §.§ Correctness In this section, we show that <Ref> correctly solves the problem for any choice of α∈ (0,1). The main technical part of the proof lies in inductively showing that every possible solution path from s to t will be considered by the algorithm and its length will be included in the set L(s). In <Ref>, we try out values of the variable m from 0 to k, and execute differently depending on how m compares to α k. This is interpreted as follows: suppose there is a solution path P from x to t, then m corresponds to a guess of the number of stable edges in P. In Case 1, we guess that P has few stable edges m < α k which corresponds to steps <ref> to <ref>. Under Case 1, there are two possible structures a potential solution path might take on depending on how d(x) compares to d(t). We refer to the case where d(x) - d(t) is small as Case 1(a) considered by step <ref>, and the case where d(x) - d(t) is large as Case 1(b) considered by step <ref>. In Case 2, we guess that m ≥α k, so P has many stable edges, which corresponds to steps <ref> to <ref>. These cases are also formally defined in our proof of correctness. For any fixed α∈ (0,1), <Ref> correctly solves the problem. We prove that upon halting, each set L(x) computed by <Ref> has the property that for all integers ℓ∈ [d(t)-d(x), d(t)-d(x)+k], we have ℓ∈ L(x) if and only if there is a path of length ℓ from x to t in x. If this property holds, then step <ref> of <Ref> returns the correct answer to the problem, since (s,t) + k is in L(s) if and only if there is a path from s to t of length (s,t) + k in s = G. So, it suffices to show that <ref> holds for all vertices x. We prove this result by induction on the distance of x from s in the graph. 
Base case: For the base case, suppose x is a vertex with d(x)∈ [d(t)-(1-α)k/2, d(t)]. Then L(x) is computed in step <ref> of <Ref>. We now verify that <ref> holds. First, suppose ℓ∈ L(x). Then, ℓ must be the length of some path from x to t in x by design. Conversely, suppose we have a path P from x to t in x of some length ℓ≤ d(t)-d(x) + k. Then by the assumption on x from <ref> in this case, we have ℓ≤ d(t) - d(x) + k ≤ (1-α)k/2 + k = (3-α)k/2 so step <ref> of <Ref> correctly includes ℓ in L(x). Thus <Ref> holds for all vertices x satisfying <ref>. Inductive case: For the inductive step, suppose x is a vertex with d(x) ≤ d(t) - (1-α)k/2 - 1. We may inductively assume that we have computed sets L(y) satisfying <ref>, for all vertices y with d(y) > d(x). Suppose ℓ∈ L(x) at the end of <Ref>. Then either ℓ was added to L(x) in step <ref>, or ℓ was added to L(x) in steps <ref> or <ref> of <Ref>. In the former case, ℓ is the length of a path from x to t in x by design. In the latter cases, we have ℓ = a + b, where a is the length of some path from x to y (for some vertex y with d(y) > d(x)) in xy, and (by the inductive hypothesis) b is the length of some path from y to t in y. Since xy and y intersect only at y, the union of these paths is a path from x to t in t. So, every integer in L(x) is a valid length of a path from x to t in x as desired. Conversely, suppose there is a path P from x to t in x of length ℓ∈ [d(t)-d(x), d(t)-d(x)+k]. We prove that ℓ appears in L(x). To do this, we will analyze the number of forward, backward, and stable edges appearing in P. Note that P has at least d(t)-d(x) forward edges, since P begins at a vertex at distance d(x) from s, ends at a vertex at distance d(t) from s, and only the forward edges allow us to move to vertices further from s. Let m denote the number of stable edges in P. We have m≤ k, since the length of P is at most d(t)-d(x) + k, and P has at least d(t)-d(x) forward edges. Suppose d(x) ≤ d(t) - (k-m)/2 - 1. Then P contains a vertex y such that * d(y)∈ [d(x)+1,d(x) + (k-m)/2+1], * every vertex u∈ P[y,t] with u≠ y has d(u) > d(y), and * every vertex v∈ P[x,y] has d(v)≤ d(y). For each i∈ [(k-m)/2+1], let z_i denote the last vertex on P satisfying d(z_i) = d(x) + i. These vertices exist because we are assuming that d(x)≤ d(t) - (k-m)/2-1, and P must contain vertices v with d(v) = d for every d∈ [d(x), d(t)]. By definition, each z_i satisfies conditions 1 and 2 from the claim. If some z_i satisfies condition 3 as well, then the claim is true. So, suppose that none of the z_i satisfy condition 3. This means that for each index i, the subpath P[x,z_i] contains a vertex u with d(u) > d(z_i). Consecutive vertices in P differ in their distance from s by at most one, so P[x,z_i] must contain an edge e = (v,w) such that d(v) = d(w)+1 and d(w) = d(z_i) = d(x) + i. That is, P contains a backwards edge from a vertex at distance i+1 from s to a vertex at distance i from s, as depicted in <Ref>. Note that z_1, z_2, …, z_(k-m)/2+1 occur on P in the listed order. This is because d(z_1) < d(z_2) < … < d(z_(k-m)/2+1) and each z_i satisfies condition 1 from the claim. Combined with the discussion from the previous paragraph, this means that P contains at least (k-m)/2+1 backwards edges. We now argue that this violates the assumption on the length of P. Let f and b denote the number of forward and backwards edges in P respectively. Since P starts at x and ends at t, we have f - b = d(t) - d(x), which implies that f = d(t) - d(x) + b. 
Then the total length of P is f + b + m = d(t) - d(x) + m + 2b by <ref>. However, since P has at least (k-m)/2+1 backwards edges, this length satisfies d(t) - d(x) + m + 2b≥ d(t) - d(x) + m + 2(k-m)/2 + 1 > d(t) - d(x) + k which contradicts the fact that the length ℓ of P satisfies <ref>. Thus our assumption was incorrect, and one of the z_i satisfies all three conditions from the claim, as desired. We now perform casework on the number of stable edges m in P. We start with Case 2 from step <ref> of <Ref>, since this is the easiest case to analyze. *Case 2: Many Stable Edges (m ≥α k) Suppose m ≥α k. In this case, by <ref> we have d(x) ≤ d(t) - (1-α)k/2 - 1 ≤ d(t) - (k-m)/2 - 1 So, by <Ref>, there exists a vertex y in P satisfying the three conditions of <Ref>. By condition 3 from <Ref>, the subpath A = P[x,y] is contained in xy. By condition 2 from <Ref>, the subpath B = P[y,t] is contained in y. Let a denote the length of A, and b denote the length of B. Since A has length at least d(y) - d(x), and P has length at most d(t)-d(x) + k by <ref>, we know that the length B satisfies b ≤ d(t) - d(y) + k. By the inductive hypothesis, L(y) satisfies <ref>, so b∈ L(y). Similar to the reasoning that established <ref>, we can prove that a ≤ d(y) - d(x) + k. By condition 1 of <Ref>, we know that d(y)≤ d(x) + (k-m)/2 + 1. Since m ≥α k, this implies that d(y)≤ d(x) + (1-α)k/2 + 1. Substituting this into <ref> yields a≤ (1-α)k/2 + 1 + k = (3-α)k/2 + 1. Thus, the length a of A will be found in step <ref> of <Ref>. As mentioned before, b∈ L(y). Thus, ℓ = a+b∈ (a + L(y)) is correctly added to the set L(x) in step <ref> of <Ref>, which proves the desired result in this case. *Case 1: Few Stable Edges (m < α k) If we do not fall into Case 2, we must have m < α k. Recall that in step <ref> of <Ref>, we defined V_1 = u| d(u) is odd and V_2 = u| d(u) is even. We want to argue that most edges in path P cross the bipartition V_1⊔ V_2. To that end, the following claim will be helpful. claimlabelbounding Let Q be a path of length q, with at most m stable edges. Let k_1 denote the number of vertices of Q in V_1, and let ℓ_2 denote the number of edges in Q with both endpoints in V_2. Then we have k_1 + ℓ_2 ≤ (q + m + 1)/2. Let k_2 denote the number of vertices of Q in V_2. Consider the cycle C formed by taking Q together with an additional edge between its endpoints (this new edge is imagined for the purpose of argument, and does not change the definition of V_1 and V_2). Let q_1, q_2, and q_cross denote the number of edges of C with both endpoints in V_1, both endpoints in V_2, and endpoints in both V_1 and V_2 respectively. We have 2k_1 = 2q_1 + q_cross because both sides of the above equation count the number of pairs (u, e) such that u is a vertex in C∩ V_1, and e is an edge in C incident to u. A symmetric argument implies that 2q_2 + q_cross = 2k_2. Adding <Ref> and <Ref> together and simplifying yields k_1 + q_2 = k_2 + q_1. This implies that k_1 + q_2 = k_1 + k_2 + q_1 + q_2/2. Since C is Q with one additional edge, we have ℓ_2 ≤ q_2. So the above equation implies that k_1 + ℓ_2 ≤k_1 + k_2 + q_1 + q_2/2. We have k_1 + k_2 = q+1 since the total number of vertices in Q must be one more than its length. By assumption on the number of stable edges in Q, we have q_1 + q_2 ≤ m. Substituting <ref> and <ref> into the right hand side of <ref> yields k_1 + ℓ_2 ≤ (q + m + 1)/2 which proves the desired result. 
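As a quick numerical sanity check of the bound just proved, one can count k_1, ℓ_2, the number of stable edges, and the path length for any concrete distance-labeled path. The short script below does this; the example distance sequence is ours, and is assumed to come from a path in an undirected graph, so consecutive distances differ by at most one.

```python
def check_claim(dists):
    # dists[i] = d(u_i) for the consecutive vertices u_0, ..., u_q of a path.
    q = len(dists) - 1                                       # path length (number of edges)
    k1 = sum(1 for d in dists if d % 2 == 1)                 # vertices in V1 (odd distance)
    l2 = sum(1 for a, b in zip(dists, dists[1:])
             if a % 2 == 0 and b % 2 == 0)                   # edges with both endpoints in V2
    m = sum(1 for a, b in zip(dists, dists[1:]) if a == b)   # stable edges
    assert k1 + l2 <= (q + m + 1) / 2, (k1, l2, q, m)
    return k1, l2, q, m

# A path whose vertex distances from s are 2,3,3,4,5,5,4,5 (two stable edges, one backward edge):
print(check_claim([2, 3, 3, 4, 5, 5, 4, 5]))   # -> (5, 0, 7, 2); indeed 5 <= (7 + 2 + 1)/2
```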
With <Ref> established, we are now ready to analyze the two subcases under Case 1, based on the relative distances of x and t from s. *Case 1(a): d(t) - d(x) is small Suppose that d(x) ∈ [d(t)-(k-m)/2, d(t)]. In this case, <ref> implies that P has length ℓ≤ d(t) - d(x) + k ≤ (3k-m)/2. Let k_1 denote the number of vertices of P in V_1, and k_2 denote the number of edges in P with both endpoints in V_2. Then by setting Q = P and q = ℓ in <Ref>, we have k_1 + ℓ_2 ≤ (ℓ + m + 1)/2 ≤ (3k+m+2)/4. Also, note that P has length ℓ≤ 2 k_1 + ℓ_2, since 2k_1 is greater than or equal to the number of edges in P incident to a vertex in V_1. This observation, together with <ref>, shows that in this case, the length ℓ is correctly included in L(x) in step <ref> of <Ref>. *Case 1(b): d(t) - d(x) is large If we do not fall into Case 1(a), it means that d(x)≤ d(t) - (k-m)/2 - 1. Thus, by <Ref>, there exists a vertex y in P satisfying the three conditions of <Ref>. The proof that ℓ∈ L(x) in this case is essentially a combination of the proofs from Case 2 and Case 1(a). As in Case 2, by condition 3 from <Ref>, the subpath A = P[x,y] is contained in xy. By condition 2 from <Ref>, the subpath B = P[y,t] is contained in y. Let a and b denote the lengths of paths A and B respectively. Reasoning identical to the arguments which established <ref> prove that in this case we also have b≤ d(t) - d(y) + k and a≤ d(y) - d(x) + k. Condition 1 of <Ref> implies that d(y)≤ d(x) + (k-m)/2 + 1. Substituting this into <ref> implies that a≤ (3k-m)/2 + 1. Let k_1 denote the number of vertices of A in V_1, and let ℓ_2 denote the number of edges in A with both endpoints in V_2. Then by setting Q = A and q = a in <Ref>, we have k_1 + ℓ_2 ≤ (a + m + 1)/2 ≤ (3k + m + 2)/4. Also, we know that a≤ 2k_1 + ℓ_2, because 2k_1 is greater than or equal to the number of edges in A incident to a vertex in V_1. Combining this observation with <ref>, we see that the length a is indeed computed in step <ref> of <Ref>. By the inductive hypothesis (<ref>) and <ref>, we know that b∈ L(y). Thus we have ℓ = a+b ∈ (a+L(y)), so in this case, ℓ is correctly included in L(x) in step <ref> of <Ref>. This completes the induction, and proves that <ref> holds for all vertices x in the graph. In particular, <ref> holds for x equal to s. This implies that step <ref> of <Ref> returns the correct answer to the algorithm. § APPLICATIONS In this section, we present consequences of our new algorithm for from <Ref>. * By <Ref>, <Ref> correctly solves , for any value α∈ (0,1), What is the runtime of <Ref>? Well, steps <ref> and <ref> of <Ref> involve solving polynomially many instances of , for ℓ≤ (3-α)k/2 + 1. Using the fastest known algorithm for in undirected graphs <cit.>, these steps take 1.657^(3-α)k/2(n) time. The remaining computationally intensive steps of <Ref> occur in steps <ref> and <ref>, which can be implemented by solving (n) instances of , for k_1 + ℓ_2 < (3k + α k + 2)/4. By <Ref>, these steps then take 2^(3+α)k/4(n) time overall. Then by setting α = 0.55814 to balance the above runtimes, we see that we can solve over undirected graphs in 1.657^(3-α)k/2 + 2^(3+α)k/4(n) ≤ 1.8526^k(n) time, as desired. * The proof of <cit.> shows that in undirected graphs reduces, in polynomial time, to solving for all p≤ 2k and (n) instances of on graphs with at most n nodes. The proof of <Ref> implies that can be solved over undirected graphs in 1.8526^k(n) time. Previous work in <cit.> shows that can be solved over undirected graphs in 1.657^k(n) time. 
Combining these results together with the above discussion shows that can be solved over undirected graphs in 1.8526^2k + 1.657^3k/2(n) ≤ 3.432^k(n) time, as desired. * By <Ref>, we can solve over an undirected graph by running <Ref> with parameter α = 0. When α = 0 in <Ref>, steps <ref>, <ref>, <ref> never occur. In this case, the algorithm only needs to solve (n) instances of , for ℓ≤ 3k/2 + 1, in steps <ref> and <ref>. Since can be solved deterministically in 2.554^k(n) time <cit.>, this means that we can solve deterministically in 2.554^3k/2(n) ≤ 4.0817^k(n) time, as desired. * The proof of <cit.> shows that in undirected graphs reduces, in deterministic polynomial time, to solving for p≤ 2k, and (n) instances of on graphs with at most n nodes. The proof of <Ref> implies that can be solved over undirected graphs deterministically in 4.0817^k(n) time. Previous work <cit.> shows that can be solved deterministically in 4.884^k(n) time. Combining these results together with the above discussion shows that can be solved over undirected graphs deterministically in 4.0817^2k + 4.884^3k/2(n)≤ 16.661^k(n) time, as desired. § CONCLUSION In this paper, we obtained faster algorithms for and over undirected graphs. However, many mysteries remain surrounding the true time complexity of these problems. We highlight some open problems of interest, relevant to our work. * The most pertinent question: what is the true parameterized time complexity of and ? In particular, could it be the case that can be solved as quickly as , and can be solved as quickly as ? No known conditional lower bounds rule out these possibilities. * The current fastest algorithm for in directed graphs has a bottleneck of solving . The current fastest algorithm for in directed graphs has a bottleneck of solving . Similarly, the fastest known algorithm[In fact, even the recent alternate algorithm of <cit.> for requires solving first.] for in undirected graphs requires first solving . Is this parameter blow-up necessary? Could it be possible to solve these harder problems with parameter k faster than solving these easier problems with parameter 2k? * The speed-up in our results crucially uses a fast algorithm for the problem in undirected graphs. In directed graphs no (2-ε)^ℓ(n) time algorithm appears to be known for this problem, for any constant ε > 0 and interesting range of parameters k_1 and ℓ_2. Such improvements could yield faster algorithms for in directed graphs. Can we get such an improvement? Also of interest: can we get a faster deterministic algorithm for ? * An easier version of the previous question, also raised in <cit.>: can we solve in directed bipartite graphs in (2-ε)^k(n) time, for some constant ε > 0? In the unparameterized setting, the ( for k=n) problem admits several distinct algorithms running in (2-ε)^n(n) time in directed bipartite graphs. Specifically, <cit.> shows in directed bipartite graphs can be solved in 1.888^n(n) time, and <cit.> uses very different methods to solve this problem even faster in 3^n/2(n) time.[It is also known that sufficient improvements to algorithms for multiplying two n× n matrices together would imply that even the weighted version of in directed bipartite graphs can be solved in (2-ε)^n(n) time, for some constant ε > 0 <cit.>.] We conjecture that the same speed-up is possible for , so that this problem can be solved over directed bipartite graphs in 3^k/2(n) time. § ADDITIONAL RELATED WORK *Detours in Directed Graphs The problem is not known to be over directed graphs. 
However, this problem has been proven to be for certain restricted classes of graphs. For example, the algorithm of <cit.> shows that is on any class 𝒢 of graphs where the [For any positive integer p, in the problem, we are given a graph along with source vertices s_1, …, s_p and target vertices t_1, …, t_p, and tasked with finding disjoint paths from s_i to t_i for each index i. For constant p, this problem can be solved in polynomial time over undirected graphs. Already for p=2, this problem is -hard over general directed graphs.] problem can be solved in polynomial time. This implies, for example, that is over directed planar graphs (see also <cit.>, which presents a more direct argument showing is in directed planar graphs). More recently, <cit.> showed that is on any class 𝒢 of graphs where the problem can be solved in polynomial time. *Deterministic Algorithms There has been significant research into obtaining fast deterministic algorithms for path and detour problems. The fastest known deterministic algorithm for runs in 2.554^k(n) time <cit.>, and the fastest known deterministic algorithm for runs in 4.884^k(n) time <cit.> (the runtimes reported in <Ref> for these problems come from randomized algorithms). The work of <cit.> implies that the fastest known deterministic algorithm for runs in 6.523^k(n) time <cit.>. Interestingly, for the , , and problems, no faster deterministic algorithms are known for the special case of undirected graphs. The work of <cit.> implies that the fastest known deterministic algorithm for in undirected graphs runs in 42.549^k(n) time. *Above-Guarantee Parameterization The study of the and problems belongs to a large subarea of parameterized algorithms which focuses on “above-guarantee parameterizations.” In these problems, we are given an input which is guaranteed to contain a structure of some size σ, and our task is to determine if the input contains a similar structure of size k “more than” σ (the definition of this increase in size depending on the problem of interest). In the detour problems we discuss, we are guaranteed a path of length (s,t) from s to t. We refer the reader to <cit.> for an accessible survey of results and open problems in this area. *Conditional Lower Bounds In this section, we first define some problems, recall popular conjectures concerning the exact time complexity of those problems, and then state implications of those conjectures for the exact time complexity of variants of the problem. In the problem, we are given a φ (i.e., a Boolean formula which can be written as a conjunction of clauses, where each clause is a disjunction of at most three variables or their negations) over n variables, and tasked with determining if φ is satisfiable (i.e, some assignment to the variables makes every clause in φ evaluate to true). The posits that there exists a constant δ > 0 such that cannot be solved in 2^δ n(n) time. In the problem, we are given a set U of n elements, a family ℱ of subsets of U, and a target integer t. We are tasked with determining if there exists a collection of at most t sets from ℱ whose union equals U. The asserts that for any constant δ > 0, there exists a positive integer Δ, such that on instances where every set in the family ℱ has size at most Δ cannot be solved in 2^(1-δ)n(n) time. The combined with the classical -hardness reduction for Hamiltonian Path implies that cannot be solved in 2^o(k)(n) time <cit.>. So, it is reasonable to look for algorithms of the form c^k(n) time for and its variants. 
However, does not rule out the possibility that could be solved in 1.001^k(n) time (for example). In general, no strong lower bounds on the exact time complexity of are known. However, interesting lower bounds have been proven for stronger versions of this problem. In the problem, we are given an r-graph,[For a positive integer r, recall that an r-graph, also known as an r-uniform hypergraph, is a set of vertices together with subsets of r vertices known as hyperedges. A 2-graph is a graph in the usual sense.] and are tasked with determining if it contains a sequence of k vertices, such that any r consecutive vertices in the sequence belong to a common hyperedge. Assuming the , this generalization of requires 2^((r-2)/(r-1))k - o(k)(n) time to solve <cit.>. In the problem, we are given a graph whose vertices are colored, and are tasked with finding a path which passes through at least k vertices with distinct colors. Assuming , this generalization of requires 2^k - o(k)(n) time to solve <cit.>. § BIPARTITIONED PATH ALGORITHM In the problem, we are given a graph G on n nodes, whose vertices are partitioned into two parts V_1 and V_2, with distinguished vertices s and t, and are tasked with determining if G contains a simple path from s to t of length ℓ, which uses exactly k_1 vertices from V_1, and exactly ℓ_2 edges whose endpoints are both in V_2. Below, we include a proof that can be solved over undirected graphs in 2^k_1+ℓ_2(n) time, following the exposition from <cit.>. The idea behind this algorithm is to construct a polynomial whose monomials correspond to walks of length ℓ from s to t in G, some of whose vertices and edges are annotated with labels. These labels are used in a clever way to sieve out simple paths from this set of walks. * A walk is a sequence of vertices u_1, …, u_a+1 and edges e_1, …, e_a in G, such that for each index i, e_i is an edge between u_i and u_i+1. Given a set L, an L-labeled walk is a walk, together with a subsequence of its vertices and edges f_1, …, f_b, annotated with corresponding elements c(f_1), …, c(f_b) of L (formally, each f_j can be thought of as (u_i, i) or (e_i, i) for some index i). A vertex or edge f_j annotated with a member of L is called a labeled element, and the member c(f_j) of L associated with the labeled element is referred to as its label. Looking ahead, we will introduce a special class of labeled walks, and then construct a polynomial summing over these labeled walks. We will then argue that this polynomial is nonzero if and only if there is a solution to the problem. Define 𝒲_L to be the set of all L-labeled walks W with the following properties: * The walk W has length ℓ. * The walk W begins at vertex u_1 = s and ends at vertex u_ℓ+1 = t. * For any index i, if u_i∈ V_2 and u_i+1∈ V_1, then u_i+2≠ u_i. In other words, if the walk W leaves a vertex in V_2 for a vertex in V_1, it never “immediately returns” to the same vertex in V_2. * The walk W has exactly k_1 vertices in V_1, and ℓ_2 edges with both endpoints in V_2. * The walk W contains exactly |L| labeled elements, and all labels in the walk are distinct. Any labeled element in W is either a vertex in V_1 or an edge with both endpoints in V_2. Given a walk W∈𝒲_L, we introduce the corresponding monomial f(W) = ∏_{i=1}^{ℓ} x_{u_i, u_{i+1}} · ∏_{j=1}^{|L|} y_{f_j, c(f_j)}. Intuitively, the monomial f(W) uses the x_{u_i, u_{i+1}} variables to keep track of the edges in W and the y_{f_j, c(f_j)} variables to keep track of its labels and labeled elements. Now, set L = [k_1 + ℓ_2].
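For concreteness, here is a small worked illustration (the graph and label assignment below are our own toy example, not part of the construction): take ℓ = 3, let b ∈ V_1 and s, c, t ∈ V_2, and consider the walk W given by s, b, c, t. This walk uses k_1 = 1 vertex of V_1 (namely b) and ℓ_2 = 1 edge with both endpoints in V_2 (namely (c, t)), so L = [2]. Labeling b with 1 and the edge (c, t) with 2 gives

f(W) = x_{s,b} x_{b,c} x_{c,t} · y_{b,1} y_{(c,t),2},

while the other labeling (b receives 2 and (c, t) receives 1) gives a different monomial. Thus the two labeled copies of this simple path contribute distinct monomials and do not cancel each other; as argued below, cancellation only occurs among walks with repeat vertices.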
Define the polynomial P(x⃗,y⃗) = ∑_{W∈𝒲_L} f(W). A simple dynamic programming argument shows that we can evaluate P at a point in a given field in 2^k_1+ℓ_2(n) field operations. The dynamic programming table simply holds evaluations of polynomials which are sums of f(W) for labeled walks W which have length at most ℓ, start at s and end at some arbitrary vertex v, satisfy condition 3 above, have at most k_1 vertices in V_1, at most ℓ_2 edges with both endpoints in V_2, and use pairwise distinct labels from L. The 2^k_1+ℓ_2 factor comes from having to perform dynamic programming over all distinct subsets of L, and the (n) factor follows from keeping track of counters for the walk length, number of vertices in V_1, number of edges with both endpoints in V_2, and a constant number of vertices near the ends of the walk W. Now, take q = log(n(ℓ + |L|)) and work over the field 𝔽_2^q. Arithmetic operations over this field can be performed in Õ(q) time. Consider the polynomial P over this field. We claim that the net contribution of the monomials of walks W∈𝒲_L which are not simple paths (i.e., W has a repeat vertex) vanishes in P over this field. Indeed, let W∈𝒲_L have a repeat vertex. There are two cases to consider. *Case 1: Repeat in V_1 Suppose first that W has a repeat vertex in V_1. Let i be the smallest index such that u_i∈ V_1 occurs at least twice in W, and let j be the smallest index greater than i such that u_j = u_i. By conditions 4 and 5, and the fact that |L| = k_1 + ℓ_2, we see that u_i and u_j are both labeled elements of W, with distinct labels c_i and c_j respectively. So, consider the labeled walk W' which has the same vertex and edge sequence, and has the same label sequence as W except that u_i has label c_j and u_j has label c_i. By inspection, W'∈𝒲_L and f(W) = f(W'). Moreover, applying this label swap argument to W' recovers W. So this pairs up the walks W and W' together, and over 𝔽_2^q the contribution f(W) + f(W') will vanish. *Case 2: All Repeats in V_2 Suppose W contains no repeats in V_1. Let i be the smallest index such that u_i occurs at least twice in W. By assumption, u_i∈ V_2. Let j be the smallest index greater than i such that u_j = u_i. In this case, consider the closed walk C = W[u_i, u_j] between these first two occurrences of u_i in W. If the sequence of vertices in C is not a palindrome, we can construct the labeled walk W' formed by reversing C (both the vertices and corresponding labels) in W. By inspection, W'∈𝒲_L and f(W') = f(W). Moreover, applying this reversing procedure on W' recovers W. So this pairs up the walks W and W' together, and over 𝔽_2^q the contribution f(W) + f(W') will vanish. Otherwise, suppose that the vertices of C do form a palindrome. Then the sequence of vertices for C looks like u_i, u_{i+1}, …, u_{(i+j)/2}, …, u_{j-1}, u_j, where u_{i+p} = u_{j-p} for all p ≤ (j-i)/2 (note that j - i is even, since otherwise the palindrome condition would force two consecutive vertices of C to coincide, so the midpoint u_{(i+j)/2} is well defined). Since W has no repeat vertices in V_1, we know that u_{i+p}, u_{j-p}∈ V_2 for all p < (j-i)/2. This together with condition 3 implies that u_{(i+j)/2}∈ V_2. Thus, C has at least two edges with both endpoints in V_2. By conditions 4 and 5 together with the fact that |L| = k_1 + ℓ_2, we see that all such edges are labeled in W. So, take the edges e_i = (u_i, u_{i+1}) and e_{j-1} = (u_{j-1}, u_j) with distinct labels c_i and c_{j-1}. Since C is a palindrome, e_i = e_{j-1}. Thus, we can consider the labeled walk W' which has the same vertex and edge sequence, and has the same label sequence as W except that e_i has label c_{j-1} and e_{j-1} has label c_i. By inspection, W'∈𝒲_L and f(W) = f(W').
Moreover, applying this label swap argument to W' recovers W. So this pairs up the walks W and W' together, and over 𝔽_2^q the contribution f(W) + f(W') will vanish. Thus, in either case, we see that all elements of 𝒲_L with repeat vertices can be paired up so that their monomials have net zero contribution over 𝔽_2^q in the definition of P. On the other hand, by definition, all W∈𝒲_L with no repeat vertices (i.e., W whose vertex and edge sequences correspond to simple paths) produce distinct f(W). Thus, we see that over 𝔽_2^q, the polynomial P is nonzero if and only if there is a solution to the problem. Take a uniform random evaluation of P over 𝔽_2^q. By the Schwartz-Zippel Lemma, this evaluation is nonzero with probability at least 1 - (ℓ + |L|)/2^q = 1 - 1/n if P is nonzero (and of course is always zero if P is the zero polynomial). Thus, we can solve the problem by checking whether a uniform random evaluation of P over 𝔽_2^q is nonzero, which by the preceding discussion can be done in 2^k_1+ℓ_2(n) time.
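To make the preceding discussion concrete, the following Python sketch implements the randomized evaluation of P via the subset dynamic program. This is our own illustrative code rather than an implementation from any reference: the function and variable names are ours, and instead of choosing q = log(n(ℓ + |L|)) it fixes the field to GF(2^64), using a standard low-weight irreducible polynomial. A True answer certifies that a suitable simple path exists; a False answer is wrong with probability at most (ℓ + |L|)/2^64.

```python
import random

# GF(2^64): elements are 64-bit integers, addition is XOR, and multiplication
# is carry-less multiplication reduced modulo x^64 + x^4 + x^3 + x + 1
# (a standard low-weight irreducible polynomial for GF(2^64)).
POLY = (1 << 64) | 0b11011


def gf_mul(a, b):
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if (a >> 64) & 1:
            a ^= POLY
    return res


def rand_elem():
    return random.getrandbits(64)


def bipartitioned_path(edges, V1, V2, s, t, length, k1, l2):
    """Decide (with one-sided error) whether the undirected graph on `edges`
    has a simple s-t path of length `length` with exactly k1 vertices in V1
    and exactly l2 edges inside V2, by evaluating P at a random point."""
    V1, V2 = set(V1), set(V2)
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    L = k1 + l2                                       # number of labels
    x = {frozenset(e): rand_elem() for e in edges}    # one variable per edge
    y = {}                                            # (labeled element, label)
    for c in range(L):
        for v in V1:
            y[(v, c)] = rand_elem()
        for u, v in edges:
            if u in V2 and v in V2:
                y[(frozenset((u, v)), c)] = rand_elem()

    # DP state: (current vertex, bitmask of used labels, #V1 vertices so far,
    # forbidden next vertex or None); the forbidden vertex enforces condition 3.
    if s in V1:
        cur = {(s, 1 << c, 1, None): y[(s, c)] for c in range(L)} if k1 >= 1 else {}
    else:
        cur = {(s, 0, 0, None): 1}

    for _ in range(length):
        nxt = {}
        for (v, mask, c1, forb), val in cur.items():
            for w in adj.get(v, ()):
                if w == forb:                         # condition 3: no immediate return
                    continue
                base = gf_mul(val, x[frozenset((v, w))])
                new_forb = v if (v in V2 and w in V1) else None
                if w in V1:                           # new V1 vertex gets a fresh label
                    if c1 + 1 > k1:
                        continue
                    for c in range(L):
                        if not (mask >> c) & 1:
                            key = (w, mask | (1 << c), c1 + 1, new_forb)
                            nxt[key] = nxt.get(key, 0) ^ gf_mul(base, y[(w, c)])
                elif v in V2:                         # V2-V2 edge gets a fresh label
                    if bin(mask).count("1") - c1 + 1 > l2:
                        continue
                    vw = frozenset((v, w))
                    for c in range(L):
                        if not (mask >> c) & 1:
                            key = (w, mask | (1 << c), c1, new_forb)
                            nxt[key] = nxt.get(key, 0) ^ gf_mul(base, y[(vw, c)])
                else:                                 # V1 -> V2 step, nothing new is labeled
                    key = (w, mask, c1, new_forb)
                    nxt[key] = nxt.get(key, 0) ^ base
        cur = nxt

    total = 0
    for (v, mask, c1, _), val in cur.items():
        if v == t and mask == (1 << L) - 1 and c1 == k1:
            total ^= val
    return total != 0
```

The bitmask over L accounts for the 2^k_1+ℓ_2 factor in the running time, while the remaining state components (walk length, current vertex, the V_1 counter, and the forbidden vertex) contribute only a polynomial factor, matching the analysis above.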